After you click "Create File," it will take a while to compile - you'll get an email when it's ready. You'll need to reenter your password when you go to download the file.
The result is a Zip file, which contains folders for Posts, Photos, and Videos. Posts includes your own posts (on your and others' timelines) as well as posts from others on your timeline. And, of course, the file needed a bit of cleaning. Here's what I did.
Since the post data is a JSON file, I need the jsonlite package to read it.
setwd("C:/Users/slocatelli/Downloads/facebook-saralocatelli35/posts") library(jsonlite) FBposts <- fromJSON("your_posts.json")
This creates a large list object, with my data in a data frame. So as I did with the Taylor Swift albums, I can pull out that data frame.
myposts <- FBposts$status_updates
The resulting data frame has 5 columns: timestamp, which is in UNIX format; attachments, any photos, videos, URLs, or Facebook events attached to the post; title, which always starts with the author of the post (you or your friend who posted on your timeline) followed by the type of post; data, the text of the post; and tags, the people you tagged in the post.
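To confirm those columns and their types, a quick base-R check (just a convenience, not a step from the original cleaning):

# show the top-level structure of the posts data frame
str(myposts, max.level = 1)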
First, I converted the timestamp to datetime, using the anytime package.
library(anytime)
myposts$timestamp <- anytime(myposts$timestamp)
Next, I wanted to pull out post author, so that I could easily filter the data frame to only use my own posts.
library(tidyverse)
myposts$author <- word(string = myposts$title, start = 1, end = 2, sep = fixed(" "))
Finally, I was interested in extracting URLs I shared (mostly from YouTube or my own blog) and the text of my posts, which I did with some regular expression functions and some help from Stack Overflow (here and here).
url_pattern <- "http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+"
myposts$links <- str_extract(myposts$attachments, url_pattern)

library(qdapRegex)
myposts$posttext <- myposts$data %>% rm_between('"','"',extract = TRUE)
There's more cleaning I could do, but this gets me a data frame I could use for some text analysis. Let's look at my most frequent words.
myposts$posttext <- as.character(myposts$posttext)

library(tidytext)
mypost_text <- myposts %>%
  unnest_tokens(word, posttext) %>%
  anti_join(stop_words)
counts <- mypost_text %>%
  filter(author == "Sara Locatelli") %>%
  drop_na(word) %>%
  count(word, sort = TRUE)
counts
## # A tibble: 9,753 x 2
##    word         n
##    <chr>    <int>
##  1 happy     4702
##  2 birthday  4643
##  3 today's    666
##  4 song       648
##  5 head       636
##  6 day        337
##  7 post       321
##  8 009f       287
##  9 ð          287
## 10 008e       266
## # ... with 9,743 more rows
These data include all my posts, including the times I wrote "Happy birthday" on others' timelines. I also frequently post the song in my head when I wake up in the morning (over 600 times, it seems). If I wanted to count happy or song only outside of those posts, I'd need to filter them out in a previous step (see the sketch after the next code chunk). There are also some strange characters that I want to clean from the data before I do anything else with them. I can easily drop words made up only of numbers by keeping words that contain letters with str_detect, but cells that mix numbers and letters, such as "008e", won't be caught by that filter, so I'll remove them separately.
drop_nums <- c("008a","008e","009a","009c","009f")
counts <- counts %>%
  filter(str_detect(word, "[a-z]+"),
         !word %in% drop_nums)
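As a sketch of that earlier filtering step: to drop the birthday wishes and head-song posts before tokenizing, one possible approach is to filter posttext on those phrases. The patterns here are assumptions about how those posts are worded, not taken from the original data.

# a minimal sketch, assuming birthday and head-song posts can be
# identified by these (hypothetical) phrases; coalesce() keeps posts
# with missing text from being dropped by the NA comparisons
my_own <- myposts %>%
  filter(author == "Sara Locatelli",
         !str_detect(coalesce(posttext, ""), fixed("Happy birthday", ignore_case = TRUE)),
         !str_detect(coalesce(posttext, ""), fixed("song in my head", ignore_case = TRUE)))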
Now I could, for instance, create a word cloud.
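The cloud itself was generated as an image; a minimal sketch of the code, assuming the wordcloud package and the counts data frame from above, might look like this:

library(wordcloud)
# draw the 100 most frequent words, sized by their counts
counts %>%
  with(wordcloud(word, n, max.words = 100))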
In addition to posting for birthdays and head songs, I talk a lot about statistics, data, analysis, and my blog. I also post about beer, concerts, friends, books, and Chicago. Let's see what happens if I mix some sentiment analysis into my word cloud.
library(reshape2)
counts %>%
  inner_join(get_sentiments("bing")) %>%
  acast(word ~ sentiment, value.var = "n", fill = 0) %>%
  comparison.cloud(colors = c("red", "blue"), max.words = 100)
Once again, a few words are likely being misclassified: regression and plot are both negatively valenced in the bing lexicon, but I imagine I'm using them in the statistical sense rather than the negative sense. I also apparently use "died" or "die", though I suspect in contexts like "I died laughing at this." And "happy" is huge, because it includes birthday wishes as well as times I talk about happiness. Some additional cleaning and exploration of the data is certainly needed (one possible approach is sketched below). But that's enough to get started with this huge example of "me-search."
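One way to handle those misclassifications (a sketch, not what this post did) is to anti_join a hand-picked list of domain words before joining the sentiment lexicon:

# hypothetical list of words whose bing sentiment doesn't match my usage
domain_words <- tibble(word = c("regression", "plot", "die", "died"))

counts %>%
  anti_join(domain_words, by = "word") %>%
  inner_join(get_sentiments("bing")) %>%
  acast(word ~ sentiment, value.var = "n", fill = 0) %>%
  comparison.cloud(colors = c("red", "blue"), max.words = 100)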
Great post! I'm currently trying to do the same with private messages and would love to see another post on how to extract those.
Great... but for French users, the encoding of the Facebook JSON posts file seems problematic for fromJSON...
Oh no! I wonder if treating it as text might help deal with the different encoding and characters. The readtext package has a way of reading JSON: https://cran.r-project.org/web/packages/readtext/vignettes/readtext_vignette.html There's another JSON library for R, rjson, that might help: https://cran.r-project.org/web/packages/rjson/index.html
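A sketch of the read-as-text idea, untested against the French export, so treat it as an assumption:

# read the file as UTF-8 text first, then hand the string to the parser
raw <- readLines("your_posts.json", encoding = "UTF-8", warn = FALSE)
FBposts <- jsonlite::fromJSON(paste(raw, collapse = ""))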
Hi, I think this is really cool. Thanks for the comprehensive code, which I have used on my own FB data. However, in the last part on sentiment analysis of positive and negative words, the words don't seem to be mine.
Really great post. It gives me a complete analysis of my own Facebook data. Thanks for the post.
ReplyDeleteCan I do the similar sort of analysis like we can do with Twitter Data