Wednesday, November 30, 2016

Feed the Trolls, Tuppence a Bag

Actually, don't - really don't - feed the trolls. But if you're going to, try to collect some data from them while you're at it. If you regularly read the comments, you've likely witnessed the same thing the author of this post, Christie Aschwanden, discusses:
I’d just written a short article that began with a quote from the movie “Blazing Saddles”: “Badges? We don’t need no stinkin’ badges!” After the story published, I quickly heard from readers explaining that, actually, the quote was originally from an earlier movie, “The Treasure of the Sierra Madre.” The thing was, I’d included that information in the article.
In fact, research suggests many people share articles without actually having read them. It seems pretty likely they comment on them without fully reading either. These frustrating incidents got Aschwanden thinking - what makes people comment on an article? To answer this question, she analyzed comments on FiveThirtyEight and collected survey data from 8,500 people. My only complaint is that, though they had a really large sample to work with, their key question was open-ended, so they randomly sampled 500 to qualitatively analyze and categorize. (It would have been awesome if they could have done something with natural language processing - but I digress). Here's what they found from their main question - why people comment:


The top category was to correct an error - this might explain why so many people comment without seeming to have read the article. Either they jumped down to the comment section as soon as they read what they thought was an error (and therefore missed information later on), or they were so fixated on what they felt was an inaccuracy that they stopped really comprehending the rest of the article. They did include a similar closed-ended, multiple-response item later on that covers the full sample, and the top category was again related to "correcting an error" - people are most likely to comment when they know something on the subject that wasn't in the article (although, as demonstrated in Aschwanden's stories, sometimes that information is there):


She offers a few explanations for some of the unusual commenting behavior, including my old pal, the Dunning-Kruger effect. She also reached out to some of FiveThirtyEight's top commenters. Interesting observation (that I'm just going to throw out there before I wrap up this post, because I'm more interested in what you guys think of this): most of the survey respondents (over 70%) and all of the top commenters were men. Thoughts? Speculation on why?

A New Approach to Marketing with Data

Marketing researchers have been using data to influence the direction and sometimes the content of advertisements for a long time. But this might be the first time I've seen a company use data to directly generate ad content: Spotify has created a new global ad campaign highlighting some of its users' listening habits. The results are pretty brilliant:




They sign off many of the ads with "Thanks, 2016. It's been weird." Yes, it has.

Tuesday, November 29, 2016

In Response to Trump's Flag-Burning Tweet

I'm seeing lots of discussion on Facebook today, in response to a tweet by Trump that flag-burning should be a crime, and people convicted of flag-burning should lose their citizenship. In response, I have three quotes for you.

First, let's look back at the Bill of Rights:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances. (emphasis added)
Now, from the Supreme Court ruling in Texas v. Johnson (1989):
Under the circumstances, Johnson's burning of the flag constituted expressive conduct, permitting him to invoke the First Amendment.
Finally, in response to Trump's insistence that flag-burning could be punishable as treason, I quote once again from the Constitution, this time from Article III, Section 3:
Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort.
So, the tl;dr - flag-burning is not illegal. It's free speech, and no law can be passed to make it illegal unless the Supreme Court reverses that ruling; any new law would be struck down as unconstitutional. And even in the unlikely event that the Supreme Court reverses its ruling, any law calling flag-burning treason would still be struck down as unconstitutional, because the Constitution itself defines treason narrowly.

And that, my friends, is how it works.

On Memory and Dogs

I've been a dog person pretty much my whole life. Growing up, we always had dogs, and I can't wait to have a dog of my own. Anyone who has had a dog has probably made more than one comment about their dog's memory. There are a variety of things your dog remembers: name, home, who you are, and so on. But when something negative happened in your dog's life, you probably also reassured yourself that s/he wouldn't remember it. A new study, however, suggests dogs may remember events after all.

As a quick recap, there are different kinds of memory. Semantic memory refers to knowledge and information; an example of a semantic memory for me is knowing the different kinds of memory. Episodic memory refers to events, things that have happened in your life; for me, an example would be remembering that I've written posts about memory before. The two are obviously connected, and influence each other. A memory of an event (episodic memory) may teach you a lesson or rule for living (semantic memory). And remembering that I've written posts about memory before (episodic memory) includes remembering the content of those posts (semantic memory).

The researchers examined episodic memory in 17 dogs using an unexpected recall task. If you know you're going to be expected to recall something, you "memorize" it, meaning you commit it to semantic memory (also referred to as explicit encoding - you stored it because you knew you'd need it later). But if you don't expect that you'll have to recall the information, when you are suddenly asked to recall it, you'll draw on your episodic memory (also referred to as incidental encoding - you stored it even though you didn't expect to need it). They tested this same phenomenon in dogs using a "Do As I Do" task:
Dogs were first trained to imitate human actions on command. Next, they were trained to perform a simple training exercise (lying down), irrespective of the previously demonstrated action. This way, we substituted their expectation to be required to imitate with the expectation to be required to lie down. We then tested whether dogs recalled the demonstrated actions by unexpectedly giving them the command to imitate, instead of lying down.

They found that dogs were able to imitate even when the command was unexpected, though their success rate decreased with longer recall periods (such as asking a dog to remember something from an hour ago - a test of memory decay, the loss of a memory as time since the event increases). So the dogs were less able to imitate after a one-hour delay, but some still could.

Dogs may not be able to hold memories as long as humans can, but these results suggest that dogs can hold episodic memories: "To our knowledge, this is the first time that a non-human species shows evidence of being able to recall complex events (i.e., others’ actions) without motor practicing on them during the retention interval—thus relying on a mental representation of the action that has been formed during incidental encoding, as assessed by an unexpected test."

"Okay," you say, "my dog can remember events. So what?" George Dvorsky, over at Gizmodo, interviewed the study researchers, where they discuss that episodic memory is connected to self-awareness:
As noted, episodic memory has been linked to self-awareness, which is the ability to see oneself as an entity that’s separate and different from others. “So far no test has been successfully applied to study self-awareness in dogs,” Fugazza told Gizmodo. “We believe that our study brought us one step closer to be able to address this question.”

Monday, November 28, 2016

A Champion for Privacy

One of my favorite quotes from The Social Network is, "The Internet's not written in pencil, Mark, it's written in ink." When we post something, it's not easy to make it go away. Even if you delete the content, there are many ways your content could be around for a very long time, such as sites that archive old web pages, and downloads and screenshots by users. So what happens when someone posts something of yours - a very private photograph - for the world to see, save, and share? And more importantly, who is the champion for that person who has had their privacy violated and their intimate life shared?

Enter Carrie Goldberg, a Brooklyn attorney whose practice specializes in sexual privacy, and who is fighting against what has become known as "revenge porn." The most common example is sharing naked photos of an ex, but it can also include sharing personal contact information, publishing ads on hookup services purporting to be from the target, or recording and sharing illegal acts, such as sexual assault. Laws against these acts, known as nonconsensual porn laws, are also being used to charge people who steal private photos from strangers, as in 2014 when Ryan Collins hacked into accounts belonging to Jennifer Lawrence and other celebrities. Collins was sentenced to 18 months in prison for his crimes.

The article follows one of Goldberg's cases, but also includes lots of attention to Goldberg's stance on the issues, and her approach in working with her clients:
Goldberg tries to impress on her clients that they should not feel ashamed. I once asked her how she responds to the argument that people who value their privacy should not send naked pictures in the first place. Goldberg replied that this was judgmental and reductive. She mentioned the case of Erin Andrews, the former ESPN reporter, who was filmed, without her knowledge, by a man staying in an adjoining hotel room. “Are you just supposed to never take your clothes off?” she said. “You can’t get naked, you can’t take a shower?” She spoke of upskirting—the voyeuristic practice of taking unauthorized pictures beneath a woman’s dress. “Are you never supposed to go out in public in a skirt?” Goldberg said. “Or what about images where somebody’s face has been Photoshopped onto somebody else’s naked body? What’s getting distributed isn’t necessarily images that were consented to in the first place. That’s why it’s the distribution you have to focus on.”

Goldberg went on, “But, even if you did take a naked picture and send it to somebody, that’s not necessarily reckless behavior. That’s time-honored behavior! G.I.s going off to war used to have pics of their wife or girlfriend in a pinup pose. It’s often part of intimate communication. It can be used as a weapon, but, the fact is, almost anything can be used as a weapon.”
Legal scholar Danielle Citron, who wrote a book called Hate Crimes in Cyberspace, argues these invasions of privacy can be considered civil rights violations, because the attacks disproportionately affect women and minorities and can have long-term impacts on their personal and professional lives. In fact, in some of Goldberg's work with students who were victimized (or bullied because of the online content) at school, she files complaints with schools' civil rights offices as well as with their Title IX coordinators. Unfortunately, it seems these cases are only going to increase in frequency and severity:
And, since the election of Donald Trump, she says, she’s seen a “drastic uptick” in people seeking her firm’s help—evidence of what she worries is a “new license to be cruel.”
Please, readers, be kind to each other. And no matter how badly someone hurts you, be careful what you post online. If someone sent you private photos, it means they trusted you. Don't betray that trust.

Schiaparelli Fall Down, Go Boom

Back in October, the European Space Agency’s Schiaparelli lander was supposed to touch down gently on the surface of Mars:


Instead, it went into free fall, reaching a speed of 185 mph before crashing into the surface. And now, the ESA may know why:
As it was making its slow descent, Schiaparelli’s Inertial Measurement Unit (IMU) went about its business of calculating the lander’s rotation rate. For some reason, the IMU calculated a saturation-maximum period that persisted for one second longer than what would normally be expected at this stage. When the IMU sent this bogus information to the craft’s navigation system, it calculated a negative altitude. In other words, it thought the lander was below ground level. Ouch.

That fateful miscalculation set off a cascade of despair, triggering the premature release of the parachute and the backshell, a brief firing of the braking thrusters, and activation of the on-ground systems as if Schiaparelli had already reached the surface. This all happened while the vehicle was still two miles (3.7 km) above ground, causing a catastrophic free fall that sent the lander plummeting downward at 185 mph (300 km/h).
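Just to illustrate how a single bad altitude estimate can cascade, here's a toy sketch in R - purely my own illustration, definitely not ESA's actual flight software:

```r
# Purely illustrative (not ESA's flight software): how one corrupted altitude
# estimate can trigger the entire landing sequence early
update_descent <- function(estimated_altitude_m) {
  if (estimated_altitude_m <= 0) {
    # The guidance logic "thinks" it's at or below the surface, so it releases
    # the parachute and backshell, cuts the thrusters, and arms the ground systems
    return("landing sequence triggered")
  }
  "continue descent"
}

update_descent(3700)  # healthy estimate, still well above the surface
update_descent(-10)   # bogus negative altitude: everything fires prematurely
```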
And here's the expensive crater it left behind:


As you may recall, this mission was simply a dry run to demonstrate the technology and identify/correct any bugs. The behaviors that led to the miscalculation have been replicated in computer simulations, meaning they could be correctable in time for the next mission in 2020. A full report of the investigation into the crash, and the conditions leading up to it, is supposed to be out early next year.

Saturday, November 26, 2016

Words, Words

Just a few hours ago, I successfully completed National Novel Writing Month, reaching 50,500 words! The book isn't done yet, but I know where I'm going and what I still need to add. And I'm thrilled to have met the goal because now each contribution to the novel after this is just icing on the cake.

Because I've been away from work for almost a week, and I'm finished with my meta-analysis course, I must be going through statistics withdrawal: as soon as I finished, I was super-excited to look at some data - that data being my word count over the last month. Here's a graph of my word count by day:


And it's only going to go up from there! I definitely had days where it was difficult to find time to write, or I just wasn't feeling up to it, so I would spend that time outlining what I was going to write next, making notes about things I needed to research/fill in, and occasionally re-reading what I'd already written. It's kind of hard to tell on the graph above, so here's another way to graph the data, where good days and bad days are a bit easier to see:


So there are a few days in there where I didn't make much progress, which often occurred right after a day of great productivity. I know exactly what happened there. Sometimes I would have lots of free time and I would write as much as I could. The next day, if I wasn't feeling up to writing, I would tell myself not to worry, because I wrote a lot the day before.
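For the curious, here's roughly how both graphs can be built in R from a running word-count log. The numbers below are made up for illustration; my real counts live on the NaNoWriMo site.

```r
# Sketch: running total vs. words added per day (made-up example data)
library(ggplot2)

words <- data.frame(
  day = 1:10,
  cumulative = c(1800, 3500, 3600, 6200, 6500, 9000, 9200, 12500, 12600, 15000)
)
# Daily progress is just the day-to-day change in the running total
words$daily <- c(words$cumulative[1], diff(words$cumulative))

# Graph 1: running total by day
ggplot(words, aes(day, cumulative)) + geom_line() + labs(y = "Total words")
# Graph 2: words added each day - good days and bad days are easier to spot
ggplot(words, aes(day, daily)) + geom_bar(stat = "identity") + labs(y = "Words written that day")
```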

I would love to look back over the month at different events, moods, and so on, and look at how that related to word count. Another day perhaps. For now, I'm celebrating my victory with good beer, and hanging out with my brother and cousin. I'll be on the road tomorrow, so probably won't get back to regular posting until Monday.

Thursday, November 24, 2016

Happy Thanksgiving

Happy Thanksgiving, readers! I'm focusing my time today on family as well as squeezing in some NaNoWriMo writing. So rather than a regular blog post, I thought I'd post something funny and Thanksgiving-related. On that note, here's a video of Irish people tasting Thanksgiving food (and a Trump joke that just sneaks in there):

Wednesday, November 23, 2016

On Digital Media, Fluency, and Fact-Checking

Blogger changed its format for viewing the blogs I follow as well as for accessing my own blog.


It used to be that my blog was listed at the top, with a few buttons underneath it to add a new post, access previous posts, and view the blog itself. Underneath that was my blog list, an RSS feed of recent posts from the blogs I follow.

Now, the default screen is the "posts" view with a button to write a new post, and a button to access my reading list. I usually log on to Blogger a few times a day to look at my RSS feed, and will frequently do so even if I'm not planning to write a post. So the new format is great for people who only use Blogger for blogging, but not so great for people like me who use it to track favorite blogs.

My initial response is that I don't like it, but I know that's because it's different, and therefore it's taking me a little bit longer to access things I used to be able to access with little thought. I've blogged before about what happens when "thinking feels hard" - in cognitive psychology, we refer to the ease or difficulty of thinking as "processing fluency," and we refer to the conclusions we draw from monitoring our own thinking as "metacognition." So I'm completely aware that's the reason for my initial, knee-jerk reaction. I'd rather put the thinking into my posts themselves than into figuring out how to get to a blank post template.

Fluency can explain a lot of reactions to information. Information that is easy to read, makes us feel good, or aligns with our preconceived notions is more likely to be believed. This could be why various sites, such as Facebook, are trying to limit posts from fake news sources. And a recent study out of Stanford University offers some support for these steps:
Some 82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website, according to a Stanford University study of 7,804 students from middle school through college. The study, set for release Tuesday, is the biggest so far on how teens evaluate information they find online. Many students judged the credibility of newsy tweets based on how much detail they contained or whether a large photo was attached, rather than on the source.

More than two out of three middle-schoolers couldn’t see any valid reason to mistrust a post written by a bank executive arguing that young adults need more financial-planning help. And nearly four in 10 high-school students believed, based on the headline, that a photo of deformed daisies on a photo-sharing site provided strong evidence of toxic conditions near the Fukushima Daiichi nuclear plant in Japan, even though no source or location was given for the photo.
Obviously, a better step would be to teach critical thinking skills, so that kids can determine for themselves what information should be trusted. But, as the article points out, fewer schools have librarians who would teach students research skills, and the increase in standardized curricula and assessments to ensure students are performing at grade level means there is no longer extra class time that could be spent on media literacy and critical thinking. This places the burden of that instruction on parents, who may not be any better at recognizing fake versus legitimate news.

This is the reason for post-its on our parents' computers that simply say, "Check Snopes first."

Tuesday, November 22, 2016

It's All About Popular

Popularity gets you far. Being liked by others means people will go out of their way to please you, and try to link themselves to you in some way, so that they too can benefit. On the other hand, we distance ourselves from unpopular people to try to salvage our own self-esteem and sense of belonging. In social psychology, two concepts get at this notion of linking ourselves to the popular and successful and distancing ourselves from the unpopular and unsuccessful: basking in reflected glory (BIRGing) and cutting off reflected failure (CORFing), respectively. So what happens when the leader of the US is so unpopular?

FiveThirtyEight explored this notion of popularity with regard to the recent presidential election. It shouldn't come as a surprise, based on his treatment in the media, that Donald Trump is the least-liked presidential candidate in recent history. What is unusual is that he won despite this fact:
[I]t would be wrong to look at the 2016 election results and conclude that favorability ratings are irrelevant. Trump actually did about as well nationally as you’d expect, given his and Hillary Clinton’s favorability ratings. She was a little more popular than he was and she will probably win the national popular vote by a couple of percentage points. In state after state, people who had a favorable view of Clinton generally voted for her, and people who had a favorable view of Trump generally voted for him.

But here’s the deciding factor: The group that made the difference turned out to be people who disliked both candidates. They swung toward Trump, giving him the White House.
Gee, thanks, guys.

But now that Trump is the president-elect, he needs to get ready to govern and then, you know, actually govern. And he's doing this with a net favorability rating of -13. For comparison, Obama's post-election net favorability rating was +13. Even George W. Bush's post-election net favorability rating for his second term was positive, at +9. This last fact is important because Bush faced major struggles with enacting his agenda during his second term. People aren't motivated to help you out if they perceive you as unpopular, because they risk tarnishing their own reputation as a result.

Now, it's likely that Trump's favorability rating will improve after his inauguration, a relationship that has been observed in recent elections:


Still, even if Trump gets a 20-point bump, the highest post-inauguration bump observed in recent history, that would only get him to +7. The last time an incoming president had a rating that low was FDR entering his third term in 1941. That was a difficult year for FDR and the US: it was the year the US experienced the bombing of Pearl Harbor, which some leaders blamed on FDR. Of course, it can be argued that FDR won his fourth term because the US was at war, and people don't tend to want a leadership change during that kind of conflict.

It's unclear exactly what will happen during the Trump administration, but what we can guess from past data is that he's going to experience roadblocks, which will only get worse if/when his favorability slides even lower.

Faculty Perceptions of Online Courses

Last night, I turned in my final homework assignment for the online meta-analysis course I was taking. I'm pleased to say I've learned a lot and have improved my R coding skills, though I have a long way to go to get to the level where I want to be. I think for my next online course, I might do something in programming, so I can get the fundamentals down.

This is the fourth course I've taken through Statistics.com. The first two, which were in Item Response Theory and Rasch Analysis, respectively, helped me get my current job as a psychometrician. I'd taken some online courses previously, but this paradigm didn't really take off until after I was finished with college. Today, online courses are everywhere, and institutions offering online-only education have popped up. So it's interesting to see that Gallup just yesterday published the results of a survey of higher ed faculty on their perceptions of the quality of online courses.

The results are based on an online survey (oh, the irony - just kidding) of 1,671 faculty members from US colleges and universities, which included private, public, and for-profit institutions. While I can't complain about the size of the sample, they sent out almost 23,000 survey invitations, meaning their response rate was only about 7.3% - this is low, even for an online survey. To make the obtained sample more similar to the population they are trying to represent, responses were weighted by institutional characteristics (such as public/private, 2-year/4-year degree offerings, enrollment size, and geographic region). What this means is that if I received fewer responses from public institutions than I would expect based on their proportion in the population, I would weight each of those responses a little more than I would responses from private institutions.
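Here's a toy example of that weighting logic in R. The proportions are made up for illustration; this isn't Gallup's actual weighting scheme, which adjusted on several characteristics at once.

```r
# Post-stratification-style weight = population share / sample share for each group
pop_share    <- c(public = 0.70, private = 0.30)  # hypothetical population proportions
sample_share <- c(public = 0.55, private = 0.45)  # hypothetical proportions among respondents

weights <- pop_share / sample_share
round(weights, 2)
# Public-institution respondents are underrepresented, so each counts a bit more (~1.27);
# private-institution respondents are overrepresented, so each counts a bit less (~0.67)
```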

Overall, only 19% of higher ed faculty agree or strongly agree that online courses are similar in quality to traditional courses. About a third of respondents said they'd taught online courses before, and this experience moderated perceptions of the quality of online courses:
These faculty [who have taught online courses] are increasingly optimistic about the equality of online and in-person courses the "closer to home" the educational context; 32% agree or strongly agree that equal outcomes are achievable for online and in-person courses at any institution, and 52% agree or strongly agree this is possible for the classes they teach. Those who have taught online are four times more likely than their inexperienced peers to agree or strongly agree that equal learning outcomes can be achieved for online and in-person versions of the classes they teach (52% vs. 12%).

While faculty with online teaching experience grow increasingly positive, faculty sans online teaching experience grow increasingly skeptical about the equality of online and in-person learning outcomes the "closer to home" the educational context. Six in 10 (61%) of faculty members without online experience disagree or strongly disagree that equal online and in-person learning outcomes can be achieved at any institution, but more disagree or strongly disagree (78%) they can achieve the same outcomes for the classes they teach.
Faculty members with online teaching experience also recognized the ways online courses can improve their teaching skills, such as by forcing them to find creative ways to get students to engage with the content and make better use of multimedia content.

The full report can be found here.

Monday, November 21, 2016

The Greatest City in the World May Be Regularly Disrupted

As you hopefully know, our nation's capital is Washington, D.C. What you may not know is that before that, the capital was Philadelphia, and before that, it was New York City, which served as the capital under the Articles of Confederation (from 1785 to 1789), as well as briefly under the US Constitution (from 1789 to 1790). In fact, George Washington first took office in New York City.

New York City may, once again, become a regular host to our new president, who has announced that he will spend as much time in NYC as he is able - which people speculate means weekends. And that will likely cause many disruptions for the residents of NYC:
No city on Earth is better prepared to host a presidential visit than New York: The police department works seamlessly with the Secret Service these days, and Manhattanites are used to traffic jams. But to accommodate a more regular presidential presence, the daily routines of ordinary New Yorkers who live in, work near or commute through a five- to 10-block radius of Trump Tower will change. They will not be able to move freely; sometimes they won’t be able to move at all. Whenever a president moves, everything nearby freezes.

This past week, the Secret Service and the NYPD began to draw up a security blueprint to protect the soon-to-be-president while minimizing disruption. (Secret Service spokesman Marty Mulholland declined to comment for this story, citing the agency’s policy of not talking about protective operations.) But shielding Trump from harm is only one of many objectives. Ensuring that he can communicate with the military, world leaders, Congress and the American people at all times is just as vital, and these goals exponentially increase the number of people, objects and systems that surround a modern president.
To give an illustration, the article outlines what happened when the Obamas visited New York City for an evening on May 30, 2009:
The planning began in secret about nine days before the trip, one of the staffers involved told me. A Secret Service agent accompanied a presidential advance-team volunteer to the theater and asked the manager for four tickets — two for VIPs, two for Secret Service agents. Near an exit. “And who would these be for?” the manager asked. “We can’t tell you,” came the apologetic reply.

When the big night arrived, word had gotten out, and the NYPD shut down 44th Street for hours, snarling traffic. The Obamas arrived right on time, but a glitch in screening theatergoers through magnetometers meant that entry lines were long and slow. So the president had to wait outside in the rear while agents checked the remaining audience members for weapons.

In the end, the trip took three airplanes, three helicopters, about 100 federal security agents and dozens of police officers accruing overtime. It required secure telephones in secure rooms inside the theater and the restaurant. All for a night out.
This should be fun...

Saturday, November 19, 2016

Constellations

I'm really not a fan of winter. Among other things, I'm more likely to have migraines during that time of year. Migraines are actually phenomenologically similar to brain freeze; for this reason, when a friend who doesn't get migraines asks me what they feel like, in addition to explaining the neurological symptoms I experience (blind spots, numbness in my face and hands, and so on), I tell them it feels like brain freeze. The cold will often give me a serious headache. But my favorite thing about the coming of winter is getting to see my old friend, Orion, the first constellation I learned and the one I can identify most easily.

On the subject of constellations, I'm spending a quiet night at home with a bottle of wine, so I'm revisiting one of my favorite movies, Stargate. Which always makes me think of this meme:


The Resistance

I saw this post on Facebook today, and wanted to share. It gives the original author's name, but unfortunately no links or anything to track him down. If anyone has any info on the original author, so I can provide links to his website/social media/whatever, it would be much appreciated:

I listened as they called my President a Muslim.
I listened as they called him and his family a pack of monkeys.
I listened as they said he wasn't born here.
I watched as they blocked every single path to progress that they could.
I saw the pictures of him as Hitler.
I watched them shut down the government and hurt the entire nation twice.
I watched them turn their backs on every opportunity to open worthwhile dialogue.
I watched them say that they would not even listen to any choice for Supreme Court no matter who the nominee was.
I listened as they openly said that they will oppose him at every turn.
I watched as they did just that.
I listened.
I watched.
I paid attention.
Now, I'm being called on to be tolerant.
To move forward.
To denounce protesters.
To "Get over it."
To accept this...
I will not.
I will do my part to make sure this great American mistake becomes the embarrassing footnote of our history that it deserves to be.
I will do this as quickly as possible every chance I get.
I will do my part to limit the damage that this man can do to my country.
I will watch his every move and point out every single mistake and misdeed in a loud and proud voice.
I will let you know in a loud voice every time this man backs away from a promise he made to them.
Them. The people who voted for him.
The ones who sold their souls and prayed for him to win.
I will do this so that they never forget.
And they will hear me.
They will see it in my eyes when I look at them.
They will hear it in my voice when I talk to them.
They will know that I know who they are.
They will know that I know what they are.
Do not call for my tolerance. I've tolerated all I can.
Now it's their turn to tolerate ridicule.
Be aware, make no mistake about it, every single thing that goes wrong in our country from this day forward is now Trump's fault just as much as they thought it was Obama's.
I find it unreasonable for them to expect from me what they were entirely unwilling to give.
Jeremy Zabel
#TheResistance

Friday, November 18, 2016

Gallup to the Rescue

In reference to my comment the other day on Trump's approval rating, Gallup has already come to the rescue with some results. The surprising news is that Trump's favorable rating has increased since the election. (Really? Really really? Even after this? Or this? Um, okay...)

But his favorable rating is still lower than those of other presidents-elect:
Donald Trump's favorable rating has improved from 34% to 42% after his election as president. While a majority in the U.S. still have an unfavorable view of him, his image is the best it has been since March 2011 when 43% viewed him positively.

The last three presidents-elect had much higher favorable ratings at comparable time periods than Trump currently does. Then President-elect Barack Obama had the highest favorable rating, 68%, in November 2008. Fifty-nine percent of Americans viewed George W. Bush positively just after the Supreme Court effectively decided the 2000 election in his favor in December of that year. Bill Clinton's favorable ratings were also just shy of 60% after he won the 1992 election.

Trump's ratings lag behind those of other presidents-elect in large part because Democrats' views of him are much worse than the opposition party's supporters' ratings have been in the past. Whereas 10% of Democrats view Trump favorably, 25% of Republicans had a positive opinion of Clinton, 31% of Democrats had a positive opinion of Bush and 35% of Republicans viewed Obama favorably.
In other news, if you want to throw up in your mouth a bit and see a real-world example of asymmetric insight (where a person believes his/her ingroup knows more about the issue than the outgroup does, and knows the outgroup better than the outgroup knows itself), read this.

Relatedly, I'm planning to keep PNP going...

Thursday, November 17, 2016

If I Ever Want to Write a Comedy...

... I should just write down some of the conversations between my husband and me. So this just happened:

I walk into the apartment to find my husband in the kitchen in just boxers and a shirt.

Him: I'm so glad you're home.

Me: Oh?

Him: Yeah, I need you to run get something from the house for me. I can't go outside because I'm not wearing pants.

Me: And you can't put on pants?

Him: I could. But I don't want to.


I just got back from the house.

Clothes Make the Man (or Woman)

In season 2 of my favorite show, Buffy the Vampire Slayer, the 6th episode, "Halloween," dealt with the pandemonium that ensued when people who rented or purchased their costumes from one particular shop turned into those costumes: Willow became a ghost, Xander became a soldier, and Buffy became a terrified and helpless 18th-century maiden.


This episode had an interesting place in the canon of the show. Xander became a fighter, and Willow had her first taste of being a leader of the Scooby Gang. But more than that, the episode is a great demonstration of a psychological concept. This isn't the first time the show has displayed complex psychological phenomena (see some of my previous posts here). But the particular concept this episode displayed was something I just learned about.

Last night, after I had reached my word count for the day, I decided to take a break and read a bit more of You Are Now Less Dumb, and came to a fascinating chapter about enclothed cognition. This morning, I found the full text of the original study:
We introduce the term “enclothed cognition” to describe the systematic influence that clothes have on the wearer’s psychological processes. Providing a potentially unifying framework to integrate past findings and capture the diverse impact that clothes can have on the wearer, we propose that enclothed cognition involves the co-occurrence of two independent factors – the symbolic meaning of the clothes and the physical experience of wearing them.
Basically, wearing the clothes associated with a certain type of person, profession, etc., actually changes the way we think and behave. This effect is more than simple priming. The researchers who originated this concept, Hajo Adam and Adam Galinsky of Northwestern University, devised an interesting study to demonstrate the difference, using a white lab coat as their clothing selection. In the first study, participants were randomly assigned to wear or not wear a lab coat while completing a Stroop task, a cognitively taxing procedure that involves identifying the font color of a series of color words. In some cases, the font color and the word matched (e.g., the word "red" printed in red ink - the correct answer is red), but in other cases, the font color and the word were mismatched (e.g., the word "red" printed in blue ink - the correct answer is blue). Participants who wore a lab coat made half as many errors as participants who did not.
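If you want a feel for what a Stroop trial list looks like, here's a toy sketch in R - my own illustration, not Adam and Galinsky's actual materials:

```r
# Generate a small set of Stroop-style trials; the correct response is always
# the ink color, never the word itself
set.seed(42)
colors <- c("red", "blue", "green", "yellow")
n_trials <- 12

stroop <- data.frame(
  word = sample(colors, n_trials, replace = TRUE),  # the color word displayed
  ink  = sample(colors, n_trials, replace = TRUE)   # the font color it is printed in
)
stroop$congruent <- stroop$word == stroop$ink       # matched vs. mismatched trials
stroop$correct_response <- stroop$ink

stroop
```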

Next, they conducted two more studies to rule out mere exposure and priming as explanations of these effects. If priming were the driving factor, just thinking about doctors, scientists, or other people who wear lab coats might enhance your performance. So we would expect that people who are merely told to think about doctors, for instance, would perform as well as people who actually wore the lab coat.

In study 2, they ruled out mere exposure. They had three conditions: in two of the groups, people wore the lab coat, but half were told it was a doctor's coat and the other half that it was a painter's smock. In the third condition, a lab coat described as a doctor's coat was on the table near them while they completed the study, but they didn't put it on. All participants then completed a visual search task, which involves identifying differences between two similar pictures; for example:


In this study, people who wore what was described as a doctor's coat spotted significantly more differences than participants in either of the other two conditions. In the third study, they added one more step for the people who merely saw the doctor's coat: they had them write an essay about the meaning behind the coat, how they identify with it, and so on. This time, they found that people who wrote an essay about the doctor's coat identified significantly more differences than people who wore what they thought was a painter's smock (the priming effect), but the people who wore the doctor's coat outperformed them both (the enclothed cognition effect).

What we wear really can make us who we are.

Wednesday, November 16, 2016

Follow-Up On Polls, Probability, and the Election

About a week ago, I blogged my thoughts on what happened with the election polls, which made it look like Clinton would win the election. One explanation I had was social desirability bias:
Finally, we have the possibility that people did not respond the way they actually feel to the poll. This usually happens because of the influence of social desirability. People want to be liked, and they want to answer a question the way they feel the interviewer wants the question to be answered.

People often hold public attitudes that differ very much from their private attitudes. This occurs when they think their private attitude is undesirable and differs from the majority (a concept known as pluralistic ignorance), so they insist they believe the opposite to feel a sense of belonging in the group. In fact, they may become very vocal about their public attitude, to overcompensate and try to prevent people from figuring that their private attitude is completely different.

What this means is, people may have said they were voting for Clinton publicly, but knew they would be voting for Trump privately.
Apparently, this explanation has been offered by many (unsurprising, since it's always a concern in surveying, particularly when dealing with sensitive issues) - so many that it has become known as the "shy Trump" phenomenon. But some new analyses over at FiveThirtyEight offer some pretty compelling results suggesting that explanation is incorrect:
So if the theory is right, we would have expected to see Trump outperform his polls the most in places where he is least popular — and where the stigma against admitting support for Trump would presumably be greatest. (That stigma wouldn’t carry over to the voting booth itself, however, so it would suppress Trump’s polling numbers but not his actual results.) But actual election results indicate that the opposite happened: Trump outperformed his polls by the greatest margin in red states, where he was quite popular.

The second reason to be skeptical of the “shy” theory is that Republican Senate candidates outperformed their polls too.

Third, Trump didn’t outperform his polls with the specific group of voters who research showed were most likely to hide their support for his candidacy.

Finally, Trump’s own pollsters told us that there weren’t many shy Trump voters by Election Day. A few months before the election, internal polling showed Trump getting about 3 percentage points more support in polls conducted online or by automated voice recording than in live calls, according to David Wilkinson, data scientist for Cambridge Analytica, a data-analytics firm that conducted polling for the campaign. That suggests some Trump supporters were reluctant to reveal their true preference to a telephone interviewer. But in polls conducted just before Election Day, that 3-point gap had narrowed to just 1 or 2 points.
Full details and pretty figures at the link above. I maintain my argument that Trump is one of the most divisive candidates in history, and will likely go on to be the most divisive president in history. Though as a citizen I'm terrified, as a statistician I can't wait to take a look at approval rating data. If Trump doesn't deliver on some of his big campaign promises (e.g., the dumb-ass wall idea), he'll piss off his base. And some of his choices for staff have angered both Democrats and mainstream Republicans. At the end of it all, is anyone going to like this guy?

Football, Probability, and Two-Point Conversions

Sunday night, I did something I haven't done in a long time: I watched a football game. Specifically, the Cowboys-Steelers game. I didn't do this on purpose, and I should caveat that I didn't watch the whole game. My husband and I went out to dinner. The restaurant was crowded, but there was plenty of room in the bar, so we ate there. On the bar TV, they were showing the Cowboys-Steelers game. I started paying attention when I saw one of the teams going for a two-point conversion, and I commented that I didn't see that very often. But there's good reason for NFL teams to do more of that, according to some statistical analysis from FiveThirtyEight:
According to ESPN Stats & Information Group, there have been 1,045 two-point conversion attempts since 2001, with teams converting 501 of those tries. That’s a 47.9 percent conversion rate; given that a successful attempt yields 2 points, that means the expected value from an average 2-point try is 0.96 points.

Interestingly, that’s almost exactly what the expected value is from an extra point these days. Since the NFL moved extra-point kicks back to the 15-yard line last season, teams have a 94.4 percent success rate, which means that an extra point has an expected value of between 0.94 and 0.95 points.

This means that, all else being equal, the average team should be indifferent between going for two or kicking an extra point. Unless the game situation (i.e., late in the second half) or team composition (e.g., a bad kicker, or an offense or an opposing defense that is very good or very bad) changes the odds considerably, the decision to go for two or kick an extra point shouldn’t be controversial. In the long run, things will even out, because the expected value to the offense is essentially the same in both cases.
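Just to make that arithmetic explicit, here's the quick expected-value calculation in R, using the figures from the quote above:

```r
# Back-of-the-envelope check of FiveThirtyEight's expected-value arithmetic
two_pt_rate <- 501 / 1045        # two-point conversion rate since 2001 (~47.9%)
ev_two_pt   <- two_pt_rate * 2   # a successful try is worth 2 points -> ~0.96
xp_rate     <- 0.944             # extra-point rate since the kick moved to the 15-yard line
ev_extra_pt <- xp_rate * 1       # a made kick is worth 1 point -> ~0.94

round(c(two_point = ev_two_pt, extra_point = ev_extra_pt), 3)
```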
As an aside, I should mention that statisticians love sports. There are mounds and mounds of data online, ripe for complex modeling, and sports are one of the few topics where, after we run our models and make our predictions, we can get accurate data to see how well the model held up. Sports isn't the only situation where that is true, obviously (election polls are another example). But for many of the topics I study, where I'm trying to understand attitudes, information processing, behavior, and so on, a lot of the predictions we make based on our models are of things that aren't easily measured. We rarely get to confirm our results in the same way we can with sports. And having all that data readily available - meaning I don't have to collect it myself - makes sports research even more fun.

Now, in the case of the Cowboys-Steelers game, the two-point conversion attempts were unsuccessful. In fact, there were six attempts, none of which yielded points. FiveThirtyEight points out: "Before Sunday’s game, according to Pro-Football-Reference.com, the NFL record for combined failed 2-point tries by both teams in a game was four, set four times." So this game set a new record, and not in a good way.

But what was good about this game? Relive those moments here. My personal favorite was this gorgeous touchdown, scored by Ezekiel Elliott in the final seconds, which won the game for the Cowboys. Seriously, look at how he just glides right through:

Tuesday, November 15, 2016

There Are No Old Dogs, Just New Tricks

Throughout my adult life, I've run into many people who talk about wanting to learn some new skill: a new language, some topic of study they didn't get to in college, a new instrument, etc. When I've encouraged them to go for it, I often hear the old adage, "You can't teach an old dog new tricks." As a social psychologist with a great deal of training in behaviorism, I know that for many things, that just isn't true. It might be more difficult to learn some of these new skills in adulthood, but it's certainly not impossible. In fact, your brain is still developing into early adulthood. And the amazing phenomenon of brain plasticity (the brain changing itself) means that a variety of cognitive changes, including learning, can continue even past that stage.

A new study in Psychological Science examined training in different skills, comparing individuals from ages 11 to 33, and found that some skills are better learned in adulthood. They included three types of training: numerosity-discrimination (specifically in this study, "the ability to rapidly approximate and compare the number of items within two different sets of colored dots presented on a gray background"), relational reasoning (identifying abstract relationships between items, sort of like "one of these things is not like the other"), and face perception (identifying when two faces presented consecutively on a screen are of the same person or different people). They also measured verbal working memory with a backward digit-span test, which involved presentation of a series of digits that participants had to recall later in reverse order.

Participants completed no more than one training session per day over 20 days, through an online training platform, and were told to use an internet-enabled device other than a smartphone (so a computer or tablet). The tasks were adaptive, so that performance on the previous task determined the difficulty of the next one. To compare this training program to one you're probably familiar with, what they created is very similar to Lumosity.
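For a sense of what "adaptive" means here, this is a bare-bones staircase sketch in R - my own illustration, since the paper's exact adaptive algorithm isn't reproduced here:

```r
# Simple 1-up/1-down staircase: difficulty steps up after a correct response
# and down after an error, so it settles near the participant's ability level
run_adaptive_block <- function(n_trials = 20) {
  difficulty <- 1
  track <- numeric(n_trials)
  for (i in seq_len(n_trials)) {
    # Simulate a response: harder levels are less likely to be answered correctly
    p_correct <- 1 / (1 + exp(difficulty - 5))
    correct <- runif(1) < p_correct
    difficulty <- max(1, difficulty + if (correct) 1 else -1)
    track[i] <- difficulty
  }
  track
}

set.seed(1)
run_adaptive_block()  # difficulty climbs, then hovers near the simulated participant's limit
```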

They found that training improved performance on all three tasks (looking at the group overall, controlling for the number of training sessions the participants actually completed). But when they included age as a variable, they found some key differences. The improvements they saw in numerosity-discrimination were due mainly to the results of the adult age group; when adults were excluded, and they only looked at adolescents, the effects became non-significant. The same was also true in relational-reasoning performance. Though there was a trend toward more improvements among adults on face-perception, these differences were not significant. You can take a look at accuracy by skill, testing session, and age group below (asterisks indicate a significant difference):


Another key finding was that there was no evidence of transfer effects - that is, receiving ample training in one task had no impact on performance in a different task. This supports something psychologists have long argued, much to the chagrin of companies that create cognitive ability training programs (ahem, Lumosity): training in a cognitive skill improves your ability in that skill specifically, and doesn't cause any generalized improvement. That's not to say doing puzzles is bad for you - it's great, but it's not going to suddenly improve your overall cognitive abilities.

But the key finding from this study is not only that "old dogs" can learn new tricks, but that for some tricks, older really is better.

EDIT: I accidentally omitted the link to the study abstract. But the good news is, I discovered it's available full-text for free! Link is now included.

Monday, November 14, 2016

For Sentimental Reasons

As part of a small research project I'm hoping to undertake (stay tuned!), I did a little research into some of the sentiment analysis packages available in R. Sentiment analysis involves classifying words and phrases as positively or negatively valenced; people conducting sentiment analysis may also want to delve deeper into the precise emotions being portrayed, such as anger or joy. Sentiment analysis software often uses pre-existing lexicon databases that have already done the work of classifying words by the sentiments they portray. One example is the NRC Word-Emotion Association Lexicon, developed through crowdsourcing on Mechanical Turk; it codes words for two sentiments (negative and positive) and eight emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust).
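For a quick taste of what these packages do, here's an example using the syuzhet package's implementation of the NRC lexicon (one of several R options; tidytext is another). The sentences are just made-up examples.

```r
library(syuzhet)

sentences <- c("I am thrilled and grateful for this wonderful news",
               "The betrayal left her furious and afraid")

# Returns one row per sentence, with counts for the eight NRC emotions
# plus overall negative and positive sentiment
get_nrc_sentiment(sentences)
```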

But as I was looking into details on sentiment analysis for this project, which will include analysis of writing, I stumbled upon this article about sentiment in fiction - specifically how do commercial novels differ from literary fiction (the "greats") in terms of sentiment? They begin the article by offering a summary of what critics and great writers have to say on the subject, which is that to be considered great fiction, you need to limit sentimentality in your writing. To answer the question more fully, though, they conducted a sentiment analysis of 2,000 published novels across 10 non-mutually exclusive categories: Classics, Mystery, Literary Prize-Winners, Bestsellers, Reviewed by New York Times, Science Fiction, Most Widely Held in Libraries Published Since 1945, Young Adult Fiction, Romance, and Victorian novels:
The more a novel contained strongly positive or negative words (abominable, inept, obscene, shady, on the one hand, admirable, courageous, masterful, rapturous on the other), the higher its score. However complex literary sentimentality may be, the assumption is that in order to be sentimental at a minimum you need a sentimental vocabulary.

Below you see a comparison of the varying levels of sentimental vocabulary in each of our 10 categories. The most telling aspect of the graph is the way the novels of the nineteenth century (labeled “VIC”) represent an altogether different world in terms of sentimentality. These are the novels of Charles Dickens, Mary Shelley, Anthony Trollope, Emily Bronte, and their contemporaries. To give you an idea of what this difference is like, the average amount of sentiment vocabulary in the nineteenth-century novels accounts for just under 7% of all words in a given novel. For prizewinning fiction from the past decade, by contrast, that number is about 5.5%. This means that for a given novel of 100,000 words (about the length of Pride and Prejudice), a reader will encounter on average 1,500 more sentimental words, or about seven and half more per page. That’s an enormous difference from a reader’s perspective. In this sense, one could see our current antipathy to sentimentality as a longstanding reaction to a distinct moment in the novel’s history when emotion reigned.
The other noticeable feature about this graph is the way it is not well-sorted according to our distinctions between “high” and “low” categories (“popular” and “serious” might be another way to label them). Some popular genres like Romance, Young Adult Fiction (YA), and Science Fiction (the latter being surprising to us) do use sentimentality to a higher degree than so-called highbrow novels reviewed in the New York Times (NYT) or those that win literary prizes (PW). (Here the difference is significant though less extreme – Romances for example use about 0.75% more sentimental words than prizewinners, or roughly 3-4 more per page.) On the other hand, the most widely held novels in libraries since 1945 (POST45) use levels of sentiment on par with more popular categories like SciFi and YA fiction, just as our high-brow categories like New York Times novels and Prizewinning novels show no difference with more popular groups like the Bestsellers or the Mysteries.

In other words, up to a certain point, sentimentality does not help us distinguish between ostensibly high-cultural things and low-cultural things (or popular things and serious things).
So what is their overall message for hopeful authors like me?
Our advice to writers? Based on the available evidence, if you want to write one of the fifty most important novels in the next half-century, then by all means avoid sentimental language. But if you want to get published, sell books, be reviewed, win a prize or simply make someone happy, then emote away and just write a good novel.

Meanwhile in North Carolina

Even though Election Day was almost a week ago, in some parts of the country, election results are still up in the air. For instance, in North Carolina, Republican Governor Pat McCrory still doesn't know if he has won or lost his race against Democrat Roy Cooper:
A formal complaint that could affect North Carolina’s tight governor’s race demands 90,000 Durham County ballots be recounted by hand because tabulations were manually entered into the state election system by local officials on election night with “bleary eyes and tired hands.”

Stark says an error in machines that scan ballots caused memory cards to fail, prompting manual tabulation of results into the state system.

The original statewide count shows Democrat Roy Cooper ahead of McCrory by 5,000 votes. But tens of thousands of provisional ballots still must be examined. Counties must submit their final results by Nov. 18.
If you remember, North Carolina was the subject of controversy recently, due to the passage of House Bill 2, which requires individuals to use the public restroom that aligns with their biological sex (i.e., what is listed on their birth certificate). The law was widely criticized for a variety of reasons, and many are calling for it to be struck down. And this particular law has had a significant impact on the governor's race:
Roy Cooper, a Democrat, held a razor-thin lead on Thursday in North Carolina’s bitterly contested race for governor. If it holds, it would be a rare bright spot for his party this week, one that has much to do with Mr. Cooper’s call for repealing a state law limiting transgender bathroom access that has subjected North Carolina to a gale of international criticism, boycotts and cancellations.
At last count, Cooper is in the lead, but it's close, and McCrory has refused to concede.

Sunday, November 13, 2016

A Year of Writing

Recently, I blogged about my progress with blogging this year. Not only am I sitting on 330 posts on this blog, 251 of which are from this year (76%!), but I'm also currently participating in National Novel Writing Month. The goal is to write 50,000 words in the month of November; you can track your progress and stay in touch with writing buddies through the NaNoWriMo website, and attend "write-ins" to network with and encourage other writers.

I'm just past the halfway point in terms of word count (26,962 words) and I'm just a bit short of the halfway point plot-wise. Earlier this year, when I went to Wizard World Comic Con (see posts here and here), I attended a session for writers in which I learned about two different approaches to writing:
  • Discovery writing: Start with ideas about the plot, and just write; focus is on character development (Stephen King and George R.R. Martin are discovery writers)
  • Outline writing: Clear plan about entire novel; focus is on plot development (Orson Scott Card is an outline writer)
I've always been a discovery writer. I love sitting down in front of a blank page and just writing, seeing what words pour out. And I love how you can create a character who, rather than waiting for you to tell them what to do, tells you what they should do. Being surprised by something you're writing is one trippy experience.

The problem is I've never finished a novel before, though not for lack of ideas. I've written countless short stories and plays, and discovery writing seems to work well for those. But I realized that if I wanted a good chance of finishing my book, I was going to need a different approach. So I outlined the story, figuring out the big events that need to happen and generating some character descriptions, but I left things pretty broad within each chapter. Essentially, I had a map and a notion of where my characters should be at each mile marker (chapter), but the rest of the journey hadn't been written yet. What I did isn't really pure outline writing or pure discovery writing; it's a hybrid of the two.

Fortunately, I found a description of what I'm doing on the NaNoWriMo website. They use concepts similar to the ones I've described above, just with different terms. Discovery writers are "pantsers" - writing by "the seat of their pants." Outline writers are "planners." And people who do a combination of the two are "plantsers."

The great thing about this approach is I can keep myself on track and keep writing even if I'm feeling kind of blocked. But I still leave things open to develop and my characters have already done things that surprised me. I even created a character in a scene on-the-fly and he added a major development that impacted one of my protagonists. All totally unplanned - it just kind of happened.

Friday, November 11, 2016

Ticklish Rats

In college, I had a pet rat named Lily.


She was one of the best pets I've ever had: personable, playful, and so trustworthy that I could let her run around my apartment unsupervised, knowing she'd come when I called her. I used to play "box" with her, and sometimes I would flop her onto her back and tickle her belly. She seemed to enjoy it, though I didn't really think of it as "tickling" her so much as just playing. But some new research shows that rats really are ticklish:
To find out, they tickled young male rats in a systematic way. First, they tickled the animals on the back, then flipped them over and tickled them on the stomach. That was followed by gentle touching on the back, then front. Next, the researchers tickled the rats on their tails. Finally, they played the hand-chasing game. Each part of the routine lasted for about 10 seconds, followed by a 15-second break.

The rats responded with ultrasonic vocalizations in the range of 50 kilohertz, a pitch with a “positive emotional valence,” according to the study. That frequency is too high for humans to hear, so the researchers transposed the vocalizations to lower frequencies. (Notably, none of the tickles caused rats to make utterances in the 22-kHz range, which are considered “alarm calls,” according to the study.)

In addition to the vocalizations, the rats also reacted to tickles with spontaneous Freudensprünge jumps. These jumps resemble bunny hops, with the front legs and back legs moving in tandem.
What could make this research more adorable? The fact that they have video and audio of a rat being tickled.

Days and Days and Days

How much Netflix do you watch? Depending on how many Netflix binges you're guilty of (and I know I'm guilty of many), you may be interested in just how much Netflix the average subscriber watches in a year - a number that has increased steadily over the years:
The number of hours of Netflix the average subscriber watches has gone up steadily since 2011, at an average of 16.4% per year. In 2011, using Netflix data, we can estimate that each subscriber watched about 51 minutes of Netflix per day (about 310 hours per year). And while official Netflix data hasn't come out yet for this year, CordCutting.com estimated that for 2016, users are on track to stream 600 hours of content each, on average.

If that's true, it means that the average Netflix subscriber watches about 12 DAYS more Netflix in a year than they did in 2011!
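That "12 days" figure is easy to verify with a quick back-of-the-envelope calculation (a minimal sketch; I'm treating a "day" as a full 24 hours of streaming):

```python
# Quick check of the "12 more days of Netflix per year" claim.
hours_2011 = 310   # ~51 minutes/day in 2011, per the estimate above
hours_2016 = 600   # CordCutting.com's projection for 2016

extra_hours = hours_2016 - hours_2011
extra_days = extra_hours / 24       # a "day" here means 24 hours of streaming

print(f"{extra_hours} more hours per year = about {extra_days:.1f} more days")
# 290 extra hours, or roughly 12 full days of additional streaming per subscriber
```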
This bodes well for Netflix's decision to release more original content, which they started doing in 2013. Not surprising, considering that their Marvel Cinematic Universe contributions, as well as Stranger Things, are some of the best shows I've seen lately. Next on my list is The Crown.

What are you binge-watching at the moment?

Thursday, November 10, 2016

Making It Okay

I've been taking a break from Facebook, only logging in to post updates on some family health issues, but I've been easing my way back in. I'm still planning to be far less active for the foreseeable future. But I've seen several posts I completely agree with - that we need to stop telling people who are upset about the election that "it's going to be okay." As those posts point out, it downplays their very real terror about what the next 4 years are going to look like. Terror that I share. They also say that telling others you're going to "choose love" is an empty promise.

Despite agreeing with those posts, I'm going to say now that it's going to be okay. Not just because I will choose love - and I will; there is too much hate in the world, and the election of Trump just reminds us how very real and volatile that hate is - but because I am going to make it okay. I am going to do my part to make things okay. I will be a vocal and active opponent of the terrible policies I envision coming. I will fight for your rights, and for mine, when I believe they are being trampled on. I am no longer going to be a passive voter who tells people who I'm voting for and why in the hopes of convincing them, or, worse yet, who only discusses politics with like-minded peers, but an active participant in the political process. I'll volunteer, I'll canvass, I'll do everything in my power to ensure that Trump is a one-term president. And I realize that I want very much to help the person who defeats him. That is my goal over the next 4 years.

So, yes, I am choosing love, but love with teeth. Love that fights. Love that bleeds. Love that makes sacrifices for the greater good, because that's what love is: sacrifice. And I love this flawed country, not because of some nationalistic pride. Not because I think we're better than everyone else. Not because it's the place I just happened to be born. But because I see the beauty of it, of why it was created. I believe in the American experiment. And I want it to continue.

We are going to make it through this. But only if we work together to ensure that we make it through this.

On Polls and Probability

This election cycle, I followed election forecasts pretty closely. Since everything I was reading had Clinton defeating Trump, I, like many others, thought the election was already decided. And I don't have to tell you that I, like many others, was completely wrong.

Yesterday, a fellow psychologist I went to college with asked for some thoughts on what happened with the polls, and why they were so wrong. I offered a few thoughts, but did some more thinking about it. So what I'm writing today is a mixture of what I shared then, and what I've added since.

First of all, the election forecast I followed most closely was FiveThirtyEight's. And in his posts, Nate Silver pointed out that the difference in proportions voting for Clinton versus Trump was approximately equal to typical polling error (how much difference we usually see between polling data and the actual results). To explain, when we poll people, we randomly select a sample, and often will also do some stratification to make sure that the sample we get represents the population in terms of key demographic characteristics (things like race, ethnicity, or age group). (For polls where representing the population is very important, such as this one, that definitely occurred.) We do this to try to ensure that, because the people we sample are similar to the population in these characteristics, they're also (hopefully) similar to the population in terms of the thing we are measuring. But we have no way of knowing if that's true, and it is always possible that we'll end up with a sample that is biased just by random chance. Just as you could flip a coin and end up with 5 heads in a row - it doesn't represent the most probable distribution of coin flips, but it can happen just by chance. So you could have all the right controls in place and still end up with a sample that doesn't represent the population. Sadly, that's how probability works.
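To make that "biased just by random chance" point concrete, here's a minimal simulation (my own illustration, not FiveThirtyEight's model): assume the electorate is split exactly 50/50 and repeatedly draw perfectly random samples of 1,000 voters.

```python
import random

# Simulate pure sampling error: even perfectly random polls of 1,000 people
# will miss the true 50/50 split by a little, and occasionally by a lot.
random.seed(538)
TRUE_SUPPORT = 0.50   # assumed true share of the electorate for one candidate
N = 1_000             # typical poll sample size
POLLS = 10_000        # number of simulated polls

estimates = []
for _ in range(POLLS):
    votes = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    estimates.append(votes / N)

errors = [abs(est - TRUE_SUPPORT) for est in estimates]
avg_error = sum(errors) / len(errors)
big_miss = sum(e >= 0.03 for e in errors) / len(errors)

print(f"Average miss: {avg_error * 100:.1f} points")      # ~1.3 points
print(f"Polls off by 3+ points: {big_miss * 100:.1f}%")   # roughly 5-6% of polls
```

And that's with no methodological problems at all - every simulated poll here was drawn perfectly at random.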

Another possibility has to do with the tenuous connection between behaviors and behavioral intentions. A behavioral intention is what you plan to do, and the behavior is what you actually do. Though there's a correlation between the two, it isn't one-to-one. Anyone who has intended to go to the gym one day, only to end up binge-watching Game of Thrones instead, knows all too well about the imperfect connection between intentions and behaviors. You may have even convinced yourself that you'll be going to the gym, right up to the point that you realize it's too late for that. The problem is, when we do opinion polling, we are measuring intention, rather than behavior. We're asking people how they will vote. They may intend to vote one way, but can (and do) change their mind right up to the moment they actually carry out the behavior. It's possible that a proportion of respondents had a mismatch between their intention (vote for Clinton) and the actual behavior (vote for Trump). This usually happens when the intention wasn't that strong to begin with, which isn't fully captured in forced choice polling.

So to recap, the problem could be sampling error, or people changed their minds. These are issues we have to deal with all the time in research. The next two possibilities I have are also likely, and occur when the poll itself influences how people respond.

First of all, people who respond to surveys and polls tend to be different from people who do not. This is why we include things like incentives and reminders, to try to maximize the chance that people who don't usually respond to these sorts of things will decide to respond to this particular one. But when the people who respond are systematically different from the people who don't, we call that selection bias. The conservative party, and Trump himself, have offered many criticisms of the "liberal media." Some of the big polls were performed by or in collaboration with large media organizations, and other polls may have been perceived as coming from "the media." If a group has already been primed to distrust a particular source, its members will probably be much less likely to respond when that source invites them to participate in a poll. The people who self-selected to participate may have been those who didn't perceive the polling organization as part of the "liberal media," or who trusted those organizations. So the nature of the poll itself could have influenced who responded.
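Here's a toy example of how that kind of differential nonresponse can skew a poll even when the sampling itself is fine (the response rates below are invented purely for illustration): suppose the electorate is truly split 50/50, but one candidate's supporters are a bit less likely to answer.

```python
# Toy illustration of selection bias from differential nonresponse.
# All numbers here are invented for illustration, not real response rates.
true_split = {"Clinton": 0.50, "Trump": 0.50}
response_rate = {"Clinton": 0.10, "Trump": 0.08}   # Trump supporters respond a bit less often

# Each group's share of the completed responses:
weights = {cand: true_split[cand] * response_rate[cand] for cand in true_split}
total = sum(weights.values())
observed = {cand: w / total for cand, w in weights.items()}

for cand, share in observed.items():
    print(f"{cand}: {share:.1%} in the poll (true support: {true_split[cand]:.0%})")
# Output: roughly 55.6% vs. 44.4% - a double-digit margin out of a dead-even race.
```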

Finally, there's the possibility that people did not respond to the poll the way they actually feel. This usually happens because of social desirability: people want to be liked, and they want to answer a question the way they think the interviewer wants it answered. Maybe the respondents who really did want to vote for Trump were embarrassed and didn't want to admit that to the interviewer, especially if they thought the interviewer was likely to be liberal (see just above).

People often hold public attitudes that differ greatly from their private attitudes. This occurs when they think their private attitude is undesirable and differs from the majority (a concept known as pluralistic ignorance), so they insist they believe the opposite in order to feel a sense of belonging in the group. In fact, they may become very vocal about their public attitude, to overcompensate and keep people from figuring out that their private attitude is completely different.

Need I say more?
What this means is, people may have said publicly that they were voting for Clinton, while knowing privately that they would be voting for Trump. And this effect may have been stronger in this particular election. Every election cycle there are people who strongly criticize one of the candidates, but this time it felt like many more people and organizations were joining in than usual. I remember at one point looking at all the negative attention Trump was getting and worrying that it could actually help him in some way, making him more of an underdog (and everyone loves an underdog). But it didn't occur to me that the more likely effect was that the negative attention might cause people to mask their true feelings about him, leading to inaccurate polls.

There are ways to minimize socially desirable responding, such as careful question wording and reminding participants that their answers will be kept confidential. But socially desirable responding is always possible, despite these controls.

Obviously, the items on this list are not mutually exclusive. It could have been a combination of any or all of the above.

Wednesday, November 9, 2016

Humans' Unique Ability to Rationalize

One of the regular web comics I read, Saturday Morning Breakfast Cereal, posted this cartoon today:


I think the thing I appreciate about SMBC is that it takes complex topics, often psychological in nature, and presents them in a concise, funny way. And it's true, humans have a unique ability (unique as far as we know, anyway) to explain their own behavior. And countless psychological studies have demonstrated those explanations are often wrong.

One of the criticisms of behaviorism is that it oversimplifies behavior: though we respond to rewards and punishments, our ability to think through those contingencies alters our response. And yet, again and again, researchers have taken complex human behaviors and shaped something like them in animals using only reinforcement and punishment - the superstitious pigeons, for instance. This is why many behaviorists argue that factoring in rationalization adds unnecessary complexity, and that the parsimonious approach is to discount it completely. This was the radical approach Skinner took, and the reason he referred to cognitive psychology as the creationism of psychology.

As I've said before, I don't take Skinner's extreme approach, and I do believe that cognition is important in understanding human behavior. At the same time, humans are really bad at knowing what causes them to behave as they do. They seek out things they believe will make them happy, not realizing that their happiness is unlikely to be permanently affected by any one thing, and that they're terrible at knowing what will actually make them happy in the first place. They make poor decisions, then offer convoluted explanations that fit those decisions into a neat, coherent narrative, even if it means changing the story to contradict reality. They will even go so far as to change their own memories to make things fit their explanation. We feel emotions, often without knowing why, and ascribe them to whoever or whatever is around at that moment. And though we do these things ourselves, we're quick to recognize when others are thinking or behaving irrationally.

This is the reason I have to resist the urge to argue with others, vehemently, when they tell me psychology is just "common sense." Because as psychology has been demonstrating almost since the beginning, common sense is really not all that common or sensical.

The Day After is Darker

It's interesting (annoying? frustrating?) how your brain can sometimes try to give you exactly the dreams it thinks you need to be happy, when the result is almost always the opposite. I dreamed about the election. The first dream was about examining statistical models, working with data, determining the outcome. I woke up and had a lot of trouble getting back to sleep. When I finally did, I dreamed that I was still watching the election returns, except the states kept turning blue, one after another. At the end of it, Clinton had 307 electoral votes (that was the exact number that appeared in my dream). Again I woke up (around 5:45am), got out of bed, and checked the results. And read the most disturbing one-sentence horror story ever written: Trump won the presidency.

I was not able to get back to sleep, despite how exhausted I felt.

America, we created this monster. And then we elected him. I hope you're happy. I, however, am terrified. Not just because of what I think his policies will be like (dreadful) or how his speeches will sound (bigly word salads). I'm terrified for my Muslim friends. I'm terrified for my LGBT friends. I'm terrified for women's reproductive rights. I'm terrified that a man who launches into a Twitter battle at the slightest criticism will have access to the nuclear launch codes. I'm terrified that this gives other countries one more reason to hate us. I'm terrified that when Trump is talking to the leader of a country that would rather just bomb us, he'll use his brutish, self-absorbed tactics over diplomacy.

I'm terrified he doesn't even know what diplomacy is.

I'm terrified.

Tuesday, November 8, 2016

This is Your Brain on Loneliness

During Blogging A to Z, I talked about parasocial relationships, which refers to our tendency to form social connections with people we haven't met but feel we know, such as through the media. Humans are social creatures, and feeling a sense of connection to others is important. In the absence of real connections, we may form false connections to stay mentally healthy and feel a sense of belonging.

It seems that we not only form connections with other people; we can also form them with inanimate objects. In fact, when we feel lonely, we're more likely to anthropomorphize - ascribe human qualities to nonhuman entities. According to a study by Epley, Waytz, Akalis, and Cacioppo, this extends to everyday objects like an alarm clock, battery charger, air purifier, and pillow.


Bartz, Tchalova, and Fenerci replicated and extended this study. Participants completed a series of questionnaires on loneliness, self-esteem, and sense of belonging. The researchers also randomly assigned people to think about a certain type of relationship before completing the study:
Participants then underwent a manipulation in which they thought of a close, supportive relationship (the close-relationship-prime condition) or a more casual relationship (the control-relationship-prime condition). This was followed by an animacy-perception task (the procedure and results for this task are not reported here) and the anthropomorphizing task.

In the close-relationship-prime condition, we used a well-validated priming procedure from the attachment literature (Baldwin, Keelan, Fehr, Enns, & Koh-Rangarajoo, 1996; Bartz & Lydon, 2004). Participants were asked to recall an “important” and “meaningful” relationship.

Participants were then asked to list six traits that described the person they had in mind. To augment the priming effects, we had participants undergo a brief guided visualization in which they were asked to get a visual image of the person in their minds and remember a time they were actually with the person; prompts included: “What would he or she say to you?” “What would you say in return?” “How do you feel when you are with this person?” and “How would you feel if they were here with you now?” Finally, participants were asked to write a few sentences about their thoughts and feelings regarding themselves in relation to this person.

The control-relationship-prime condition was very similar to the close-relationship-prime condition; however, rather than thinking of a close, supportive other, participants were instructed to recall an acquaintance, defined as “a person you know casually but not someone you would consider to be a close friend . . . someone you sometimes interact with but not someone you know particularly well or someone that you would confide in or turn to for help.”
They found that people who were high in loneliness were more likely to anthropomorphize inanimate objects and pets. But people who were reminded of a close relationship before this task were less likely to anthropomorphize. The authors go on to explain why this finding is important:
This observation provides experimental evidence that anthropomorphism arises from an unmet need for social connection (and when people meet this need, anthropomorphism can be reduced). Although anthropomorphism is one of the more creative ways people try to meet belonging needs, it is nevertheless difficult to have a relationship with an inanimate object. Reliance on such a compensatory strategy could permit disconnected people to delay the riskier—but potentially more rewarding—steps of forging new relationships with real people. These findings highlight a simple strategy that could help get lonely people on the road to reconnection.