Monday, December 18, 2017

Statistics Sunday: Mediation versus Moderation

I had a wonderful but very busy weekend, performing Händel's Messiah twice. Unfortunately, this means I didn't have a chance to sit down and write my Statistics Sunday post until, well, Monday. But hey, the holidays are coming soon, many of my university friends are wrapping up their semesters, and a lot of my coworkers are off this week because their kids are home. So it's kind of virtual Sunday, right?

Today, I wanted to write about two misunderstood concepts: mediation and moderation. Both deal with relationships among 3 (or more) variables, but they tell you very different things and are tested in different ways.

I've blogged before about mediation. Mediation can be thought of as another term for "caused by" or "explained by." You have mediation when the relationship between your independent and dependent variables is caused by or explained by their relationships with a third variable. Specifically, it means your independent variable causes the mediator, which in turn causes the dependent variable. It's like a chain reaction. (Note that you also need to have specific methods to get at this notion of cause, so I'm using these terms more loosely than I should be. But when introducing the concept of mediation, I find it easiest to frame it in terms of cause.)

There are two main ways to test mediation. One is through 3 linear regressions: 1) the effect of the independent variable on the dependent variable, 2) the effect of the independent variable on the mediator, and 3) the effect of both the independent variable and the mediator on the dependent variable. If you observe the following:

  1. Independent variable has a significant effect on the dependent variable (equation 1)
  2. Independent variable has a significant effect on the mediator (equation 2)
  3. Independent variable no longer has a significant effect on the dependent variable (or its effect is substantially reduced, which would suggest partial rather than full mediation), but the mediator has a significant effect on the dependent variable (equation 3)

you have evidence of mediation. Fortunately, you don't have to just eyeball your regression results. You would use the results of these regressions to conduct a Sobel test: check out this great website and online calculator to help with understanding and testing mediation.
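
If you'd rather run the whole thing yourself, here's a minimal sketch in R (my program of choice) of those three regressions and the Sobel test. It assumes a data frame called dat with columns iv, med, and dv - all hypothetical names for illustration:

```r
# Three-regression approach to mediation, assuming a data frame dat with
# columns iv (independent variable), med (mediator), and dv (dependent variable)
m1 <- lm(dv ~ iv, data = dat)        # Equation 1: IV -> DV (total effect)
m2 <- lm(med ~ iv, data = dat)       # Equation 2: IV -> mediator (a path)
m3 <- lm(dv ~ iv + med, data = dat)  # Equation 3: IV and mediator -> DV (b path)

# Pull the coefficients and standard errors needed for the Sobel test
a  <- coef(summary(m2))["iv", "Estimate"]
sa <- coef(summary(m2))["iv", "Std. Error"]
b  <- coef(summary(m3))["med", "Estimate"]
sb <- coef(summary(m3))["med", "Std. Error"]

# Sobel z: the indirect effect (a*b) divided by its approximate standard error
sobel_z <- (a * b) / sqrt(b^2 * sa^2 + a^2 * sb^2)
sobel_p <- 2 * pnorm(-abs(sobel_z))  # two-tailed p-value
```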

The other way to test mediation is structural equation modeling. This would work for simple mediations, like the one described above, but is probably more useful when testing complex mediation - for instance, when you have multiple mediators in your chain reaction.
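
For the SEM route, here's a rough sketch of the same simple mediation using the lavaan package in R - just one option among many SEM tools, and again using the hypothetical variable names from above:

```r
# A simple mediation fit as a structural equation model with lavaan.
# The := operator defines the indirect effect so it can be tested directly.
library(lavaan)

med_model <- '
  med ~ a * iv             # a path: independent variable -> mediator
  dv  ~ b * med + cp * iv  # b path, plus the direct effect (c-prime)
  indirect := a * b        # the indirect (mediated) effect
'
fit <- sem(med_model, data = dat)
summary(fit)
```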

Moderation, on the other hand, is another term for "depends on." That is, the precise impact your independent variable has on your dependent variable depends on where you fall on the moderator. When I used to teach research methods, I'd often have students discuss what effect they thought a certain independent variable would have on a dependent variable.

One example I used was divorce: what impact did they think divorce would have on a child's well-being? (I have to thank a past student for suggesting this topic, since they thought it was something most people have encountered - either directly, because their parents are divorced, or indirectly, because friends' parents might be.) Partway through the discussion, I would ask them what they thought that impact depended on - what might change it? They always had lots of ideas. It might depend on age: divorce could have a stronger impact on younger children and less of an impact on high school or college-aged children. It might depend on whether the child has siblings: they thought it would be harder on an only child. As the list grew, I would explain that these were moderators. And we would phrase each one as, for example, the effect of divorce on a child's well-being depends on the child's age.

Moderation is tested with interactions, which you can examine with either a factorial ANOVA or multiple regression with interaction terms. I usually use regression, because it gives the same results as an ANOVA when all of your variables are discrete, and it can also handle continuous variables, which ANOVA cannot. If you go the regression route, I highly recommend the book by Aiken and West, Multiple Regression: Testing and Interpreting Interactions - kind of the bible on interactions in multiple regression.
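
To make that concrete, here's what the regression version looks like in R, again with hypothetical variable names (dv, iv, and a moderator mod):

```r
# Testing moderation with an interaction term in multiple regression.
# Aiken and West recommend mean-centering continuous predictors first,
# which makes the lower-order coefficients easier to interpret.
dat$iv_c  <- as.numeric(scale(dat$iv,  scale = FALSE))
dat$mod_c <- as.numeric(scale(dat$mod, scale = FALSE))

# iv_c * mod_c expands to iv_c + mod_c + iv_c:mod_c
mod_fit <- lm(dv ~ iv_c * mod_c, data = dat)
summary(mod_fit)  # a significant iv_c:mod_c coefficient is evidence of moderation
```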

So, as you can (hopefully) see, moderation and mediation reflect different kinds of relationships. (And if this explanation is unclear or you still have questions, please share them in the comments!) And because these are different kinds of relationships, there are situations where you could test both. Yes, crazy as it sounds, there are such things as moderated mediation and mediated moderation. A post for another day!

Friday, December 15, 2017

The Power of the Human Voice

Human beings are drawn to the sound of human voices. It's why overhearing half of a conversation can be so distracting. It's why DJs will talk over the intro of the song, but make sure they stop before the singer comes in. It's why Deke Sharon and Dylan Bell, two a cappella arrangers, recommend arrangements be kept short (less than 4 minutes).

And new research shows yet another way a human voice can have a powerful impact - it keeps us from dehumanizing someone we disagree with:
[F]ailing to infer that another person has mental capacities similar to one’s own is the essence of dehumanization—that is, representing others as having a diminished capacity to either think or feel, as being more like an animal or an object than like a fully developed human being. Instead of attributing disagreement to different ways of thinking about the same problem, people may attribute disagreement to the other person’s inability to think reasonably about the problem. [W]e suggest that a person’s voice, through speech, provides cues to the presence of thinking and feeling, such that hearing what a person has to say will make him or her appear more humanlike than reading what that person has to say.
They conducted four experiments to test their hypothesis: that dehumanization is less likely to occur when we hear a person speaking their thoughts rather than simply reading them. It wasn't even necessary to see the person doing the talking - that is, video plus audio versus audio alone did not result in reliably different evaluations. The authors conclude:
On a practical level, our work suggests that giving the opposition a voice, not just figuratively in terms of language, but also literally in terms of an actual human voice, may enable partisans to recognize a difference in beliefs between two minds without denigrating the minds of the opposition. Modern technology is rapidly changing the media through which people interact, enabling interactions between people around the globe and across ideological divides who might otherwise never interact. These interactions, however, are increasingly taking place over text-based media that may not be optimally designed to achieve a user’s goals. Individuals should choose the context of their interactions wisely. If mutual appreciation and understanding of the mind of another person is the goal of social interaction, then it may be best for the person’s voice to be heard.

This research inspires some interesting questions. For instance, what about computer-generated voices? We know we're getting better at generating realistic voices, but what is the impact when you know the voice is generated by a machine and not another human being? Also, the researchers admit that they couldn't test the impact of visual and audio cues separately. But what if you had an additional condition where you see the person, but their words are displayed as captions instead?

What are your thoughts on this issue? And where would you like to see this research go in the future?

Concert Weekend is Almost Here

Tomorrow and Sunday, I'll be performing Händel's Messiah for the 25th and 26th time with my choir, the Apollo Chorus of Chicago. We're getting some great attention in anticipation of our concerts:
You can learn a lot more about Händel and Messiah at a pre-concert talk before Sunday's performance.

And you just might leave the performance happier than when you went in: in psychological research on the effect of mood, we usually play clips of music that reliably put people in either a good or bad mood. One frequently used song to put people in a good mood comes from Händel's Messiah - the Pastoral Symphony, also known as the Pifa, which sets the scene of the shepherds in the field who are about to be visited by angels.

Thursday, December 14, 2017

Statistical Sins: Not Double-Checking Results

In a previous Statistical Sins post, I talked about the importance of knowing one's variables. Knowing the range and source of your variables is necessary to make sure you're using the correct variables in your results. This is an important step in quality control, and really should be done first, prior to running analyses.

But good quality control shouldn't stop there. Results should be double-checked, and compared to each other, to make sure it all makes sense. This sounds painfully obvious, but unfortunately, this step is skipped too often. For instance, check out the results of the Verge technology survey, and specifically one of the glaring issues pointed out by Kaiser Fung on Junk Charts:
Last time, I discussed one of the stacked bar charts about how much users like or dislike specific brands such as Facebook and Twitter. Today, I look at the very first chart in the article.

This chart supposedly says users trust Amazon the most among those technology brands, just about the same level as customers trust their bank.

The problems of this chart jump out if we place it side by side with the chart I discussed last time.

Now, the two charts use different data - the first chart is a "trust" rating scale, while the second is a "like" rating scale. But notice that in the first chart, yellow is said to stand for "No opinion or don't use," while in the second chart, that category is reflected in gray. It seems highly unlikely that people have an opinion on liking something but not trusting that same institution. The two scales would likely be highly correlated with each other. Also, the chart on the left is missing the "somewhat" category, making the rating scale asymmetrical.

What probably happened is that the "no opinion" category was inadvertently dropped from the chart on the left - a mistake that could (and should) have been caught with a thorough review of the results.

I remember getting ready for a presentation once, and going over my slides when I noticed my standard deviations made no sense - they were too small. Cue a mini-panic attack, since I was presenting in 15 or so minutes at that point. I pulled out the printout of my results and noticed I'd accidentally used standard error instead of standard deviation. Fortunately, the room I was presenting in was not being used, and I was able to use the computer to pull up my file and change the values in my tables.
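
For anyone who hasn't made this particular mistake yet, the two are easy to confuse because one is just a scaled version of the other. A quick sketch in R with made-up data:

```r
# The standard error is the standard deviation divided by the square root
# of the sample size, so it's always the smaller of the two.
set.seed(42)
x <- rnorm(100, mean = 50, sd = 10)  # a made-up sample of 100 scores

sd(x)                    # standard deviation: spread of the individual scores
sd(x) / sqrt(length(x))  # standard error: precision of the sample mean
```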

When I first started working as a psychometrician, I was introduced to a very involved quality control process - including having two people start with the same raw data and independently go through the whole process of cleaning, creating new variables, and analyzing the results, preferably with different analysis programs. Since R was my program of choice, I would usually use that, while my counterpart in quality control would often use SAS or SPSS.

Mistakes happen. This is one reason we have errata published in journals. And online articles can be easily corrected. The Verge would probably do well to fix some of these mistakes.

Wednesday, December 13, 2017

Harry Potter and the Gloriously Unhinged Story

Via Mashable, Botnik Studios, a creative community, just gave us a new Harry Potter chapter, written using a predictive algorithm trained on the seven Harry Potter books.

And it's hilarious. Here are a few excerpts:
"What about Ron magic?" offered Ron. To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.

The password was "BEEF WOMEN," Hermione cried.

"Voldemort, you're a very bad and mean wizard," Harry savagely said. Hermione nodded encouragingly. The tall Death Eater was wearing a shirt that said, 'Hermione Has Forgotten How To Dance,' so Hermione dipped his face in mud.

The pig of Hufflepuff pulsed like a large bullfrog. Dumbledore smiled at it, and placed his hand on its head: "You are Hagrid now."

Tuesday, December 12, 2017

Roy Moore's Interview

Each day, we're hearing of more men and women coming forward to talk about inappropriate behavior by some of the most powerful men in the country. And while in many cases those accusations are being taken seriously, in one instance the reaction is just getting more and more tone-deaf. (Or perhaps I should say "Moore and Moore tone-deaf.")

In a move that I was absolutely certain was satire when I first heard about it, Roy Moore sat down with 12-year-old Millie March for an interview. The interview was arranged by a pro-Trump group created by former Breitbart staffers. The goal of the move, presumably, is to show that Moore can be in the same room as a child and not be creepy or assault her, right?

Dear god, where to begin on this one? Sure, Moore is on his best behavior when the cameras are rolling. But the issue brought forward with all of these accusations is a penchant for these powerful men to treat women like objects, to use them as means to an end. Is that any different from what is happening in this interview? Millie isn't being treated as a person; she's a prop. A bargaining chip used to get what Moore and this pro-Trump group want - for Moore to be elected. Sure, he didn't assault or harass her. But he and everyone else involved in setting up that interview still objectified her.

Thankfully, I'm not the only one who is disgusted by this stunt:
On Twitter and elsewhere, people were quick to point to the uncomfortable decision to use a 12-year-old girl for a campaign push.

Democratic strategist Paul Begala called it “appalling” and “shocking.”

“The fact that he’s accused of sexual assaulting a 14-year-old girl, would sit down and do an interview with a 12-year-old, when he’s not talking to any journalists—it’s like he’s rubbing Alabamians’ noses in it,” he said.
In summation, I leave you with this brilliant tweet by Franchesca Ramsey:

Monday, December 11, 2017

Follow-Up on "Cat Person"

On Saturday, I shared a story published in The New Yorker: Cat Person by Kristen Roupenian. It's an excellent read I highly recommend.

Today, I discovered someone set up a Twitter account that just retweets negative reactions to the story by men. It's glorious.

And yes, before you say it, I know #NotAllMen hated this story. And I would imagine many of the men responding negatively to the story are self-professed nice guys - in my estimation, probably the ones who use idiotic expressions like YOLO and "nice guys finish last" completely in earnest. But if words like "whore" and "cunt" and "bitch" are right on the tip of your tongue when a woman doesn't respond in the way you'd like, sorry, but you're not a nice guy. And if you find yourself rooting for a guy who calls a woman a whore just because she isn't interested in seeing him, I suggest you take a good long look at yourself: you're part of the problem.