Thursday, December 31, 2015

On the Brain, Learned Helplessness, and Self-Improvement

As I was out to lunch today with a friend, I heard a TV show host once again bring up the old myth that we only use 10% of our brains. The story she was covering was about what people will look like in 10,000 years. Spoiler alert: we'll have bigger eyes and darker skin. She mentioned that by then we'd probably also be capable of using more of our brains.

The origin of the "10% myth" is debated, though the most likely source is psychologist William James, who wrote that humans use only a fraction of their potential mental energy. That statement was somehow twisted into the belief that we use only a fraction of our brain - not quite the same thing.

Why does this myth prevail despite its obvious falsehood? After all, if you really sit down and think logically about all the things your brain does (beyond conscious thought), the amount of body energy the brain uses, and the significant impact of insults to the brain (such as stroke or injury), you would have to conclude that we use much more than 10% of it.

One reason this myth may prevail is potential. Or rather, the desire of people to believe they have untapped potential. Believing that you can improve in some way is incredibly motivating. On the other hand, believing that you lack any potential can result in stagnation and inaction (even when you actually can do something).

Some of the early research in the concept of learned helplessness involved putting dogs in no-win situations. The dogs were paired (yoked) with another dog who learned a task. If the learner dog behaved incorrectly, it received a shock - but so did the helpless dog. So while the learner dog received cues that would warn of an impending shock, and could change its behavior to avoid the shock, the helpless dog did not and could not. When the helpless dog was put in the learner dog's place, it did nothing to avoid the shocks. The helplessness of its prior, yoked situation carried over into the learning situation, and prevented the dog from seeing how its behavior could affect the outcome. Later research has demonstrated humans can also exhibit learned helplessness, and this concept has been used to describe the behaviors of survivors of domestic abuse.

The desire for self-improvement, an outcome of the belief in potential, drives a great deal of our behavior. For instance, I learned that one of the most popular Christmas presents this year was the Fitbit, a wearable device that tracks activity. I also received a Fitbit for Christmas and have so far really enjoyed having it. Not only does it tell me what I'm already doing (number of steps, hours of sleep, and so on), it gives me data that can be used for self-improvement.

With New Year's upon us, many people will probably begin a path to self-improvement through New Year's resolutions. Look for a post about that (hopefully) tomorrow!

Potentially yours,

Saturday, December 26, 2015

Scientific Journalism Follow-Up

As a follow-up to my last post, this story popped up on one of the Facebook pages I follow (Reviewer 2 Must Be Stopped): Four times when journalists read a scientific paper and reported the complete opposite.

Of course, I should point out that even this headline is misleading (oh, the irony!) - though two of the examples are the opposite of what was found, the other two involved applying the findings to different situations and confusing perception with actual behavior. But I digress...

Saturday, December 19, 2015

Peer Review Transparency and Scientific Journalism

The folks over at Nature Communications recently announced that peer reviews may now be published alongside the accepted manuscript. The author(s) will decide whether the reviews are published, and reviewers will have the option to remain anonymous or have their identities revealed.
By enabling the publication of these peer review documents, we take a step towards opening up our editorial decision-making process. We hope that this will contribute to the scientific appreciation of the papers that we publish, in the same way as these results are already discussed at scientific conferences. We also hope that publication of the reviewer reports provides more credit to the work of our reviewers, as their assessment of a paper will now reach a much larger audience.
I wonder, though, what impact this decision is liable to have on the peer review process. As I've blogged before (here and here), there are certainly benefits and drawbacks to peer review. Though the editor has the final decision of whether to publish a paper, peer reviewers can provide important feedback the editor may not have considered, and can be selected for their expertise in the topic under study (expertise the editor him/herself may not have). At the same time, reviewer comments are not always helpful, accurate, or even professional in their tone and wording.

The question, then, is whether peer review transparency will change any of that. Will reviewers be on their best behavior if they think their comments may be made public? Perhaps, though they can still remain anonymous. One hopes it will at least encourage them to be clearer and more descriptive in their comments, taking the time to show their thought process about a paper.

Of course, the impact of peer review transparency wouldn't just stop there. How might these public reviews be used by others? One potential issue is when news sources cover scientific findings. When I used to teach Research Methods in Psychology, one of my assignments was to find a news story about a study, then track down the original article, and make comparisons. Spoiler alert: many of the news stories ranged from being slightly misleading to completely inaccurate in their discussions of the findings. Misunderstanding of statistics, overestimating the impact of study findings, and applying findings to completely unrelated topics were just some of the issues.

Part of the issue is probably a lack of scientific literacy, which is a widespread problem. There is no standard training for journalists who cover research findings, though one wonders, if there were, whether it would look something like this:

Source: SMBC Comics (one of my favorites)
Journalists also tend to draw on other sources when covering research findings, such as researchers not involved with the study. If transparent peer review becomes widespread, we can probably expect to see reviewer comments quoted in news stories about research. If a reviewer has requested to remain anonymous, there would be no way to track him or her down for clarification or additional comments, or even to find out the reviewer's area of expertise. And because reviews stop once a paper has been accepted - while revisions don't necessarily stop then, instead going through the editor - the comments may not be completely up-to-date with the contents of the final paper. So there seems to be some potential for misuse of these comments.

I'm not suggesting we hide this process. In fact, my past blog posts on peer review were really about doing the opposite. I just wonder what ramifications this decision will have on publishing. It should be noted that Nature Communications is planning to assess after a year whether this undertaking was successful. But if it is successful, is this something we think we'll see more of at other journals?

Transparently yours,

Saturday, December 5, 2015

Why is Christmas Music Annoying?

We're now at the time of year when it's almost impossible to go anywhere without hearing Christmas/holiday music playing almost constantly. I'm mostly talking about pop Christmas music - multiple covers of "Santa Baby," "White Christmas," or "Rudolph the Red-Nosed Reindeer."

The problem I have with Christmas music is that there's a finite number of these songs, but a theoretically infinite number of covers. The other problem is that, because there are only a fixed number of these songs, artists try different tricks to set their versions apart from the others, tricks that can make the songs sound over-produced or quickly dated. Mind you, sometimes these tricks work and create some really interesting versions. Other times, artists write original songs - and some are actually good.

And I also want to add I certainly don’t blame the artists for releasing all these Christmas or holiday albums - I’m sure for most of them, it’s not their idea.

But if you're anything like me, you cringe just a little when you walk into a store, or flip on the radio, and hear yet another cover of [fill in the blank]. And by the second week of December, you've probably had enough.

What exactly is it about this music that can be so annoying? One possibility is what we’ll call the “familiarity breeds contempt” hypothesis. We hear these songs all the time, and know them well, even if we’ve never heard a particular cover before. They are often repetitive and infectious, like a product jingle or Katy Perry song. You may only have to hear them once, and they're in your memory forever.

Of course, while jingles may be going away (to some extent), repetition and "catchiness" are actually meant to serve the opposite purpose: making us like something more quickly. In fact, social psychological research suggests the familiarity-breeds-contempt hypothesis is often wrong, and the opposite is often true. Familiarity can be comforting. Repeated exposure to a stimulus - a song, a face, even, depending on context, a member of another racial or ethnic group - can increase your positive feelings toward it (a phenomenon known as the mere exposure effect). So this, alone, may not explain why Christmas music can be so annoying.

Another possibility is intrusion. Some radio stations and stores begin playing Christmas music quite early - I’ve seen some Christmas displays with music as early as September. At that point, we’re still coming to terms with the fact that summer is over, and accepting that we're moving into fall. Seeing/hearing elements of Christmas/holidays where they don’t belong causes annoyance, like we’re skipping directly into winter. And though winter holidays can be joyful and fun, let’s not forget what else comes with winter. In fact, my husband argues that this is why he (and others) leave Christmas decorations up past the holidays: After Christmas is over, it’s just winter. And winter blows.

Remind me to cross-stitch that onto a throw pillow sometime. :)

And because the music starts so early, combined with the above-mentioned repetition, even good Christmas songs will be played again and again and again, until you're insane.

There's also the fact that so many artists have Christmas albums, including ones where it seems completely out of character. Sure, Billy Idol may seem to be having some (tongue-in-cheek) fun with his Christmas album, but it's still a little surreal. And no matter how much Bruce Springsteen tries on "Santa Claus is Coming to Town," it's not rocking. Shouting, yes. But not rocking.

One reason I may find Christmas music particularly annoying is from working in retail. Anyone who has worked in retail knows what it’s like to be forced to listen to music one did not choose all day. My friends have heard me tell stories about the summer Titanic came out, when the player piano at JC Penney’s was programmed to play “My Heart Will Go On” every 15 minutes; my coworkers and I plotted half-jokingly about breaking in and pushing the piano down the escalators.

If you too have been traumatized by repeated listening to My Heart Will Go On, check this out. Trust me.

Fortunately, the piano had a bit more variety when it came to Christmas music. (Mind you, not a lot because, see previous comment about finite number of Christmas/holiday songs.)

As with so many things in social psychology, it's likely a combination of factors that results in the outcome. Of course, this is by no means an exhaustive list. What about you, dear readers? Do you find Christmas and/or holiday songs annoying? If so, why do you think that is? (And if you don't, what's your secret?!)

Musically yours,

Thursday, December 3, 2015

Fluency, Lie Detection, and Why Jargon is (Kind of) Like a Sports Car

A recent study from researchers at Stanford (Markowitz & Hancock) suggests that scientists who use large amounts of jargon in their manuscripts might be compensating for something. Or rather, covering for exaggerated or even fictional results. To study this, they examined papers published in the life sciences over a 40-year period, and compared the writing of retracted papers to unretracted papers (you can read a press release here or the abstract of the paper published in the Journal of Language and Social Psychology here).

I've blogged about lie detection before, and that (spoiler alert) people are really bad at it. Markowitz and Hancock used computerized text analysis, which allowed them to perform powerful analyses on their data, looking for linguistic patterns that separated retracted papers from unretracted papers. For instance, retracted papers contained, on average, 60 more "jargon-like" words than unretracted papers.
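Just to illustrate the general approach - this is not the authors' actual method, and the jargon lexicon and example texts below are entirely invented - counting lexicon hits per document and comparing group averages can be sketched in a few lines of Python:

```python
# Illustrative sketch only: the jargon list and "papers" here are made up,
# not Markowitz & Hancock's actual lexicon or data.
JARGON = {"paradigm", "modality", "upregulation", "methodology", "heterogeneity"}

def jargon_count(text):
    """Count words in a document that appear in the jargon lexicon."""
    return sum(1 for word in text.lower().split() if word.strip(".,;:") in JARGON)

retracted = [
    "the paradigm shows upregulation of the modality ...",
    "our methodology confirms heterogeneity across the paradigm ...",
]
unretracted = [
    "we measured the response in forty participants ...",
    "results were consistent with earlier reports ...",
]

def mean_jargon(docs):
    """Average jargon count per document in a group."""
    return sum(jargon_count(d) for d in docs) / len(docs)

print(mean_jargon(retracted) - mean_jargon(unretracted))  # prints 3.0 for these toy texts
```

The real study, of course, involved far more sophisticated linguistic features than a simple word list, but the logic - quantify the language, then compare groups - is the same.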

Full disclosure: I have not read the original paper, so I do not know what terms they specifically defined as jargon. While a computer can overcome the shortcomings of a person in terms of lie detection (though see blog post above for a little bit about that), jargon must be defined by a person. You see, jargon is in the eye of the beholder.

For instance - and forgive me, readers, I'm about to get purposefully jargon-y - my area of expertise is a field called psychometrics, which deals with measuring concepts in people. Those measures can take a variety of forms: self-administered questionnaires, interviews, observations, etc. We create the measure, go through multiple iterations to test and improve it, then pilot it in a group of people, analyze the results, and fit them to a model to see if the measure is functioning the way a good measure should. (I'm oversimplifying here. Watch out, because I'm about to get more complicated.)

My preferred psychometric measurement model is Rasch, a logistic model that transforms ordinal raw scores into interval-level measures. Some of the assumptions of Rasch are that items are unidimensional and step difficulty thresholds progress monotonically, with thresholds of at least 1.4 logits and no more than 5.0 logits. Item point-measure correlations should be non-zero and positive, and item and person OUTFIT mean-squares should be less than 2.0. A non-significant log-likelihood chi-square shows good fit between the data and the Rasch model.
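For the curious, the dichotomous Rasch model itself fits on one line. The probability that person n answers item i correctly depends only on the difference between the person's ability (beta) and the item's difficulty (delta), both expressed in logits:

```latex
P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
\qquad \Longleftrightarrow \qquad
\ln \frac{P(X_{ni} = 1)}{P(X_{ni} = 0)} = \beta_n - \delta_i
```

The log-odds form on the right is where the "logit" units come from: when a person's ability exceeds an item's difficulty by exactly one logit, the person has roughly a 73% chance of success on that item.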

Depending on your background or training, that above paragraph could be: ridiculously complicated, mildly annoying, a good review, etc. My point is that in some cases jargon is unavoidable. Sure, there is another way of saying unidimensional - it means that a measure only assesses (measures) one concept (like math ability, not say, math and reading ability) - but, at the same time, we have these terms for a reason.

Several years ago, I met my favorite author, Chuck Palahniuk, at a Barnes and Noble at Old Orchard - which, coincidentally, was the reading that got him banned from Barnes and Noble (I should blog about that some time). He took questions from the audience, and I asked him why he used so much medical jargon in his books. He told me he did so because it lends credibility to his writing, which seems to tell the opposite story of Markowitz and Hancock's findings above.

That being said, while jargon may not necessarily mean a person is being untruthful, it can still be used as a shield in a way. It can separate a person from unknowledgeable others he or she may deem unworthy of such information (or at least, unworthy of the time it would take to explain it). Jargon can also make something seem untrustworthy and untruthful, if it makes it more difficult to understand. We call these fluency effects, something else I've blogged about before.

So where is the happy medium here? We have technical terms for a reason, and we should use them as appropriate. But sometimes, they might not be appropriate. As I tried to demonstrate above, it depends on audience. And on that note, I leave you with this graphic from XKCD, which describes the Saturn V rocket using common language (thanks to David over at The Daily Parker for sharing!).

Simplistically yours,