Tuesday, December 4, 2012

Trivial Only Post: An Open Letter to Pinterest

Dear Pinterest,

I've been using your site for a while now and I have to say that I love it. I have long been a fan of social bookmarking, and I especially love the idea of sharing sites and ideas visually; the layout is very aesthetically appealing.

However, there are a few things that I believe need some additional attention and/or removal.

1. The phrase, "Pin now, read later" has got to be the dumbest, most annoying phrase ever. No, I want to read everything this page has to offer now! Come on, people. It has to go.

2. If I see one more "How to use your baby's footprint/handprint/thumbprint/whatever to make some cute animal/character/necklace/whatever", I may just scream. It was cute the first 20 times; now it's just annoying. If anyone actually attempted all of these ideas with his/her baby, the baby would constantly be covered in paint. I believe my mom did this ONCE with my brother and me; not once a holiday, not once a season, ONCE.

3. It's cool if people want to reuse the pin description from another pinner, but please at least urge them to read it over first and make sure there isn't personal information in there that doesn't apply to them. Though it does make me chuckle when a guy I know to be straight, or an unmarried woman, shares one of my pins labeled, "My husband would love this".

4. Pinning from blogs is great; I love getting to experience new blogs, and I have discovered a few worth reading regularly through Pinterest. However, pins for a specific blog post should link directly to that post, not to the main page for the blog. To do this, click on the title of the blog post you like; you will be taken to the permanent link for that post. Then pin away. The main blog page is a feed; it shows the newest posts, whatever they are. The post you like may have been the newest post when you visited that blog (and therefore at the top), but by the time the pin gets to me, it's 50 posts down the list, and I have to do some digging. Sometimes I can find it with a Google search; sometimes I just give up.

5. Not a complaint, but a suggestion - it would be cool to browse pins at the intersection of two topics, rather than just "Geek" or "DIY & Crafts". Because I'd die from excessive happiness if I could browse a board full of TARDIS scarf patterns. No lie. Yeah, I know I could just google that, but the cool thing about Pinterest is discovering things you didn't even know you should/could be looking for.

As you can see, this list is short. So it should be no problem to meet all my demands... er, requests. I'd consider it a Christmas AND birthday present. :)

Trivially yours,
~Sara


Monday, November 12, 2012

Puppies Not Politics: A Social Experiment

It's been almost a week since the election, and the political posts seem to have finally died down, replaced with talk of Christmas, Day-After-Thanksgiving sales, and the end of the year (nothing yet on the “End of the World” so many were/are expecting - but we’ll have to wait and see if more 2012 apocalypse talk surfaces). Is it just me, or do the election cycles just seem to get longer and longer, with political advertising starting earlier each time? A columnist for CNN joked that, now that the election is over, it’s time for politicians to get back to what they do best - campaigning for the next election.

This year, as a response to all of the political posts I observed from friends on Facebook, I decided to try a little experiment - starting August 26, I posted one puppy picture for every post I saw.

One of the many PNP puppies; you can tell he's trustworthy, because he wears glasses.

The rules were pretty simple:
  1. One picture per post I saw. The keyword here is saw - I did not seek out posts purposefully, but only counted ones that I either saw while scrolling through my news feed or that I happened to see when visiting someone’s profile page. If Facebook lumped similar posts together - e.g., X of your friends posted about Barack Obama - I would only count however many were displayed, and did not expand posts. 
  2. The picture could contain one or more puppies. I use the term “puppy” pretty generally, to refer to any dog, regardless of age. I have to admit, though, to play on the word “puppy”, I posted a picture of recently born mongoose puppies from the Brookfield Zoo. Pictures could also contain other animals - several contained cats/kittens, a couple contained fancy rats, and one even had a pig.
Overall, what I got was a conservative estimate of all the political posts I saw on Facebook between August 26, 2012 and November 6, 2012. The final count: 469 posts. I decided to do the math on exactly how often I saw political posts. The Puppies Not Politics album was up for 73 days, or 1,752 hours, or 105,120 minutes. If I remove the time I spent sleeping (rough estimate - likely an overestimate, considering my life-long battle with insomnia - of 584 hours or 35,040 minutes), I saw on average one political post every 149.4 waking minutes, or about 0.4 per waking hour, or 6.4 per day. Obviously, this is averaged across the whole period and doesn’t get at the variability in frequency; some days, posts were far more abundant, such as after debates.
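
(If you'd like to check my arithmetic, here's a quick sketch of the calculation in Python. The inputs are just the numbers above, including that same rough 8-hours-a-night sleep estimate.)

```python
# Back-of-the-envelope check of the Puppies Not Politics numbers.
# Inputs come from the post; the sleep figure is the same rough estimate used above.

total_posts = 469
days = 73                          # August 26 through November 6, 2012

total_minutes = days * 24 * 60     # 105,120 minutes
sleep_minutes = days * 8 * 60      # 35,040 minutes (rough 8-hours-a-night estimate)
awake_minutes = total_minutes - sleep_minutes   # 70,080 waking minutes

minutes_per_post = awake_minutes / total_posts              # ~149.4
posts_per_waking_hour = total_posts / (awake_minutes / 60)  # ~0.4
posts_per_day = total_posts / days                          # ~6.4

print(f"One post every {minutes_per_post:.1f} waking minutes")
print(f"{posts_per_waking_hour:.1f} posts per waking hour")
print(f"{posts_per_day:.1f} posts per day")
```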

As I said, this is a conservative estimate, since I didn’t purposefully visit Facebook just to count political posts - it was only when I wanted to visit Facebook anyway to see what was going on with my friends, post a status update, etc. Some days, I didn't have the time or energy to look. I also did not bother counting advertisements, seen either on Facebook or elsewhere. The overarching lesson here is that we see a lot of political ads and comments - not a surprising conclusion, but still. When one’s newsfeed is packed with information about a certain thing, we often start to block that thing out and gravitate instead to what is unique in the bunch.

If we do happen to notice all those political posts, we are most likely to pay attention to posts with which we already agree, and tend to ignore the ones that run counter to our beliefs or expectations, especially when we don’t feel like engaging in systematic thought (see previous post about Facebook and "when thinking feels hard"). So liberal posts are unlikely to reach (and “convert”) a conservative, and conservative posts are unlikely to reach/convert a liberal. Why, then, do we spend so much energy posting political information if it is unlikely to change anyone’s mind? There are a few explanations, and they are not mutually exclusive:
  1. The poster may be unaware that their posts are not changing anyone’s mind, and perhaps believe that, if this [insert type of person] just knew this information, they wouldn’t be a [type]. That may be true, but good luck getting that person to even notice the post, let alone click on it, let alone read it, let alone consider what they’ve read.
  2. The poster may be completely aware that their posts are not changing anyone’s mind, but instead share the link or post because they think their like-minded friends will see it and enjoy it. Facebook is one of many ways we can share “things we think are cool” - in fact, this notion of social bookmarking has spawned other services, like Pinterest - and we can share them just in case other people will think they are cool too. 
  3. Posting political information that matches with our beliefs is a way of declaring our membership in a group. Group membership is a very important source of our self-esteem. By posting information that declares, “I am a liberal”, or “I am a conservative”, etc., we get the boost in self-esteem that comes with belonging to a group. Of course, research specifically on Facebook status updating shows that comments/likes have a strong effect on self-esteem, so if I posted something and then received largely negative feedback on it, my self-esteem would probably suffer. Would it cause me to distance myself from the group? Like so many things in psychology, it depends. I may actually strengthen my ties to the group, if I believe the derision is unwarranted. 
  4. This may be a bit cynical of me, but I have to throw it out there - some people may post political information to appear intelligent and cultured. We’ve all done it, not just with political posts. I’ve certainly shared something that gives me the opportunity to, say, show off my knowledge of the captains of space exploration TV shows (e.g., Star Trek, Firefly, Battlestar Galactica). Or posted one of the “Grammar Nazi” e-cards (only to discover there’s a typo in my own comment on the picture). Am I calling Facebook users hypocrites? Maybe, but aren’t we all?
These are just a few of the potential reasons off the top of my head. What do you think, readers? What are some other reasons for all the political posting?

Thoughtfully yours,
~Sara

Tuesday, October 16, 2012

On Top of the World: The Importance of Science and Innovation

By now, you’d have to be living under a rock (one without WiFi, no less) to not know that on October 14, 2012, a man named Felix Baumgartner traveled by balloon to the edge of the earth’s atmosphere, then stepped out of his capsule and jumped, free-falling 128,097 feet and reaching speeds of Mach 1.24. In addition to breaking a number of other world records, he became the first person to travel at supersonic speeds without the aid of a jet or space shuttle.

Let’s pause and just think about how cool all of that is.

Seriously, stop and think about this. If you need some help, watch this video.

We live in an age where this kind of thing is possible. Think about how far we have come as a species. Diseases that were a death sentence centuries ago, like the bubonic plague, can now be treated with antibiotics. Which is why it disheartens me when people do not see the value in scientific achievement.

We have become so comfortable in our existence, thanks to these scientific achievements, that some people spend the very time technology has saved them crusading against these life-saving achievements: modern medicine and vaccinations that have increased our life expectancy by decades and eradicated once-common diseases, computers and the Internet that have made information accessible to billions, and the trains, planes, and automobiles that have allowed us to explore our whole world and not just the little piece in which we happened to be born.

Yes, I recognize that I’m speaking about the “first world”, and that there are many parts of the world these innovations have sadly not touched.

And that’s something that needs to change. Because the quality of life to which we have become accustomed is brought to us by scientific achievement and encouragement of innovation. Of questioning why we have always done things a certain way and trying to introduce a new way of approaching a problem.

That’s right, today’s quality of life is brought to you by the letters S, T, E, and M.

Without scientific achievement, our quality of life will stagnate. Or worse, backslide. This is why whoever wins in November needs to be supportive of science, technology, engineering, and mathematics. They need to support and incentivize innovation through basic research in all of these fields. This means funding this research and/or making it easier for private investors (such as Red Bull, which funded Baumgartner’s jump) to fund this research.

And they need to support and encourage quality scientific education, because even the most amazing scientists will not live forever, and their ideas won’t live forever either if they are not passed on to the next generation of thinkers.

As a government employee, I can’t comment on political campaigns and therefore, can’t say who I think would be more supportive of science and innovation (just letting you know, in case you share your thoughts in comments – whether I agree with you or not, I really can’t say). But the issue of science and its importance is non-partisan. It is something we should all care about, even if adding two numbers together fills you with a dread similar to playing Twister with 50 snakes.

Because believe me, they are in it to win it. 

I read an interesting post recently about the phrase “I’m entitled to my opinion” that you should definitely read (here). I won’t reiterate what it said, but will add that you should at least be aware that the reason you and I are “entitled to our opinions” is that we live in a society that has become so comfortable, thanks to technology, that we have time to think up our ridiculous, ignorant opinions (or write our ridiculous blog posts) – rather than spending our time trying to avoid the bubonic plague.

Thoughtfully (and scientifically) yours,
~Sara

Wednesday, October 10, 2012

On Quality Chasms, Interactivity, and Digital Textbooks

As has become increasingly the case recently, the time between this and my previous blog post is much longer than I would have liked. Many people argue that one should only write when one feels inspired, but I’m not one of those people; a writer writes, and I find that if I don’t write regularly, I get out of practice. Ray Bradbury offered advice to young writers, including that they should write a short story every week – at the end of the year, you’ll have 52 short stories and the odds that they will all be awful are pretty slim. Perhaps I should follow his advice and write a blog post every week?

But what I’d really like to write about today is a response to an editorial I read about digital textbooks. Last week, Arne Duncan stated that paper textbooks should become a thing of the past, and we should embrace digital textbooks. Justin Hollander wrote an editorial response to this declaration, arguing for the benefits of paper in education. You can read his editorial here.

It really is a well-written piece and offers many good arguments. At the same time, what Mr. Hollander and many others fail to recognize is the power of technology to go beyond merely reproducing the written word on a digital screen.

At my job as a health services researcher, we spend a lot of our time exploring something called “patient-centered care”. Though this concept has been around in psychotherapy for decades, it came to the forefront of health care and medicine in a report released by the Institute of Medicine.

That report, titled “Crossing the Quality Chasm”, was the second in a series of reports addressing quality issues in American health care, stating, “Between the health care we have and the care we could have lies not just a gap, but a chasm” (p. 1). Specifically, the issue is with the design of the system, which is not aligned with the needs of the current population – a population that has longer life expectancy and more ongoing, chronic conditions requiring a different management approach, is more mobile (so people may not see the same doctor their entire lives), and is simply larger (making it difficult for doctors to really know their patients’ situations and needs). The report provided recommendations for safe and effective care, specifically discussing patient-centered care as “encompass[ing] qualities of compassion, empathy, and responsiveness to the needs, values, and expressed preferences of the individual patient” (p. 48).

This report also discusses increased use of technology, to improve access to information, education, and care, so long as the technology is aligned with a patient’s individual needs, values, and preferences. That is, the technology should be customizable and tailored to the patient.

Perhaps you’re asking (or have been asking), “What does any of this have to do with digital textbooks?” Hang in there, kids; we’re almost there.

Patient-centered technology is not about simply digitizing information once recorded only on paper; rather, it is about changing healthcare delivery, efficiency, and quality, and creating a system that is truly patient-centered. People who merely take information from pamphlets and booklets and slap it onto a web page are entirely missing the point. Where technology really shines is in its ability to shift in response to the user. The concept here is “interactivity”.

Let’s say I’m a physician who wants to teach my patients with diabetes about managing their blood sugar. In the past, I’d probably have them do some in-person training, perhaps with a nurse educator, on checking their blood sugar, giving themselves insulin, and all the other self-care that people with diabetes must do, and I’d probably send them home with some pamphlets. Of course, they would continue seeing me regularly, but so much of what people with diabetes must do involves self-care; they really must become masters of managing the condition.

With technology, I’d still do some in-person training, but I could also have them receive information from the pamphlets through a website or tablet app, where they can select what information to view, access embedded videos, and perhaps even take a quiz to assess their understanding and identify gaps in knowledge. I could even design this program so that, based on quiz results, they are given access to additional reading and videos to specifically address those gaps.

So let’s bring this back to education. Say I’m a student taking a statistics class. I read the section on measures of central tendency (mean, median, and mode), then complete a quiz. If my quiz results show that I’m having some difficulty with mode, I could be taken to additional sections that focus more heavily on that concept, provide more examples, or even take a different approach to presenting the information.
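
To make that concrete, here is a minimal sketch (in Python) of the kind of branching logic I have in mind. The topics, mastery threshold, and section names are all made up for illustration; they are not from any real textbook platform.

```python
# A toy sketch of adaptive branching in a digital textbook.
# The topics, threshold, and "remedial" sections below are hypothetical.

quiz_results = {          # proportion correct on each group of quiz items
    "mean": 1.0,
    "median": 0.8,
    "mode": 0.4,
}

REMEDIAL_SECTIONS = {     # hypothetical supplemental material
    "mean": "Section 3.1a: More on the mean, with worked examples",
    "median": "Section 3.2a: The median and skewed distributions",
    "mode": "Section 3.3a: Finding the mode, a different approach",
}

MASTERY_THRESHOLD = 0.7   # below this, route the student to extra material

def next_sections(results, threshold=MASTERY_THRESHOLD):
    """Return the supplemental sections a student should see next."""
    return [REMEDIAL_SECTIONS[topic]
            for topic, score in results.items()
            if score < threshold]

for section in next_sections(quiz_results):
    print("Recommended:", section)
# Recommended: Section 3.3a: Finding the mode, a different approach
```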

I can understand the hesitance to completely abandon paper. Paper should still have an important place in reading and education. There is something to be said for the ability to touch, to hold something in your hand, to hear the binding crack, to smell the ink on the pages. And when the choice is between reading a book on paper and reading a book on an e-reader, where the logistics of navigating pages are basically the same, it seems more a matter of personal preference than superiority of one medium over the other. Sure, the ability to search and to carry more books without adding weight to one’s bag is nice, but it may not be enough to justify the increased (at least initial) costs of converting from paper to electronic.

But if we can capitalize on the technology at hand to supplement and tailor material, to really allow students to grasp core concepts so that we may expand upon them in the classroom and move toward mastery of the subject, then Mr. Duncan, I couldn’t agree more.

Thoughtfully yours,
~Sara

Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.

Sunday, June 10, 2012

CDC, You've Created a Monster (Pun Fully Intended)

At work the other day, I heard a radio ad for Illinois's Click-It-Or-Ticket campaign, in which they discuss the importance of wearing seatbelts in case of a zombie attack (zombies hiding and waiting for unsuspecting drivers before jumping out at their cars - yeah, seriously). People have been fascinated with zombies lately, and public health organizations are no exception. For instance, CDC released a disaster preparedness campaign last year, in which they discuss the steps individuals and families should take to prepare for a zombie uprising. Though the purpose of this campaign was to get people thinking about disaster preparedness for less supernatural reasons (floods, tornadoes, etc.), it also was meant to get people's attention - and it worked, though the response on the internet was perhaps more "WTF, CDC?" than "Wow, I never thought about disaster preparedness like that."

Besides, wouldn't wearing a seatbelt be a bad idea if zombies suddenly jumped in front of your car and caused you to wreck? I mean, you want to get out of there quickly after wrecking, right? If you're surrounded by a zombie horde, you're probably not moving very quickly anyway. And now that I think about it, if they wait to jump in front of your car so they can make you wreck and get some food (sounds suspiciously like fishing), wouldn't that involve things like patience, thinking power…? Not exactly what zombies are known for.

But I digress.

Public health organizations are doing whatever they can to get people's attention these days. And the CDC campaign, although weird and random, got attention and perhaps that's all it needed to do. After all, mass media campaigns for public health are generally really expensive to implement but have small effects. Really, any campaign intended to simply educate does little to actually change behavior - in fact, it does little to even change attitudes about a behavior, and even if it does change attitudes, a lot of psychological research demonstrates that attitudes are poor predictors of behavior. That's right, people can hold very strong beliefs and behave in ways completely counter to them. It drives me nuts when people say that psychology research just confirms common sense, but seriously - does this actually surprise you?

How about now?
By using something currently popular, like zombies, they increase the chance that mainstream media will pick up the story and help pass on the message, which could exponentially increase the campaign's reach. But does capitalizing on the current trend actually weaken the message? Does it mean that, once zombies are no longer "cool", the campaign will lose its effect and possibly even have the opposite effect in the long run? I tried to find some research on this - it's what I do.

There's definitely been research on what makes public health campaigns effective, but the focus is more on things like tone (e.g., fear appeals) than content. One exception is a review by Randolph and Viswanath, which examined studies of public health campaigns to determine which factors most clearly related to effectiveness. They identified as the most important factor: "successful manipulation of the information environment by campaign sponsors to ensure sufficient exposure of the audience to the campaign's messages and themes (influencing the information environment and maximizing exposure)." Okay, so perhaps CDC's (and even Illinois's Click-it-Or-Ticket) idea was a good one - do something that gets attention and is picked up by other media outlets. Fine, fine, moving on in the list… 2) use social marketing to develop messages appropriate to a specific audience and get the message to that audience (meaning there may need to be more than one message for different demographic groups), 3) create an environment in which the target audience can carry out the recommended change, 4) use sound health behavior theory, and 5) conduct process as well as outcome evaluations to understand the campaign's effect.

But there is a bigger question here. Could CDC have legitimized some people's beliefs that a zombie apocalypse is upon us?

Four or five years ago, I discovered a web group devoted to surviving zombie attacks. As a long-time horror movie fan (see previous blog posts - part 1 and part 2), I figured this was a group run by fellow horror movie fans who would discuss different zombie movies… sadly, no. This was a group of people who genuinely believe a zombie apocalypse is imminent, and spend their time 1) posting news stories supposedly providing evidence that zombies are among us, and 2) devising plans to survive attacks. I received quite a few "This is not what our group is about" messages in response to some of my posts about great movies, or even when I said that my survival approach would depend on whether these were slow-moving, Night of the Living Dead zombies, or fast-moving, 28 Days Later zombies. I can imagine that this group must be pretty active given the media's recent focus on particularly heinous attacks involving cannibalism (and I choose each of those words very carefully - no, these recent events are not, in my opinion, evidence of zombies, but rather evidence that, "If it bleeds, it leads" is still the approach taken by mainstream media, and that, given our recent focus on zombies in popular culture, the media IS biased in what stories it chooses to cover).

I come from a behaviorist background - my undergraduate psychology department head was a behaviorist and for the better part of my undergraduate career (and a not-insignificant chunk of my graduate career), I was like a mini-Skinner, arguing that everything could be explained by contingencies of reinforcement. Though I lean more toward cognitive psychology these days, I still remember many of the things I was taught in my behavior classes and believe these topics are still very relevant for understanding human behavior.

One important thing to keep in mind is unintended consequences. When trying to reinforce a certain behavior, you have to watch for other behaviors you may be inadvertently rewarding (or punishing). For example, I had a professor in college who hated it when people showed up late for class. When people would come in late, he would stop class and spend a minute or two interrogating the person about why they were late, lecturing them about why their behavior was unacceptable, etc., etc. He would also take points off for tardiness - the same number of points he would take off for an absence. Basically, people who showed up late once never did it again - because after that, if they were going to be late for class, they just skipped. I'm sure if this professor were aware that his punishment, though decreasing tardiness, was actually increasing absences, he would have changed his approach. But I doubt he ever examined his tardiness and absence data to look for this pattern.

The same goes for interventions intended to change human behavior - you want to examine behaviors that might have been affected by your campaign, whether those were behaviors you were hoping to change or not. It is unclear what effect the CDC campaign had on things like belief in zombies, but if I were performing an evaluation of this campaign, that would be something important to assess.

In fact, CDC spoke to a reporter at Huffington Post about the recent events, and said that it is aware of no virus or toxin that could cause the reanimation of dead tissue. So perhaps they recognize that selecting zombies for their campaign was a bad idea.

Still, the blame cannot be placed entirely on CDC. As I said earlier, zombies were already getting lots of attention, and this is probably why CDC selected that topic to frame their disaster preparedness campaign. What do you think? Did CDC add an air of legitimacy to the supposed "zombie apocalypse"? Or just capitalize on the zeitgeist?

Thoughtfully yours,
~Sara

Thursday, June 7, 2012

Trivial Only Post - On the Insanity of FSoG

I haven't posted in forever. I'm working on a new post, but it's gotten rather deep, and is taking a while to put together. So I've decided to try something new. In addition to my deeply trivial posts, I occasionally think things that are best described as trivial only. Hence, the trivial only post. These will be short, to the point, likely very snarky (who am I kidding? - that describes all my posts) and on rather ridiculous topics. Let's see how this goes.

The more I learn about the author of Fifty Shades of Grey, the more I think she's actually completely out of her mind, and rather than recognizing and treating her mental illness, we're heralding her for writing such compelling drama. The most recent thing I heard about Fifty Shades of Grey (hereafter referred to as FSoG, pronounced Fih-Sog, if you don't mind) that makes me think so? Spotify was advertising the FSoG playlist, which contains tracks "inspired by the book everyone is reading."

First, not everyone is reading the book, but whatever. Second, last time I checked, "I'm on Fire" by Bruce Springsteen, The Flower Duet from Lakme, and "Toxic" by Britney Spears were written long before this book. I think someone needs to check their dictionary and make sure they're really clear on the definition of "inspired". Honestly, someone should also explain to them how the space-time continuum works, but that might be asking too much.

After all, what can you expect from a book that was "inspired" by Twilight?

Trivially yours,
~Sara

Sunday, March 4, 2012

And Now for Something Completely Different: The Psychology of Parody

I was a little late onto the Adele bandwagon. She had already released her album 19 and “Rolling in the Deep” was already a single when I finally gave her a shot - okay, I’ll be honest, it took a few listens to 21 before I decided to buy, and I still have some serious qualms about her singing technique; I mean, at only 23 years old, she’s already shredded her voice to the point that she needed surgery to save it. But then, I can’t help but sing along - loudly - anytime her music comes on the radio, and I definitely enjoyed the video for “Rolling in the Deep”, which - though it had some symbols I didn’t completely understand - made much more sense than other videos I watched around that time.

So you might be surprised to hear that when Key of Awesome, a group that creates parody versions of music videos, made a parody of “Rolling in the Deep”, I loved it. In fact, I enjoy most of Key of Awesome’s videos, even (and perhaps especially) when they make fun of a song I enjoy.

This seems counterintuitive. Why would I enjoy seeing music I love being made fun of? But it’s something I’ve long been aware of about myself and others, and have wondered about occasionally. Today, I finally sat down and began to explore what it is about parody we find so funny.

You may not be surprised to know that humor is very important to human beings. Being able to see the humor in situations has mood-enhancing effects (Strick, Holland, van Baaren, & van Knippenberg, 2009) and is beneficial to our long-term well-being. Martin and colleagues (see Martin’s book for more information on this research) created a questionnaire to assess individuals’ humor styles: Self-Enhancing (being able to comfort oneself with humor), Affiliative (using humor to build relationships with others), Aggressive (sarcasm or teasing others), and Self-Defeating (using humor at one’s own expense). These styles are related to many measures of psychological well-being, such as satisfaction with life, self-esteem, optimism, and mood: high scores for Self-Enhancing and/or Affiliative humor are associated with greater well-being, while high scores for Aggressive and/or Self-Defeating humor are associated with lower well-being.

Further, Galloway (2010) examined humor styles scores and found four distinct groups: people high on all four styles of humor, people low on all four styles, people high on self-enhancing and affiliative humor and low on aggressive and self-defeating humor, and people high on aggressive and self-defeating humor and low on self-enhancing and affiliative humor. Other researchers have attempted to take the field a step further, examining what it is about certain situations or stimuli that makes them funny. And it seems what it comes down to is setting things up so that perceivers expect a certain outcome… and then giving them something completely different. Strick and colleagues (2009) explain:

“A typical joke contains a set up that causes perceivers to make a prediction about the likely outcome. The punch line violates these expectations, and perceivers look for a cognitive rule that makes the punch line follow from the set up. When this cognitive rule is found, the incongruity is resolved and the joke is perceived as funny.” (p. 575).

In fact, research has shown that when we are unable to make sense of a joke - find a cognitive rule that “makes the punch line follow from the set up” - we find the joke to be less humorous. Which can be seen in practice by anyone who has ever told a joke, only to be greeted by silence followed by “I don’t get it”.

Sigmund Freud called this experience of incongruity the “uncanny” - encountering the unfamiliar in familiar situations. Absurdism is considered to be one form of uncanny-inducing stimuli. Freud argued that uncanniness was a thrilling state of arousal, though others have argued that it can be quite unpleasant - regardless of whether this state of arousal is enjoyable or not, we deal with it as we deal with most states of arousal: by engaging in behaviors to make it go away or end.

One way we can end or get rid of uncanniness is by perceiving the stimuli to be a joke and responding to it as we do to jokes (e.g., laughter if we find it funny, eye rolls if it’s not funny, etc.). If we don’t realize something is a joke (i.e., we don’t get it), we have to find other ways of dealing with uncanniness, such as by reaffirming our worldview (in fact, reaffirming our worldview is a common way we deal with unpleasant states of arousal - see Terror Management Theory as another example).

Proulx, Heine, and Vohs (2010) performed a study on uncanniness with absurdist art, including Monty Python (seriously, sign me up for the line of research using Monty Python as the stimulus). In study 2 of their article, two groups of participants read a summary of Monty Python’s Biggles: Pioneer Air Fighter, which was presented as either a joke or an adventure story; a third group read a standard joke. Afterward, they read an unrelated court case and set bail; this was their opportunity to affirm their worldview. Participants set significantly higher bail in the “adventure story” condition than in the other two conditions. The authors also found that among participants reading Biggles presented as an adventure story, those who found the story funnier (that is, figured out it was a joke even though it wasn’t presented as such) set lower bail; this effect was not observed in the other two groups. The effect was also not explained by mood, so even though reading something humorous makes people happy (if they get it), it’s not their happiness that explains the bail amount selected.

But the secret to understanding why some people appreciate parody while others do not probably lies in the part of the body that deals with arousal on a fairly regular basis.

Of course. Why? What did you think I was going to say?

Neuroscientists have explored what regions of the brain are activated when we encounter humor. In research on adults, they’ve found that humor results in activation in the part of the brain where the temporal (side), occipital (back), and parietal (top) lobes meet, known as the TOP junction. This part of the brain is used to resolve incongruities - such as instances where unfamiliar elements are juxtaposed with the familiar (doesn’t this definition bear an uncanny resemblance to, well, the definition of uncanniness?). They also see activation in the mesolimbic system, responsible for the processing of rewards (see previous blog post on reward pathways and addiction). That’s right - humor is rewarding, too.

So laugh it up, fuzzball.

Thoughtfully yours,
~Sara

For more laughs, also see Bad Lip Reading and Nice Peter.

Galloway, G. (2010). Individual differences in personal humor styles: Identification of prominent patterns and their associates. Personality and Individual Differences, 48, 563-567.
Proulx, T., Heine, S.J., & Vohs, K.D. (2010). When is the unfamiliar the uncanny? Meaning affirmation after exposure to absurdist literature, humor, and art. Personality and Social Psychology Bulletin, 36, 817-829.
Strick, M., Holland, R.W., van Baaren, R.B., & van Knippenberg, A. (2009). Finding comfort in a joke: Consolatory effects of humor through cognitive distraction. Emotion, 9, 574-578.

Saturday, February 4, 2012

Lies Scientific American Told Me: A Response to Michael Shermer's "Lies We Tell Ourselves"

First, let me say that I generally really enjoy Scientific American articles. They introduce me to new topics, new perspectives, and new writers. They make me think (and I suppose this article I'm about to rip apart also made me think).

I was really disappointed, however, with a recent article by Michael Shermer about Robert Trivers's book, The Folly of Fools, which outlines why human beings evolved the ability to not only be deceptive but to detect deception in others. This theory draws upon evolutionary theory as well as game theory - a theory about how people make strategic decisions when making exchanges with others (and how we use information on others' strategies to influence our own strategy).

The problems with Shermer's article have been well-outlined by the commenters. I urge you to read them, because they are generally articulate and thoughtful - I didn't come across any that appeared to be people "trolling" (see previous blog post for more on Internet trolls). Instead of repeating what these commenters already said so eloquently, I want to focus instead on this issue of lie detection.

Trivers argues that when people are being deceptive, they have three big "tells": nervousness, control (to hide their feelings of nervousness), and cognitive load (which results from having to construct a whole new reality, and has several noticeable effects on behavior, such as fewer hand gestures and speaking in a higher pitch).

People have believed in these tells for a long time (and the media continues to propagate these and similar beliefs). This is why people administering the polygraph and other "lie-detecting instruments" include control questions - questions that people generally answer truthfully - to provide a baseline to which we can compare responses to the questions of interest. If, while lying, they do things like pause longer or speak in higher tones, then voice stress analysis units should be effective at detecting lies, right? And yet, in research, VSA is not found to be effective at differentiating liars from those telling the truth (read a brief summary here), prompting one judge to refer to a particular VSA device as "little better than a sewing machine".

Also doubles as a paper-weight
So perhaps this means we shouldn't depend on computers and other electronic devices. After all, according to Trivers, humans have evolved many of these capabilities, and can examine really complex things, like context, that even the smartest computer can't handle. We should instead use people to detect lies. And yet, people are generally very poor at detecting lies on their own (an unpublished meta-analysis mentioned in this article found overall deception detection accuracy of 53%, little better than chance). Whenever I teach Psychology & Law, I do a deception detection activity, where students receive a card telling them to lie or tell the truth to a partner who asks them, "What did you do yesterday?" Except for one class that was very good at detecting lies (something like 70%), every class scores right around 50% - chance.
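
If you want a sense of just how unimpressive those numbers are, here is a quick sketch (in Python) of the probability of beating a coin flip. The figure of 20 lie/truth judgments per class is made up for illustration; the point is simply that 50% is exactly what guessing looks like, and even 70% in a small class is only borderline surprising.

```python
# How surprising is a class's deception-detection score if every judgment
# were an independent coin flip? (Class size of 20 judgments is hypothetical.)

from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_judgments = 20
print(prob_at_least(10, n_judgments))  # 50% correct: ~0.59, entirely unremarkable
print(prob_at_least(14, n_judgments))  # 70% correct: ~0.058, borderline at best
```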

So, as the article linked just above discusses, people have responded by introducing training, especially for people who detect deception as part of their job: police officers, airport security, and so on. That should help, right? A study by Kassin and Fong (1999 - full article here) found that training actually reduced accuracy in detecting deception, but increased confidence. Those who received training were more likely to be wrong than those who did not receive any training, but were more confident that they were right.

This cavalier belief that we can detect deception, and can only improve with greater training, has led to many programs and initiatives meant to bring better deception detectors to places like the airport. And that feeling of confidence in response to training only reinforces the belief that training is a good idea and should be carried out for more personnel. In response to the terrorist attacks of 9/11, the United States has implemented different programs that provide training to airport security agents to detect when people are hiding something, whether that be drugs, money, or explosives.

One program utilizes micro-expressions. These flickers of emotion (lasting about 1/25th of a second) appear on our faces before we have conscious control over our outward expressions of emotion. Micro-expressions have been well-studied by Dr. Paul Ekman, an expert in human emotion. Some of Dr. Ekman's first research on the issue established the six universal human emotions - happiness, sadness, anger, fear, surprise, and disgust - that are present across cultures. Based on his research on human emotion generally and micro-expressions specifically, he has developed short training sessions (delivered in as little as 30 minutes).

In as little as 30 minutes, we've convinced TSA agents that, while carrying out various other parts of their jobs, they can catch terrorists.

Dr. Ekman is a brilliant man who has been studying human emotion since the 1950s. This man can probably read any emotional expression, no matter how short, because he knows so much about it. Expecting TSA employees receiving brief training to be able to use these micro-expressions to detect deception at even half the level of someone like Paul Ekman is, I think, expecting a lot.

Do I think these micro-expressions exist and provide cues about lying? Absolutely. But do I think that one can go through brief training, where the micro-expressions are either pre-recorded or, worse yet, acted out, and then walk into the real world, without control or scripts (or feedback on most of the decisions they make about passengers' deception) and use this tactic effectively? Absolutely not.

Remember, in general (unless the airport is performing extra searches on people completely at random) the only people who are stopped and inspected are the ones the TSA agent suspects of being deceptive. This leaves a bunch of people not believed to be deceptive walking right through security, and no feedback on whether they actually were hiding something.

I might be more comfortable with a program that simply clones multiple copies of Paul Ekman to run around our major airports catching liars. Of course, then they'd all want their own TV show.

Thoughtfully yours,
~Sara

Sunday, January 15, 2012

False Research Findings, Truth, and Dirty Jokes

I recently came across an article in PLoS Medicine (Ioannidis, 2005), which concluded that most published research findings are incorrect. The article goes on to explain many of the factors that affect whether a study comes to the correct conclusion. Though it was published about seven years ago, it’s been circulating once again, because the points it makes are still important and relevant. And given some recent, high-profile instances of fabricated research findings (see previous blog post), it’s important to keep in mind that simply because a particular finding is not replicable doesn’t automatically mean the researcher(s) made up stuff. There are many logical reasons why a researcher may find something that simply isn’t true, through no fault of the researchers or the study design.

I first want to offer the caveat that Ioannidis examined quantitative research. The issues affecting the accuracy of qualitative research are different (I won’t say non-existent, because qualitative research is definitely not infallible, just that these particular results really only apply to studies done where the collected data are numerical).

The underlying concept they’re trying to get at here is validity, defined as truth or, more specifically, whether the conclusions drawn from a study are a correct, accurate reflection of the topic under study. Though we can never really know the truth, we can get at it through many different types of research, performed in different settings, with different people, etc. Validity is a big concept that encompasses many different types of truth. In research, we think of four types of validity: internal, external, construct, and statistical conclusion.

Most people can understand the concept of validity, but occasionally struggle with the four types. Therefore, I’m going to use one hypothesis to show the various different kinds of validity. This hypothesis comes from a conversation I was having with a friend one day. I told a recently heard, and quite dirty, joke, and afterward, said I should probably keep my telling of dirty jokes to a minimum. To which my friend replied, “You can never have too many dirty jokes.” And of course, being a scientist, I said, “I think we should empirically test that hypothesis.” Little did my friend know, I was only half joking.

So let’s say I wanted to design a study to test this hypothesis. First, I’d need to alter the hypothesis somewhat, unless I’m willing to allow an infinite number of dirty jokes (because I doubt you could actually set up a study to test a “never” contingency), but I’d want to get at the underlying topic of number of allowable dirty jokes. I would have to set up a situation where I could determine at what point someone hearing the dirty jokes requests that they stop. I’d have to pick a certain setting to conduct this study, and have at least two people there (perhaps more): one to tell the dirty jokes, and one to listen and determine when the jokes should stop. I’d have to make sure the joke-teller has enough dirty jokes in his/her repertoire so that the experiment could go on as long as needed - so that the only person calling a halt to the jokes is the listener (or listeners) - but would probably set up a time or number-of-jokes limit so that the participants (and the researchers, for that matter) aren't stuck there forever. I might also want to add another condition, where the joke-teller tells clean jokes; it’s possible that people just get fatigued listening to jokes in general, so we’d want to determine if there’s something different about dirty jokes that may increase or decrease the number a person is willing to hear before saying enough.

All of the above would help us to establish strong internal validity, certainty that our independent variable (the jokes) actually caused our dependent variable (the request to stop telling jokes). If I didn’t have the additional, clean-joke condition, I could still test at what point the person hearing dirty jokes asks they stop, but I’d be less certain it was the dirty jokes causing the request, rather than jokes in general (or just being forced to listen to one person talk for a long time, another potential comparison condition).

Okay, so imagine that I did this study with some people hearing dirty jokes from someone (one-on-one, so there was only one joke-teller and one joke-hearer) and other people hearing clean jokes. Let’s say they were randomly assigned to hear either clean or dirty jokes, so that we could expect any additional characteristics affecting our outcome (e.g., poor sense of humor, intolerance for sexual references, etc.) to be evenly divided across groups. And let’s say I found that, on average, people are willing to hear 5 dirty jokes before asking the joke-teller to stop (compared to, say, 10 clean jokes).
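
For the statistically inclined, here is a minimal sketch (in Python) of how that comparison might be analyzed. The data are simulated around the hypothetical 5-versus-10 figures above; this is an illustration, not a real study.

```python
# Simulated dirty-joke vs. clean-joke comparison.
# Group means of 5 and 10 come from the hypothetical example above; the noise is made up.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Number of jokes tolerated before the listener says "enough",
# for 30 randomly assigned listeners per condition.
dirty = rng.normal(loc=5, scale=2, size=30).round().clip(min=1)
clean = rng.normal(loc=10, scale=3, size=30).round().clip(min=1)

t, p = stats.ttest_ind(dirty, clean)
print(f"Mean (dirty): {dirty.mean():.1f}, Mean (clean): {clean.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```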

Does this mean, if I’m at a family reunion with my rather large family, I know I can probably get away with 5 dirty jokes before someone says, “Okay, Sara, that’s enough. You mean to tell us we helped you through grad school so you could become a female Patton Oswalt?” Not necessarily. Remember, I did the study in a one-on-one situation. My results may not generalize to group situations. This refers to the notion of external validity, the degree to which the findings of a study can generalize to other people or situations. It doesn’t mean my results are wrong if I find that at my family gathering, I can tell 20 jokes before someone says, “Okay, that’s probably enough.” It just may mean that groups are different from individual people.

I’d want to do another study using groups instead of individuals, to examine how the effect may differ. I may find that certain groups (e.g., my family) are more tolerant of dirty jokes and allow a greater number to be told than other groups (e.g., my fellow congregants at Sunday mass), and may even find that the same people can be more or less tolerant of dirty jokes depending on our current situation (such as telling jokes to fellow congregants while at church versus telling the same people jokes while we’re out at the bar).

One thing that is important for any of the studies discussed above is how I’m defining my variables. What exactly do I mean by “dirty jokes”? Do I mean jokes with foul language? Sexual content? Something else? Once again, if I do a study and find that people are quite tolerant of dirty jokes and allow a dozen to be told before saying “enough”, and another researcher finds the number to be much lower (say three), it doesn’t necessarily mean one of us did a poor study. Even if we both did the study in the same situation, with the same types of people, we might find different results if we defined “dirty jokes” differently. And while we could probably think of multiple good definitions of “dirty joke”, some definitions are better than others. If, in my study, I defined “dirty jokes” as jokes about dirt and mud, then that could be a big reason for my different results; the way I defined the construct “dirty joke” was not very accurate, so the construct validity is low.

If this is your idea of a "dirty joke", you should check out Sesame Street's True Mud sketch.
Finally, statistical conclusion validity refers to whether I used statistics to analyze my data correctly. Probably most people are with me until this point in the validity lesson, because when I mention statistics, I see eyes start to glaze over. To put this in the most basic way, math has rules (in statistics, we call them assumptions, but they amount to the same thing). If we don’t follow those rules, we get the wrong answer, like if we start adding, subtracting, and multiplying a long string of numbers without following the proper order of operations (remember PEMDAS? - parentheses, exponents, multiplication and division, then addition and subtraction; you have to deal with numbers inside parentheses before numbers outside, and handle multiplication and division before any addition or subtraction). If a number has a decimal point in front of it, we can’t ignore it and pretend it’s a whole number, and if we’re told to add a negative number to a value, we can’t ignore the negative sign. [And if you want to try to make the argument that negative numbers don't actually exist, so why should you have to learn to do math with them? Well, obviously you've never had student loans.]

The same thing can be said about statistics; if I ignore the rules on when I can use a specific statistical formula and use it anyway, my results could be incorrect. For example, one assumption of many tests is that the dependent variable (the outcome) is normally distributed (i.e., the “bell curve” - this is why, in any stats class, the normal distribution is one of the first things you learn; it’s the underlying assumption of most of the tests you learn in those classes). If we want to use one of those tests, and our dependent variable is skewed, we may draw the wrong conclusion from our results.
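
Here is a minimal sketch (in Python, with simulated data) of what checking that normality assumption might look like, and of falling back on a rank-based test when the assumption looks shaky. The group sizes and the exponential "skewed outcome" are just illustrative choices, not a recommendation for any particular study.

```python
# Checking the normality assumption before reaching for a t-test.
# The data here are simulated; a heavily skewed outcome is deliberately used.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

group_a = rng.exponential(scale=2.0, size=40)   # skewed outcome
group_b = rng.exponential(scale=3.0, size=40)

for name, g in [("A", group_a), ("B", group_b)]:
    w, p = stats.shapiro(g)   # Shapiro-Wilk test of normality
    print(f"Group {name}: skewness = {stats.skew(g):.2f}, Shapiro-Wilk p = {p:.3f}")

# If normality looks doubtful, a rank-based alternative to the t-test:
u, p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")
```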

Of course, even if you do a study in the best, most controlled, most accurate way possible, you might still draw the wrong conclusion. Sometimes weird stuff happens: even with random assignment, we might have some fluke where all the people with good senses of humor end up in one group. Or I might do the study on my family on a really good day, when they’re willing to hear way more dirty jokes than they would on any other day, meaning my results are limited not just to my particular family, but to my family on a very special kind of day. This is why we keep studying a topic, even if many others have already studied it. And we can’t just limit ourselves to one type of research, such as lab studies with lots of control and random assignment to groups. If we study a topic in many different ways (lab studies, observational studies, interviews) and find generally the same results in all of them, we can be even more certain our conclusions are accurate, and that we’ve gotten close to finding that elusive concept of truth. And recognize that things can go wrong. It’s not the end of the world; just keep studying and have a good sense of humor.

Thoughtfully yours,
~Sara

Thursday, January 12, 2012

Dr. Pepper Ten: The Product is “Not for Women”, But the Commercials Are

No doubt, you’ve seen and heard about Dr. Pepper’s new soda, Dr. Pepper Ten, a low-calorie beverage that, unlike diet sodas, uses real sugar. And you’ve probably heard their commercials that feature manly men talking about action movies, duct tape, and bacon.

Mmmm, bacon…

Sorry, where was I? Oh, yes, the commercials. The testosterone-infused advertising is a response to research showing that men are not interested in drinking diet sodas because they are perceived as being “girly” (find out more here). This soda was also developed to be a low-calorie option that didn’t taste like diet soda, because many people have issues with the taste of artificially sweetened beverages.

Word. There are few flavors in this world I dislike as much as artificial sweetener.

So in order to cater to men who want a diet beverage they can feel comfortable drinking with the guys, Dr. Pepper created Ten and built an ad campaign (likely spending millions of dollars on it) that focuses on men.

But they don't.

Listen to the ads. They’re always addressing women, without any statements toward men. Rather than saying, “Hey guys, want a beverage that recognizes your desire to be calorie conscious without all the estrogen? Try Dr. Pepper Ten.”, they start out the ads with, “Ladies…” and go on to explain to women why this beverage isn’t for them.

“Hey, ladies! This soda? Not for you…
 Wait, where are you going? I wasn’t finished explaining why this soda isn’t for you.”

Perhaps the aim is to remind guys of their days building clubhouses with their friends and putting up the “No girls allowed” sign (rather than a “Boys only” sign, which would have made a lot more sense). It’s also possible that the goal is to get women interested in trying the soda, because of the way people respond to being told not to do something. Specifically, they may be trying to elicit psychological reactance.

Humans are motivated to believe they have free will - that is, control over their actions (whether you actually have free will – well, that’s something philosophers have been arguing about forever, so we won’t even go there right now). When someone tells you not to do something, your free will is threatened, and so you will behave in a way that reaffirms your sense of free will; the best way to do that is to do the thing you were just told not to do.

Parents are very familiar with this concept.

And I’ll admit, one thing that really drives me nuts is being told I am not allowed to do something or am even incapable of doing something (especially things that are learned) by virtue of my genitalia. Because apparently, the ability to change my oil, troubleshoot my computer, and hammer a nail are tied to the Y chromosome. “No point in teaching a woman to do any of those things. She’d never be able to learn it. So I’m going to avoid teaching her those things just to prove my point.” <sarcasm>Wow, your logic is infallible.</sarcasm… for now>

There’s a reason that social scientists insist on using the term “gender” in research. It’s not that we have an aversion to the word “sex”; it’s that we recognize “sex” is a biological term, whereas “gender” is a social term. Yes, because I am a woman, I have been shaped to behave in certain ways and believe certain things (and this perspective is also why I’m writing this blog entry and focusing on these issues). At the same time, I have my own unique set of traits, abilities, beliefs, and attitudes that were shaped by a variety of factors, not just the fact that I am a woman. The same is true for everyone; we were all shaped to be the way we are by our unique experiences, and throwing us all into one big category doesn’t make us all the same. Just like calling a calorie “manly” doesn’t make it so.

My point is that, perhaps they’re posting the “No Girls Allowed” sign while secretly hoping the girls will come around. And if that were my only reaction to Dr. Pepper Ten, I might just say, “What the heck, I’ll try it.”

After all, torque is a rather fascinating word.

There’s more to it than that, of course. Not only is Dr. Pepper Ten dragging out every gender stereotype possible - which has some documented effects on women’s performance in certain domains (see previous post) - but this issue of diet soda and gender has many more ramifications.

One of the reasons diet soda is so popular with women is because of our society’s focus on women’s bodies and the stigma associated with female overweight and obesity.

What stigma are men concerned about? Apparently, being seen drinking diet soda in public.

Forgive me if I’m not feeling too sympathetic, guys.

In all seriousness, I know that body image is also a serious concern for men, and have known more than one man who developed an eating disorder in response to pressures to look a certain way. Even so, women are constantly bombarded with messages to be thin, not just through the media, but in the fashion world overall. Clothing is often designed with thinner women in mind, and simply sized up to fit larger women; of course, the styles that look good on thinner women often differ from styles that look good on larger women, so this “sizing up” doesn’t necessarily allow women in larger sizes to look, and more importantly feel, good. And the messages come from our peers, too, even other women, who are often the worst offenders in making women feel bad about how they look.

I’d like to take a moment to thank those people who go out of their way to make me feel fat. At the very least, you’ve proven to me that being thin doesn’t make you happy or a good person.

And honestly, research has shown that no one really likes the word “diet”. In fact, some weight management programs are exploring new titles, like “wellness-focused”, and finding that people still have positive weight loss outcomes without needing to include words like “diet” and “weight”. Dr. Pepper Ten could probably still be a successful beverage because it doesn’t use words like diet, instead focusing on being a lower-calorie alternative that (presumably) tastes like a non-diet drink.

But at the end of the day, what Dr. Pepper Ten’s advertising makes me think of – besides, “Come on, aren’t we all smarter than this?” – is the Monty Python “Lumberjack” song, where the manly lumberjack suddenly discusses how much he enjoys wearing high heels and a bra. Yeah, the Dr. Pepper Ten commercials are just like that except, you know, not nearly as funny.

Men: I’m interested in hearing what you think about Ten, and what you think about an advertising campaign that is supposed to be all about you without actually addressing you directly. Does their need to preface words like “calories” with “manly” make any difference? Or do you find the commercials as idiotic and irritating as I do?

Thoughtfully yours,
~Sara

Monday, January 2, 2012

An Open Letter to Calphon: The Importance of Operational Definitions

Dear Calphon,

I've been using your products ever since I received a set of Calphon pots and pans as a wedding present, about a year and a half ago. Though there are many things about your products to love - attractive, interchangeable lids that fit every sauce pan and skillet, oven-safe construction - the "non-stick" aspect is laughable. Not only do I have to use obscene amounts of olive oil to keep my food from getting a death grip on your product; even then, the food is practically pulled apart as I try to pry it off.

Now, I was about to just post a snarky message on Facebook about the lack of non-stick on a non-stick product and leave it at that, but realized that there must be a logical explanation for the problems I'm having with your product. And, as I thought about it scientifically, I realized what we have here is a difference in operational definitions.

"Operational definitions?", you say, "What are those?" Allow me to explain.

Operational definitions are definitions that allow a concept to be measured or manipulated. In research, especially social science research, we often try to study variables that are elusive, like love, intelligence, and aggression. We can't simply hold a ruler up to someone and say, "Their love score is 18." We have to define how we will determine a person's "love score", or IQ, or whatever we're studying, whether that be through a standardized measure, observation of behavior, or some other way. In fact, if you look around, operational definitions are everywhere, because we regularly measure things, even outside of research, that must first be defined.

For example, everyone who lives in the state of Illinois can tell you the operational definition of "intoxicated".

Look familiar?
The signs are posted on highways throughout the state, so we know that the operational definition of intoxicated in Illinois is a blood alcohol level of .08 or higher (and currently, that's the legal limit in all US states, though in the past, there have been some differences in how states have defined intoxicated).

There are other operational definitions floating around out there. For example, the Seinfeld episode in which the gang debated whether soup was a meal involved a discussion of what is (and is not) a meal. And many people have debated what makes something a "date" - for example, what activities should be involved, who should pay, time of day, and so on. Ever been to a social gathering and heard someone say, "This isn't a party. It's not a party unless…"? Those are operational definitions.

A good operational definition should be clear enough that anyone can walk into your study (or conversation) and, based on the established definition, correctly identify a specific case. Of course, people may disagree on what makes a good operational definition - this is why operational definitions should be discussed and established before beginning a study. And for many variables, there are any number of possible operational definitions.

For example, blood alcohol level is one way to define intoxicated, but you could also have gone with the ability to walk a straight line or say the alphabet backwards. Different operational definitions, however, may cause you to come to different conclusions. A person may be classified as intoxicated if they are unable to walk a straight line but sober if their blood alcohol level is .02. (And by knowing the two pieces of information - unable to walk a straight line but blood alcohol level of .02 - we can come to a different conclusion: sober but uncoordinated.)
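
If it helps, here is how those two competing operational definitions might look written out explicitly - a toy sketch in Python. The .08 cutoff is the Illinois definition mentioned above; the walk-a-straight-line cutoff is something I made up for illustration.

```python
# Two competing operational definitions of "intoxicated".
# The steps-off-the-line threshold is hypothetical; the point is that the two
# definitions can classify the same person differently.

def intoxicated_by_bac(bac: float) -> bool:
    """Illinois-style definition: blood alcohol level of .08 or higher."""
    return bac >= 0.08

def intoxicated_by_walking(steps_off_line: int) -> bool:
    """Behavioral definition (made up): more than 2 steps off a straight line."""
    return steps_off_line > 2

# The same person, measured both ways:
bac, steps_off_line = 0.02, 5
print(intoxicated_by_bac(bac))                 # False -> "sober" by this definition
print(intoxicated_by_walking(steps_off_line))  # True  -> "intoxicated" by this one
# Put the two together and you reach a third conclusion: sober but uncoordinated.
```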

And this is where our misunderstanding comes from, Calphon. We have different operational definitions of non-stick. Mine is probably something like this:

Non-stick = Food can be removed with no pieces being ripped off

Yours must be something like this:

Non-stick = Food can be removed with great effort and large pieces being ripped off, so that my beautiful goat cheese-stuffed chicken breast looks more like chicken and goat cheese cobbler

See the problem? So based on my definition, your pan would not be considered non-stick, but based on your definition, it would. This is the problem with using undefined words like "non-stick". Now I'm wondering about that whole "oven-safe" bit. There probably isn't any room on your packaging to offer a good operational definition of your terms, but that's all right; you're more than welcome to put that information on your website. It would be most appreciated!

Thoughtfully yours,
~Sara