
Wednesday, February 7, 2018

Statistical Sins: Olympic Figure Skating and Biased Judges

The 2018 Winter Olympics are almost here! And, of course, everyone is already talking about the event that mesmerizes me the way gymnastics does in the Summer Olympics - figure skating.

Full confession: I love figure skating. (BTW, if you haven't yet seen I, Tonya, you really should. If for no other reason than Margot Robbie and Allison Janney.)

In fact, it seems everyone loves figure skating, so much that the sport is full of drama and scandals. And with the Winter Olympics almost here, people are already talking about the potential for biased judges.

We've long known that ratings from people are prone to biases. Some people are more lenient while others are more strict. We recognize that even with clear instructions on ratings, there is going to be bias. This is why in research we measure things like interrater reliability, and work to improve it when there are discrepancies between raters.
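To make that concrete, here's a minimal sketch in R (with invented scores, not from any real panel) of the kind of check I mean. Formal work usually relies on intraclass correlations or kappa statistics, but even a correlation plus a comparison of means gives you a feel for agreement and for leniency/severity:

# Invented scores from two hypothetical raters judging the same six performances
rater_a <- c(7.0, 8.5, 6.0, 9.0, 7.5, 8.0)
rater_b <- c(6.5, 8.0, 6.5, 8.5, 7.0, 8.5)

cor(rater_a, rater_b)          # do the raters rank the performances similarly?
mean(rater_a) - mean(rater_b)  # a consistent gap would suggest a leniency/severity difference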

And if you've peeked at the current International Skating Union (ISU) Judging System, you'll note that the instructions are quite complex. They say the complexity is designed to prevent bias, but when judges have to put so much cognitive effort into understanding something so complex, they have less cognitive energy left to suppress things like bias. (That's right, this is a self-regulation and thought suppression issue - you only have so many cognitive resources to go around, and anything that monopolizes them will leave an opening for bias.)

Now, bias in terms of leniency and severity is not the real issue, though. If one judge tends to be more harsh and another tends to be more lenient, those tendencies should wash out thanks to averages. (In fact, total score is a trimmed mean, meaning they throw out the highest and lowest scores. A single very lenient judge and a single very harsh judge will then have no impact on a person's score.) The problem is when the bias emerges with certain people versus others.
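Here's a quick illustration of that point in R - the scores are made up and this isn't the actual ISU computation, but it shows how trimming blunts a single wildly lenient judge:

# Hypothetical scores from nine judges for the same element
fair_panel   <- c(7.25, 7.50, 7.50, 7.75, 7.75, 7.75, 8.00, 8.00, 8.25)
# Same panel, but the last judge is wildly lenient
biased_panel <- c(7.25, 7.50, 7.50, 7.75, 7.75, 7.75, 8.00, 8.00, 9.75)

trimmed <- function(x) mean(sort(x)[2:(length(x) - 1)])  # drop highest and lowest score

mean(fair_panel); mean(biased_panel)        # the plain mean shifts with the one lenient judge
trimmed(fair_panel); trimmed(biased_panel)  # the trimmed mean doesn't budge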

At the 2014 Winter Olympics, the favorite to win was Yuna Kim of South Korea, who won the gold at the 2010 Winter Olympics. She skated beautifully; you can watch here. But she didn't win the gold, she won the silver. The gold went to Adelina Sotnikova of Russia (watch her routine here). The controversy is that, after her routine, Sotnikova was greeted and hugged by the Russian judge. This was viewed by others as a clear sign of bias, and South Korea complained to the ISU. (The complaints were rejected, and the medals stood as awarded. After all, a single biased judge wouldn't have gotten Sotnikova such a high score; she had to have high scores across most, if not all, judges.) A researcher interviewed for NBC News conducted a statistical analysis of judging data and found an effect of judge country-of-origin.


As a psychometrician, I see judge ratings as a type of measurement, and I would approach this issue as a measurement problem. Rasch, the measurement model I use most regularly these days, posits that an individual's response to an item (or, in the figure skating world, a part of a routine) is a product of the difficulty of the item and the ability of the individual. If you read up on the ISU judging system (and I'll be honest - I don't completely understand it, but I'm working on it; perhaps for a Statistics Sunday post!), they do address this issue of difficulty in terms of the elements of the program: the jumps, spins, steps, and sequences skaters execute in their routine.

There are guidelines as to which/how many of the elements must be present in the routine and they are ranked in terms of difficulty, meaning that successfully executing a difficult element results in more points awarded than successfully executing an easy element (and failing to execute an easy element results in more points deducted than failing to execute a difficult element).

But a particular approach to Rasch allows the inclusion of other factors that might influence scores, such as judge. This model, which considers judge to be a "facet," can model judge bias, and thus allow it to be corrected when computing an individual's ability level. The bias at issue here is not just overall; it's related to the concordance between judge home country and skater home country. This effect can be easily modeled with a Rasch Facets model.
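To give a flavor of what that looks like, here's a tiny sketch in R of the many-facet Rasch idea - not the ISU system and not Winsteps/Facets output, just the dichotomous form of the model with hypothetical parameter values:

# Log-odds of earning credit on an element depend on skater ability, element
# difficulty, and judge severity (all on the same logit scale; values are hypothetical)
p_credit <- function(ability, difficulty, severity) {
  1 / (1 + exp(-(ability - difficulty - severity)))
}

# Same skater (ability = 1.5), same element (difficulty = 1.0),
# judged by a severe judge (+0.5) versus a lenient judge (-0.5)
p_credit(1.5, 1.0,  0.5)   # ~0.50
p_credit(1.5, 1.0, -0.5)   # ~0.73

A country-of-origin bias would show up as an extra piece of that severity term - one that shifts depending on whether the skater shares the judge's home country - and that is exactly the kind of interaction a Facets model can estimate and adjust for.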

Of course, part of me feels the controversy at the beginning of the NBC article and video above is a bit overblown. The video fixates on an element Sotnikova blew - a difficult combination element (triple flip-double toe-double loop) she didn't quite execute perfectly. (She did land it though; she didn't fall.)

But the video does not show the easier element, a triple Lutz, that Kim didn't perfectly execute. (Once again, she landed it.) Admittedly, I only watched the medal-winning performances, and didn't see any of the earlier performances that might have shown Kim's superior skill and/or Sotnikova's supposed immaturity, but I could see, based on the concept of element difficulty, why one might have awarded Sotnikova more points than Kim, or at least, have deducted fewer points for Sotnikova's mistake than Kim's mistake.

In a future post, I plan to demonstrate how to conduct a Rasch model, and hopefully at some point a Facets model, maybe even using some figure skating judging data. The holdup is that I'd like to demonstrate it using R, since R is open source and accessible by any of my readers, as opposed to the proprietary software I use at my job (Winsteps for Rasch and Facets for Rasch Facets). I'd also like to do some QC between Winsteps/Facets and R packages, to check for potential inaccuracies in computing results, so that the package(s) I present have been validated first.

Wednesday, May 11, 2016

More About Self Theories: It's Not About Views on Success, But Views on Failure

I blogged recently about self theories, which refer to whether one believes intelligence is innate (fixed) or learned (incremental). As a result of this work, many psychologists have cautioned parents about the type of praise they give their children when they succeed. "You're so smart" implies that intelligence is innate, while "You worked really hard" implies that intelligence/ability can be learned.

However, I learned about a study yesterday suggesting that it isn't parents' views on success, but their views on failure, that shape their children's self theories. The researchers conducted four studies, two with parents only and two with parents and their children. Across all four studies, parents' self theories did not predict children's self theories; instead, parents' views on failure predicted children's self theories:
Our findings indeed show that parents who believe failure is a debilitating experience have children who believe they cannot develop their intelligence. The findings further suggest that this is because these parents react to their children’s failures by focusing more on their children’s ability or performance than on their learning. Taken together, our findings seem to have identified a parental belief that translates into concerns and behaviors that are visible to children and that, in turn, shape children’s own beliefs.
It is easier - although maybe not easy - to be watchful about the kind of compliments one gives to children, but when your child has failed at something and he/she is upset, it's more difficult to think rationally about the right response. In those situations, we tend to just react. The problem is, if that reaction is devastation that your child has failed, it sends a message that your child can do nothing to learn and develop his/her abilities. As a result, children may disengage and instead do something "safer" - something at which they know they will succeed. They may miss the opportunity to grow and challenge themselves - challenges which have many cognitive and emotional benefits.

What this means is, if we want to avoid having our children develop a fixed view of intelligence, we have to change our own view of intelligence - not just in what we say. How do we do that? Unsurprisingly, Carol Dweck, who developed the concept of self theories, has designed an intervention called Mindset, though it is aimed at students. But perhaps another way is to adapt an approach used in cognitive therapy and similar interventions: self-monitoring and mindfulness.

In these approaches, the individual monitors his or her thoughts for instances of negative self-talk. They are not trying to suppress those thoughts - because we know that can backfire. Instead, when they encounter those thoughts, they practice acceptance of any mistakes they made that led to that self-talk and may even come up with rebuttals to those negative thoughts. You can read a little more about mindfulness and negative self-talk here. In fact, as you'll see in the article, there are many common threads between mindfulness and incremental views of intelligence. For instance, the article states:
People with good self-esteem see mistakes and failures as opportunities to learn about themselves. They take a "beginner's mind" approach - putting aside the judgements and conclusions from past behaviour and actions and, instead, thinking about what they've learned from these experiences.
And for a humorous approach to defeating negative self-talk, you could try this:

Tuesday, May 10, 2016

Your Brain on Stress

Speaking of brain activity and responses, a friend shared this video with me - how stress affects your brain:



This video gives a nice overview of many important brain systems and what they do, while talking about the effects of stress. The video also talks briefly about epigenetics, the ways in which the environment can trigger certain genes to be expressed. This means that, even if you have a genetic predisposition to stress and anxiety, a nurturing environment can keep those genes from being expressed.

An important extension of this concept is the biopsychosocial model, which states that biology, psychology, and social environment combine to determine health across one's lifespan.


Experience can change the brain, though your brain becomes less plastic (changeable) as you age. This is why a small child may recover from a head injury that would be fatal to an adult. The brain is able to rewire itself, especially prior to the age of 6. And while the brain changes discussed in the video are real, they represent a worst-case scenario of stress response. If you experience normal amounts of stress or only occasional instances of high stress, you'll probably be fine. But if you experience high chronic stress, you'll want to do something to cope with that - whether it be talk therapy, lifestyle changes to minimize stress, and/or medications for anxiety.

Thursday, April 21, 2016

R is for (the Theory of) Reasoned Action

I've talked a lot this month about groups, how they are formed, and how they influence us. But a big part of social psychology, especially its current cognitive focus, is on attitudes and how they influence us. And as good social psychologists, we recognize that the formation and influence of attitudes are determined by others and by our perceptions of what they expect from us.

Attitudes are tricky, though. They alone do not shape what we do. In fact, there is a great deal of research on how attitudes are a poor predictor of behavior, known sometimes as the attitude-behavior gap or value-action gap. There are other factors that influence us, that may interact with or even counteract our attitudes. Instead, various forces including attitudes shape what is known as behavioral intention - what we intend to do in certain situations. This intention is then used to predict the behavior itself, recognizing that situational forces may exert their influence between intention and behavior.

Two social psychologists, Fishbein and Ajzen (pronounced Ay-zen), developed the Theory of Reasoned Action to predict behavioral intention, and in turn behavior, with two factors: attitudes and norms. Attitudes can vary in strength - from very important to not important - and evaluation - positive to negative. Norms can also range from very broad, such as societal norms, to more specific, such as norms within your social group. Within that norm factor, there are two subconcepts: normative beliefs (what we think others expect of us) and motivation to comply (that is, do we want to conform or be different?). If we draw this model, it would look something like this:


Not long after publishing on this model, Ajzen decided to build on this theory to improve its predictive power. Thus, the Theory of Planned Behavior was born. This new theory adds one additional component to the old theory: perceived behavioral control. This concept was influenced by self-efficacy theory, and represents a person's confidence in his/her ability to engage in the behavior in question. Perceived behavioral control is influenced by control beliefs, or beliefs about the factors that may help or hinder carrying out the behavior. Each of these three factors not only influences behavioral intention, they can also influence each other. For instance, your own attitude about something can color your judgment of what others think. The degree of control you believe you have over the behavior can influence your attitude. And so on.

When Ajzen drew the model, it looked like this:


Because psychologists recognize that perception can be biased, he also included a box for "actual behavioral control." What we think may not be accurate, and what is actually true may still influence us, even if we fail to notice the truth. Humans are skilled at self-deception.
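For readers who like to see how this gets tested in practice, here's a minimal sketch in R with simulated data and a plain regression - one common way researchers operationalize the theory, not anything Ajzen himself prescribes:

# Simulated survey-style data: attitude, subjective norm, and perceived
# behavioral control, each predicting behavioral intention
set.seed(42)
n <- 200
tpb <- data.frame(
  attitude = rnorm(n),  # evaluation of the behavior (positive to negative)
  norm     = rnorm(n),  # perceived social pressure to perform the behavior
  pbc      = rnorm(n)   # perceived behavioral control
)
tpb$intention <- with(tpb, 0.5 * attitude + 0.3 * norm + 0.4 * pbc + rnorm(n, sd = 0.5))

fit <- lm(intention ~ attitude + norm + pbc, data = tpb)
summary(fit)  # each coefficient estimates that component's unique contribution to intention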

One important thing to keep in mind if you're trying to predict behavior from attitudes is that specific attitudes are more predictive than general attitudes. Asking someone their general attitude toward the legal system will be far less predictive of how they vote as a juror than their attitude about a specific case. But even when you measure a specific attitude, you may not get the behavior you expect. For my dissertation research, I studied pretrial publicity - information seen in the media before a case goes to trial - and its influence on verdicts. Pretrial publicity is an interesting area of research, especially because no one has really found a good theory to explain it. That is, we know it biases people, but when we try to apply a theory to it, the study still finds pretrial publicity effects but often fails to confirm the theory.

I decided to apply attitudes to the study - very specific attitudes. That is, I hypothesized that pretrial publicity information is only biasing if a person holds a specific attitude that the piece of information is indicative of guilt. To put it more simply with one of the pieces of information I used in my study: finding out a person confessed is only biasing if you believe that only guilty people confess. I gave participants news stories with one of four pieces of information: confession, resisting arrest, prior record, or no biasing information (control condition).

Then I told them they would be reading a case and rendering a verdict, but first, I asked them to complete a measure of attitudes. These measures are sometimes used during a process known as voir dire, in which potential jurors are questioned to determine if they should be added to the jury. Embedded in this measure were questions about the specific pieces of information. They read the case, and selected a verdict.

The problem is that, like so many other studies before, I found pretrial publicity effects, but attitudes were often unrelated. Even people who didn't believe confession was indicative of guilt were more likely to select guilty when they heard that information pretrial. I was able to apply some different theories to the results, ones related to thought suppression and psychological reactance, concepts I've blogged about before. But I was quite disappointed that I still couldn't fully explain what I was seeing.
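If you're curious what testing that hypothesis can look like, here's a rough sketch in R with simulated data (not my actual dissertation analysis): the publicity-by-attitude interaction is the term that should matter if the hypothesis holds, and a publicity main effect without that interaction is essentially the pattern I kept finding.

# Simulated mock-juror data: publicity condition, the matching specific attitude
# (e.g., "only guilty people confess"), and a guilty/not-guilty verdict
set.seed(1)
n <- 400
jury <- data.frame(
  publicity = factor(sample(c("control", "confession", "prior_record", "resisting_arrest"),
                            n, replace = TRUE)),
  attitude  = rnorm(n)
)
# Built-in effect: publicity raises guilty verdicts regardless of attitude
jury$guilty <- rbinom(n, 1, plogis(-0.5 + 0.4 * (jury$publicity != "control")))

fit <- glm(guilty ~ publicity * attitude, data = jury, family = binomial)
summary(fit)  # the publicity:attitude terms are the test of the specific-attitude hypothesis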

Like I said, attitudes are tricky.

Tuesday, March 1, 2016

The Province of Men: Gender Bias Research on the First Day of Women's History Month

March is Women's History Month (read more about it here). Perhaps for this reason, one of my psychology listservs sent out several links to recent research on gender disparities.

Two of the studies used freely available data to examine differences in perceptions of the quality of women's work and the value of products sold by women.

The first study (summary and full preprint) examined open source code on GitHub, and found that code written by women was more likely to be approved by users than code written by men, but only when gender of the coder was hidden. Specifically, the approval rate was 78.6% for "blind" code written by women (compared to 74.6% for code written by men) but was only 62.5% for code written by women when gender was specified on the user's profile.

The second study found that products sold on eBay by women received fewer bids and lower final prices than the same products sold by men. Unlike GitHub, gender is not available on a seller's profile, but a supplemental study by the researchers found that people could identify gender of the seller based on copy in the product posting.

Another piece examined the way women are perceived when they seek out positions of power or promotions/raises. It begins with Hillary Clinton as an example case (though, as the author is quick to point out, a unique one). When Clinton stepped down as Secretary of State, her approval rating was 69%. Contrast that with current sentiment about Clinton, including mistrust and the perception that she is loud and angry. The author argues that approval of Clinton drops when she seeks a new position and "soars" after we've seen her do the job. It's the difference between assertive men being called a "boss" and assertive women being called "bossy" - one is a compliment, the other a harsh insult.
As I've blogged about before, personal characteristics like gender can be used to interpret behavior and evaluate an individual, particularly when you don't have other important information to make that evaluation (such as expertise necessary to determine what "good" looks like, or personal knowledge of the individual that outweighs these details). And any evaluation using these characteristics will draw upon stereotypes about how a member of that group is "supposed" to act. Correcting for these stereotypes is liable to be difficult, if they are firmly ingrained, and suppressing this information can result in a stronger reliance on that information (see a previous blog post about thought suppression).

So what do we do with this information? The first study, on GitHub submissions, would suggest that one correction for gender bias is simply to remove gender from the equation, by blinding evaluators to this information. Of course, this 1) isn't feasible in many cases, such as in politics, and 2) does nothing to change stereotypes about gender and expertise. We need strong women. We need women who excel in stereotypically male fields. And we need to change the way we evaluate and interpret women's behavior. Unfortunately, there's no quick way to do that, and like many cultural changes there will be some growing pains.

The only way is to keep pressing forward and pushing boundaries whenever possible. There is a compliance technique in social psychology and business that springs to mind: the foot-in-the-door technique, which involves getting a person to agree to a large request through smaller, incremental requests. As the individual goes along with the small requests, which grow larger over time, they may experience a change in attitude through processes like cognitive dissonance. Each small push toward large-scale social change can bring about attitude shifts. One could argue that any policy change (women's suffrage, civil rights, changes to marriage laws) is obtained through small requests that grow over time. But for a change this large, no single person can carry the burden.

The big question, of course, is what will happen in the 2016 Presidential Election. I certainly would never tell anyone who to vote for, and would not suggest one vote for Hillary because she is a woman. But neither should people continue to negatively evaluate a candidate simply because she is a woman.

Tuesday, February 16, 2016

Race, Stereotypes, and Implicit Attitudes

It’s no secret that I love social psychology. I’m constantly fascinated by new findings, as well as the classics, that help us to understand human behavior. At the same time, some findings show the dark side of human behavior, and show that even the most inclusive, open-minded person may possess attitudes they would not consciously endorse.

A recent article published in Psychological Science made its way into my inbox last week. The article discusses two studies. In the first study, participants were first exposed to a prime (a photo of an African-American or Caucasian boy, approximately 5 years old), then shown an object they had to categorize as a weapon or a toy. This was done 12 times, with 6 photos of African-American boys and 6 photos of Caucasian boys. Afterward, participants rated the age and race of the faces, as well as how threatening each face seemed. Participants identified weapons more quickly after exposure to African-American faces, and identified toys more quickly after exposure to Caucasian faces. A second experiment was identical to the first, except that photos of adult men were also included. For photos of adult men, images of tools replaced images of toys. Participants identified weapons more quickly after Black primes, and identified tools more quickly after White primes.

True, this is just one recent study. But the finding, and the methods, have long been established. Early work by Gordon Allport, who in a sense wrote the bible on prejudice, actually started as the study of rumor. His 1947 study with Leo Postman involved a drawing of a well-dressed African-American man speaking with a Caucasian man who is holding a razor.


Though the results of the study are often misreported, in some trials, where participants were asked to identify the individuals involved before recalling the events, a little over half of participants misremembered the razor as being in the African-American man’s hand.

Allport continued in this line of work, postulating that prejudice originates from people’s need to generate categories in order to quickly understand others and navigate the social world. In fact, placing groups of people in mental “buckets” along with certain traits and characteristics is how stereotypes get started to begin with.

Even people who don’t necessarily believe stereotypes are true are aware of stereotypes about certain groups, and these stereotypes can be automatically activated in the presence of group members. This research was pioneered by social psychologist Patricia Devine, who established that stereotype activation is automatic and that it takes conscious effort to downplay those stereotypes and keep them from influencing our behavior. (Read the original paper here.) Recently, Devine also suggested that ‘gaydar’ is actually the use of stereotypes to infer a person’s sexual orientation.

Shortly after Devine’s work, Anthony Greenwald and Mahzarin Banaji established the distinction between explicit attitudes (attitudes you consciously hold) and implicit attitudes (nonconscious attitudes manifested as automatic associations). Implicit attitudes are generally measured through reaction time, in a task very similar to the study described above. Project Implicit, operated through Harvard University, offers multiple Implicit Association Tests (or IATs) that measure nonconscious attitudes about a variety of groups - everything from race and gender to political parties and age groups.

Which brings us to where we are today. The important thing about Todd, Thiem, and Neel’s study is that it demonstrates not only that people recognize weapons more quickly when they are associated with African Americans rather than Caucasians, but that this effect holds even with 5-year-old children. Obviously there are many important implications of this research. The question, then, is what do we do about it?

Saturday, August 8, 2015

Don't Think About That: Chicken Sandwiches, White Bears, and Coping with Unwanted Thoughts

For one of my recent blog posts, I wrote about inspiration and touched upon some of the nonconscious processes that influence our behavior. It seems a common misperception that, outside of clinical psychology, psychologists don't believe in the existence of nonconscious processes.

This may be thanks in part to the strong influence of early behaviorists, who could only study behavior that was outwardly expressed (rather than "internal" behaviors, such as thought), and therefore took the radical perspective that activities within the brain are irrelevant, and perhaps even nonexistent. (The founder of radical behaviorism, B.F. Skinner, who I've blogged about many times, once likened cognitive psychology to creationism.) However, nonconscious processes are key concepts throughout psychology, because so many influences on people occur outside of their awareness.

I also talked in that post about when and where I get some of my best ideas - at night, often following a dream. Another place is in my car. This is not a good place to write down an idea necessarily, but I've recently begun using my voice-to-text feature on my phone to jot down thoughts, even full paragraphs, for whatever I'm writing. Recently, after having a bit of writer's block at my work computer, I was able to finish a paper I was working on during my drive home.

Today, as I was listening to the radio, I heard a commercial for Wendy's spicy chicken sandwich. While this may seem a strange inspiration for a blog post, I really enjoyed the announcer insisting that we not think about that tasty sandwich, because it reminded me of a couple of social psychological concepts I used in my dissertation research: thought suppression and ironic processes. These two concepts basically amount to the same thing with one key difference: conscious versus unconscious.

The basic premise of these two ideas is that being instructed not to think about something makes you think about it more. For example, to use one of the first examples in this research, try not to think about a white bear.

Or sex. Or a white bear having sex. (That was my intro psych professor's favorite example, so you can thank him for that image. Thanks, Dr. Flaherty!)

Or my favorite example: Don't look down. Really. Don't do it. People say this all the time, at least in movies, and the first thing the character does after hearing it is look down.

For example...


So why does this happen? Why does being told - or even telling yourself - to ignore something make you think about it more? As I said before, there are two potential explanations.

Ironic process theory states that when people are told to ignore certain information, two types of mental processes are activated. The first, a conscious process, aims at reaching the desired mental state by focusing on what the person has been told to think about (e.g., looking up). The second, an unconscious process, monitors for the undesired mental state by watching for thoughts about what the person has been asked to ignore (e.g., looking down). In trying not to think about something, the person ends up thinking about it more, without even realizing it.

Thought suppression occurs when a person actively (consciously) tries to suppress a particular thought (e.g., do not think about looking down), causing the thought to become more prominent, especially if the information elicits strong emotions. In both of these cases, the individual is trying to avoid thinking about something, but ironic processes are unconscious while thought suppression is conscious.

These effects obviously have significance beyond "not looking down" or trying not to think about a tasty sandwich. They explain any number of intrusive thoughts: Worry. Fixating on life problems. Obsessing over an ex-boyfriend. Telling yourself not to think about it can often backfire. And if the issue is something that makes you very upset, the effect is even stronger.

So what works? There are a variety of methods, and which works best depends on the nature of the thoughts. If the thoughts are about a problem, think about whether it can be solved. If so, work on a plan for how to solve it and focus your mental energy on that. Then focus on carrying out the plan.

If there is no solution, or the potential solution is not adaptive (e.g., dealing with obsessive thoughts about an ex by trying to get back together), the focus should instead be on coping and lessening the strong emotions associated with the thoughts. Distractions that take up mental energy in a positive way also might help. Talking to someone, especially a psychologist or social worker, can help you work through your feelings and identify patterns that make the thoughts more likely to occur.

More serious problems with unwanted thoughts might be a sign of an anxiety disorder, and talking to a psychologist or social worker, as well as potentially a psychiatrist for anxiety medications, might be a good idea. The indicator of whether something is a serious mental health issue is if it interferes with daily life. If the intrusive thoughts keep you from doing your job, paying attention in school, or maintaining relationships with friends and family, that's a sign that you should seek help.

Everybody experiences unwanted thoughts. Some are merely annoying, some are more serious. And it takes a lot more than telling yourself, or someone else, not to think about those things to stop them. Talk to someone. Start a blog. Or maybe just let yourself have that chicken sandwich.

Thoughtfully yours,
~Sara

P.S. - I used these theories in dealing with ignoring inadmissible evidence in my dissertation. You can read more about it in my dissertation, available here.