Friday, December 30, 2011

"That's Not Fair": Sushi, Notions of Justice, and Student Grades

Since human beings first developed the ability to speak, one of the most frequently uttered phrases has to be “Not fair!” Fairness is such a basic concept, even small children (and people who just behave like small children) discuss it at length. Like many topics I discuss on this blog, notions of what is fair and unfair are largely shaped by our experiences. In fact, even the idea of “fair” is a social one. There’s really no way to divorce fairness from the social context - it is a core component in all social interactions. Even so, people have different ideas of what is (and is not) fair. I was thinking about this the other day, as my husband and I were sharing a sushi roll.

This very special roll - dubbed the 2012 roll (Happy New Year, by the way!) - contained yellowtail, asparagus, “special” seaweed paper, and various sauces, and each of the 8 pieces was topped with one of 4 kinds of fish roe: masago, red tobiko, black tobiko (my favorite), and green tobiko. I figured: 4 kinds of roe, 8 pieces - enough for us to each have 4, one with each kind of roe. So when my husband started picking up pieces without regard to the fact that he had just grabbed a second red one, my first thought was, “Hey, not fair!”

And right after I think this - and say, “Dude, you already had a red one.” - I stop and consider, “Why is this idea of equality so important to me? It’s just sushi.”

While different people may evaluate justice differently depending on their perspective, research shows that there is some consensus in how those evaluations are made. There are two (possibly three) overarching perspectives that people might use at different times, and that map onto different theoretical frameworks: distributive justice, procedural justice, and (possibly) interactional justice.

J. Stacy Adams’s equity theory, an application of social exchange theory that inspired some of the earliest research on distributive justice, states that people evaluate the fairness of an outcome by creating a ratio of outcomes to inputs (basically, how much I got in return relative to how much work I had to do), then comparing that ratio to the ratio of another person’s outcomes to inputs. If the evaluator’s ratio is equal to the comparison ratio, then the outcome is fair. Adams noted, however, that this process is still subjective, and that many biases can influence the values in one’s own ratio and in the comparison other’s ratio, especially because we may have really skewed ideas of how much work another person puts in.
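
In symbols (my own shorthand, not notation taken from any particular paper), the comparison looks roughly like this, with O standing for outcomes and I for inputs:

```latex
% A minimal sketch of the equity comparison described above (notation is mine)
\frac{O_{\text{self}}}{I_{\text{self}}}
\quad \text{vs.} \quad
\frac{O_{\text{other}}}{I_{\text{other}}}
```

When the two ratios match, the outcome feels fair; when my ratio comes out smaller than my husband's, it feels like under-reward ("Hey, not fair!"), and when it comes out larger, like over-reward.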

Find a funnier cartoon?!  I don't have time for anything but Google Image search.

In general, distributive justice deals with what types of distributions of outcomes will be perceived to be fair. There are three different rules that can be applied when distributing outcomes among parties. Fairness of the outcome is determined by the rule applied. The first is equality, in which each party receives an equal share. The second is equity, where an individual party’s share is based on the amount of input from that party; this is sometimes referred to as the merit rule, and is based on social exchange theory. Finally, the last rule is need, where share is based on whether the party has a deficit or has been slighted in some other distribution.
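
To make the three rules concrete, here is a toy sketch in Python (the function names and numbers are my own illustration, not anything from the justice literature itself):

```python
# Toy illustration of the three distribution rules described above.
# All names and numbers are hypothetical.

def equality_rule(total, n_parties):
    """Equality: every party receives the same share."""
    return [total / n_parties] * n_parties

def equity_rule(total, inputs):
    """Equity (the 'merit' rule): shares are proportional to each party's input."""
    total_input = sum(inputs)
    return [total * i / total_input for i in inputs]

def need_rule(total, deficits):
    """Need: shares are proportional to each party's deficit or prior shortfall."""
    total_deficit = sum(deficits)
    return [total * d / total_deficit for d in deficits]

# Eight pieces of sushi, two people:
print(equality_rule(8, 2))        # [4.0, 4.0] - group harmony
print(equity_rule(8, [60, 40]))   # [4.8, 3.2] - if one person contributed more
print(need_rule(8, [1, 3]))       # [2.0, 6.0] - if one person was shorted last time
```

The point isn't the arithmetic, of course - it's that the same 8 pieces can be divided "fairly" in very different ways depending on which rule the people involved think applies.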

The preferred rule is influenced by many things, including the goals of the distribution. Equality is often chosen when group harmony is the goal. Equity rules are preferred when the goal is to maximize contributions. Need rules are used when group welfare is a concern, or when resources are limited. The context of the distribution can also have an influence. If outcomes are distributed publicly, the equality rule is usually preferred, while outcomes distributed privately often lead to use of the equity rule. The degree to which the perceiver contributes (or believes he contributes) can also influence preference; high contributors prefer equity and low contributors prefer equality.

Distributive justice was the major justice construct until the mid-1970s, when procedural justice was introduced. Fans of procedural justice argue that people prefer fair procedures because they believe fair procedures will lead to fair outcomes, and that as long as people believe the process of allocating resources was fair, they’ll be fine with the outcome. Any teachers out there can probably tell you this is not always the case.

"I showed up.  That should be worth at least a B, right?"
In fact, not all researchers agree that procedural justice is more important in evaluating justice. Hegtvedt (2006 - an excellent book chapter on justice frameworks, which you can read here) argues that procedural justice may only appear to be more highly valued because 1) procedures are easily interpreted and 2) individuals may lack the information necessary to compare outcomes across group members. When outcome information is available, people will focus more on that information.

Some people have identified another type of justice called interactional justice. Interactional justice is made up of two parts: Interpersonal justice is the degree of respect and dignity demonstrated in the procedure, and informational justice is the sharing of information on process and the distribution of outcomes. Bies and Shapiro (1987 - abstract here), for example, refer to the journal peer review process as an example of how lack of interactional justice can influence whether the outcome is perceived to be fair. Such processes are often lengthy and the responses from reviewers, condescending (see previous post). Some people call the process, and the outcome, unfair, while others do not. The authors argue that the difference in responses could be due to the explanations offered by editors; if editors provide a good reason for the delay and reviewers’ responses, authors may believe they have been treated fairly.  Of course, the reason that some argue there are only two justice perspectives is because they believe interactional justice is simply a subdivision of procedural justice.

Anyway, my husband and I ended up evenly dividing the sushi, though he told me later he would have happily traded two black tobiko pieces for two red. Bartering, hmm? Another post, another day.

Thoughtfully yours,
~Sara

Thursday, November 24, 2011

On Thanksgiving and Gratitude

Every year on Thanksgiving, we are called to give thanks for the good things we have... and of course, gorge on lots of food. I'll resist the urge to write about overeating and focus instead on what the holiday is really about: gratitude.

Though the field of psychology has historically focused on maladaptive behavior, the expansion of various subfields beyond clinical psychology has led to the study of a wide range of behaviors, both good and bad, and to a focus not only on the things that make us mentally ill, but on the things that make us healthy, happy, and fulfilled. The study of gratitude is one area studied by so-called positive psychologists.

There are certainly individual differences in ability to feel gratitude; some people are simply more grateful than others. (You can find out more about your "trait gratitude" by taking this measure). But social situations, like Thanksgiving, can also influence your minute-to-minute levels of gratitude (or "state gratitude").

Gratitude, unsurprisingly, is strongly associated with psychological well-being, happiness, and life satisfaction (find two full articles about this here and here). Feeling gratitude reduces stress and promotes positive coping; these benefits are observed even when people are randomly assigned to an intervention meant to increase their gratitude (read one such experiment here), meaning they can be reaped by anyone, not just people who are "naturally grateful".

Being the target of gratitude is also beneficial. Being told "thank you" makes a person more likely to repeat the behavior in the future, probably because it functions as a reward (and as I've said before, if a behavior is rewarded, it's more likely to occur again). So even if you feel like someone is "just doing their job", saying "thank you" can make him or her feel more motivated to repeat that behavior in the future and will likely improve your future interactions with that person, as well.

So keep feeling that gratitude, today and every day - it's good for you. Happy Thanksgiving everyone!

Thoughtfully yours,
Sara

Wednesday, November 16, 2011

Your Brain on Smells: Memory, Emotion, and Scent

In my approximately 30 years on Earth, I have developed many allergies. Some I've had since the beginning (e.g., lactose intolerance), others I discovered much later (e.g., aspartame, better known by the brand name NutraSweet). While I would love to explore what the heck is up with all these crazy allergies, I'm instead writing about what happened as a result of my latest allergy discovery. I recently learned that I'm allergic to an ingredient in a product I use pretty regularly (for the sake of brevity, I won't go into detail); this ingredient is so commonly used in this product that to get a version free of this stuff, I had to go to Whole Foods.

First of all, never go grocery shopping hungry. I've been told this before, but had to break my rule this time because of scheduling constraints. Second - and this rule is even more important than the first - never go to Whole Foods hungry - ever! Going to my regular grocery store hungry is bad enough; everything looks so appealing and tasty. Whole Foods is something else. Not only is the store very visually appealing, it smells how I think Heaven will smell. When you walk through produce, you smell the vegetables. The fish smells like fish (the good, fresh kind - the way fish is supposed to smell). The cheese section... need I go on?

Not only did I want to eat everything in sight, I savored the smells so much that I think I fell in love. Yes, I might have fallen in love with Whole Foods.

This, of course, got me thinking about psychology. But then, everything makes me think of psychology, so perhaps we should be more concerned if I walked out of Whole Foods thinking nothing more than, "I'm in love."

Our brains are fascinating. I really mean it. Our brains are just about the coolest invention ever. Not only are they highly efficient processing machines (which definitely make important, but predictable, errors), but many of their systems are also interconnected in really amazing ways. The connection among smells, memory, and emotions is one example.

To very briefly summarize, the lowest parts of our brains are the parts that developed (evolutionarily) first. They handle the basic functions: breathing, sleeping/waking, etc. These very basic functions are handled by parts of the brain directly connected to the brain stem. As you get farther up in the brain and away from the brain stem, you get to the higher-functioning systems that developed last. The olfactory bulb, which is involved in the perception of smells, sits on the underside of the brain, close to the nose - one of the first systems to develop, then, but slightly higher up the chain than breathing.

The olfactory bulb is the yellow structure above the nasal cavity.

Because of the location of the olfactory bulb, it is closely tied into the limbic system, a region in the middle of your brain that contains (among other structures) the hippocampus (involved in storage of memories) and the amygdala (involved in emotion) - the reward pathway I discussed in my very first blog post resides in this region.

It should come as no surprise then that emotions, memory, and smells are closely related, and that stimulation of one of these systems (such as the one for memory) can activate another system (such as emotion). Certainly, memories elicit emotions (you remember an event that made you happy, and you feel happy again), and emotions can elicit memories.

But what about smells? Ever smell something and suddenly find yourself thinking of an event from childhood? Pumpkin pie, turkey, certain candies - these all remind me of holidays at home and feeling happy. Certain flowers, particularly those in my bridal bouquet, remind me of my wedding day.

Which is probably why I felt this strong feeling of love. As I was entering Whole Foods, I smelled the exact flowers from my bouquet. And of course, being a foodie, the other fantastic food smells certainly gave me something to savor. In the words of Jim Gaffigan, "I like food... a lot." All of these wonderful emotions, memories, and smells combined to make me think I love Whole Foods.

Wait, you mean I'm not actually in love with Whole Foods? What am I going to do with all these love poems?!

Thoughtfully yours,
Sara

Friday, November 4, 2011

On Publishing, Perishing, and Post-Docing: A Reaction to Diederik Stapel's Confession

One reason I started this blog was as an outlet for my writing. I've always loved writing, and often considered it for a career (in those fleeting moments when I was really proud of something I had written and thought, "Yeah, I can do this forever"). I was constantly penning short stories, creating characters and writing notes to my friends in which they played a prominent role (or sometimes were the authors of the notes themselves). I've written many plays: one-acts, two-acts; I even had the outline of a three-act modern tragedy that I still think of going back to - my Citizen Kane or Death of a Salesman (yes, I know I'm being overly dramatic: as a former theatre person, I have a flair for drama, and as a psychology person, I'm painfully self-aware of that and all my other traits).

Of course, I changed my major in the middle of my first semester at college, from theatre to psychology, not realizing that, if I thought getting published as a fiction writer was tough, it was nothing compared to getting published as a psychology researcher. Publish or perish is the expression in my field, and it is accurate. Getting the best jobs, getting research funding, it all depends on having a strong publication record. And with more people earning higher degrees now, there's even more competition. This is one reason the number of PhDs going into post-doc positions has also increased recently; grad school alone is no longer enough to prepare most people for the most attractive research and academic positions.

My number one goal in my post-doc is to publish as much as I possibly can. I even submitted a paper today. But I can't rest on my laurels, because I've got 5 other papers in various stages of preparation. Though my most recent reviews may still sting (and I'm not alone - there's actually a group on Facebook devoted to Reviewer 2, often the most matter-of-fact and even rude of the group), I can't let them traumatize me for too long, because there are more studies to perform, more data to analyze, more papers to write.

That's why, when I read an article in the New York Times about a prominent psychology researcher who admitted that he massaged data, made up findings, and even wrote up studies that were never actually performed, and published it all in prominent journals, I was a bit annoyed. Am I bitter that while I was dealing with snide reviewers insulting my intelligence, research methods knowledge, and mother, this guy was fabricating data, falsifying methodology, and just plain making whole studies up (and being rewarded for it, even if the journals never meant to reward fraud)? In a word: yes. But no matter how tough the publishing world was, doing what this guy did was never even an option. It's not that I thought this sort of thing doesn't happen; we all know it does, just as we know there are students who hire people to take the SATs or write their theses for them.

I know I'm not the only one who can say that this wouldn't be one of my answers to the difficulty of publishing in this field, and it's not because of a lack of creativity. Whenever we write research proposals, we already have to write the introduction/background and methodology sections; we sometimes have to write an expected results section. Make that "expected" part disappear, add some statistics, illustrative quotes, whatever, then finish with a discussion/conclusion and voila! Made-up study. And if you're in a field or at an institution where it's normal for someone to conduct and write up a study all by him- or herself, who will ever find out?

Well, apparently someone did, because this guy was caught and confessed, and the whole thing was written up in the New York Times. You can perhaps understand his motivation, and there are surely countless other researchers who have done the same thing and never got caught. And if you're a bit sly about it, your chances of getting caught will likely go down further. So what makes the people who would never do such a thing different?

Anyone who has taken an introductory philosophy class - or who has seen the movie Election - can tell you the difference between morals and ethics. For those who fall in neither of those groups: Morals are notions about what is right and what is wrong. Ethics often refers to the moral code of a particular group, and it sometimes is used to describe what is considered right and wrong at someone's job or within a certain field. That is, if we say a study was conducted ethically, we mean generally that it was performed in a way to minimize unnecessary harm, but more specifically, we mean that an overseeing body examined it and decided it abided by the rules established by some even higher-up overseeing body. Psychological ethics clearly say that falsifying data is wrong; it's unambiguously stated. Stapel can't plead ignorance here.

Sorry, my moral compass appears to be broken today.  I'll have to get back to you tomorrow.

But not everyone avoids doing something because it's wrong. People are at different stages in their moral development; for some, the possibility of getting caught is the main deterrent. One of the most well-known theorists on moral reasoning is Kohlberg, who began developing a taxonomy of six developmental stages in his doctoral work at the University of Chicago. The first two stages apply to children: in the first stage, people are motivated by seeking pleasure and avoiding punishment, and determine morality by what an action gets them in return. Similarly, stage 2 individuals are driven by self-interest and by actions that further their own goals, needs, etc.; these people behave morally toward others when it also benefits them.

As we move into adolescence and adulthood, we also move into stages 3 and 4. In stage 3, people begin fulfilling societal roles, and behave in ways that conform to others' expectations; it seems the motivating principle here is they, as in "what would they think?" In stage 4, morality is based on legality. Finally, some lucky few move to stages 5 and 6, which Kohlberg considered the highest levels of morality. These individuals are no longer motivated by pleasing others, what is legal/illegal, or even self-interest; instead, they develop universal principles of what is right and wrong, and seek to enforce those principles, even if it means breaking the law or sacrificing their own needs.

But perhaps what it really comes down to is why one became a scientist at all. I like to think I went into this field because I was good at it, but then there are other things I'm good at (perhaps things I'm even better at than this), some that I could have potentially built a career around. I find the field to be challenging, but once again, there are other interesting and challenging fields I could have pursued. As cheesy as it sounds, I really want to make the world a better place and I see my field as one approach to reaching that goal. I'm sure Diederik Stapel had similar reasons for going into this field. Somewhere along the way, that motivation got lost, or at least overpowered by the drive to publish (or perish).

How can we keep people from getting to this point? How can we reward scientific integrity, even if it means fewer publications and a less attractive CV? And most importantly, how can we verify a researcher's findings are valid?

Thoughtfully yours,
Sara

Saturday, October 22, 2011

Still Too Pretty to Write this Blog Post: Gender and the STEM Fields Revisited

Just a quick post to revisit a topic I've covered before. One of my past blog posts was about two articles, one covering a controversial t-shirt marketed to young girls and the other discussing a study of men's and women's spatial ability in two cultures.

I just read an article about women in science that also provides some support that women are just as capable as men if given the right environment in which to thrive. It's interesting, though, that the professor in charge of the lab discussed in the article worries that cultivating an all-women lab (even if by accident) may be as negative as the "old boys" labs of the past.

Some research has found that single-gender classrooms are actually beneficial for both male and female students. Of course, at what point should integration happen (because it will have to happen eventually, unless you plan on the workplace also being divided)? And are there any long-term negative consequences associated with single-gender classes? Does it make it difficult when the student finally encounters a member of the opposite gender in an academic setting? Or do these students, because of the lack of variability in gender in their classrooms, never learn that gender might be related to academic skills?

It seems, though, that Professor Harbron has every reason to be concerned. After all, even though you could argue, "Male students just don't seem to be interested in joining her lab, so why should we force them?", that argument has been used for a while to rationalize doing nothing to deal with many female students' lack of interest in the STEM fields.

I'm all about encouraging people to "follow your dreams", but at some point, we have to recognize the powerful outside forces that can influence those dreams. As someone who discovered a love of math later in life, I wish I had had someone to help me with my struggles and push me to keep trying. In fact, it seems to me that the best way to encourage students to follow their dreams is to get them to try everything and hold off on deciding what they want to do as a career until it is absolutely necessary for them to decide.

Yes, I know that sounds kind of counter-intuitive, but hear me out. In many other countries, students are tested early on to discover what they're good at. At some point, educators determine what Joe Student is good at, and begin training Joe in that discipline. Sure, our system is not as structured as that. Even if Joe is good at a certain thing, Joe can choose to go into another discipline all on his own. Still, once Joe has decided what he likes, we direct him toward activities and classes that will get him to his goal. And, if we think Joe is making a bad choice, we may try to direct him toward something else. But, if instead, we give Joe a taste of all his options without influencing him toward one field over another, and keep doing that until it's time for him to decide, who knows?

You may be saying, "We already do that." But do we? If Jane Student expresses an interest in math, do we encourage her with the same vigor as we do Joe? Do we place equal value on all the different options students have, or do we make casual statements that direct students to one option over another (by suggesting one option is better than others)? If we can get rid of preconceived notions about who is suited for a certain field (and who is not), we can create an environment where students thrive. Perhaps Professor Harbron is right that her lab is no more ideal a set-up than the all-male labs from before. But by examining her lab, and other educational environments, maybe we can discover the best approach.

Thoughtfully yours,
Sara

Sunday, October 9, 2011

The Need to Personalize: Why Consumer Data is More Important Now than Ever Before

As a long-time researcher, my answer to many questions is, "What do the data say?" I consider myself to be a very empirical person, so having data to support views or my approach to life is very important to me. Even in parts of my life where no data are available, I continue asking questions. And, like most people, I constantly observe other people and draw inferences from their behavior. So when I read about some of the cutting edge work the Obama campaign is doing with supporter data, I wondered, "Why aren't more politicians doing this?" and more importantly, "Why aren't more people doing this?"

I'll be the first to say that too much personalization is not necessarily a good thing. For one, I'm really not a fan of design-your-own-major programs and would be happy to go into the "whys" of that sometime. But when it comes to marketing or informing people about causes they can join, personalization is an excellent idea. In fact, it's the logical continuation of what market researchers have been doing for years.

When a company creates a new product, designs a new ad campaign, etc., they want to get consumer perspectives. They do this through things like focus groups, where a group of similar people are brought in to try out a new product or view a new ad and discuss their thoughts as a group (I also frequently use focus groups in my research - you can get a lot of really useful information and they're fun to do!), and survey research, where people may (as an example) hear a new product slogan and rate it on a scale.

Market researchers also often collect demographic information from their participants - things like age, gender, race and ethnicity, and education - to see how these factors relate to responses to the product. This gives some basic information on who is likely to buy your product, and what approaches those groups respond to. A company that wants to appeal to many demographic groups may develop more than one ad campaign and put certain ads in places likely to be seen by certain groups. If you want to see some basic personalized marketing in action, grab a couple of magazines, say a fashion magazine and a sports magazine. Take a look at the ads in each - the ads in the fashion magazine may be for different products than the ads in the sports magazine. Not only that, you may notice that even ads for a product found in both of the magazines are different in terms of color scheme, layout, and copy. You'll probably even notice different people featured in the ads.

The same is true for advertising during certain shows. Market researchers know what kinds of things their target demographic likes to watch on television and will buy ad space during that time.

Of course, I call this "basic" because it's not really personalized to one specific person; it's aimed at a specific group who have some feature or features in common. But advances in technology have made it even easier to gather information about a specific person, and in some cases, deliver that personalized advertising directly to that one individual. Google has been doing this for years. Facebook is also doing more and more of this targeted marketing. Using data from people's search terms or status updates, specific ads are selected from a database and displayed to that person.
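
As a toy illustration of what that matching might look like under the hood (this is purely a hypothetical sketch - the ad names, keywords, and function are mine, not how Google or Facebook actually implement anything):

```python
# Hypothetical sketch: pick the ad whose keywords best overlap with the words
# a user has recently searched for or posted. Real ad systems are far more
# sophisticated; this just illustrates the basic matching idea.
ads = {
    "running_shoes":  {"run", "marathon", "training", "5k"},
    "sushi_delivery": {"sushi", "dinner", "tobiko", "takeout"},
    "stats_software": {"data", "regression", "survey", "analysis"},
}

def pick_ad(user_terms):
    """Return the ad id with the greatest keyword overlap (ties broken arbitrarily)."""
    user_words = {term.lower() for term in user_terms}
    return max(ads, key=lambda ad_id: len(ads[ad_id] & user_words))

print(pick_ad(["training", "for", "a", "marathon"]))  # -> "running_shoes"
```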

Why is this personalization such a good idea? People process and respond to things that are personally relevant more quickly. Research shows, for instance, that people show stronger preferences for the letters in their own names, probably because these letters are more familiar and therefore more fluent (easier to process - discussed in a previous blog post). When we're feeling especially miserly with our cognitive resources and are operating on auto-pilot, such highly personalized information can still "get in" and be noticed and processed by the brain.

Personally relevant information also appeals to our sense-of-self, our identity (also discussed in a previous blog post). We view the world through the lens of our personal experiences and details; some people may be better at considering another person's viewpoint than others, but we can never be separate from the self, so our view of the world is always going to be self-centered (and I use that term in a descriptive, rather than judgmental, sense).

Even in theories developed in fields like psychology, we recognize that the perspective of the theorist carries a lot of weight (this is why following the scientific method - developing testable and falsifiable hypotheses and gathering empirical data - is so important; it's also one reason why theories are built from many studies on a subject, hopefully performed by more than one individual, and no single study is the end-all, be-all on the topic).

I remember reading a quote in college by a great psychologist about Freud; after much searching, I could not uncover the quote or the source, but the person (I want to say Gordon Allport, who met Freud briefly when he was 22 and became disillusioned with psychoanalysis as a result) essentially asked of Freud's theory, "What kind of little boy would want to be so possessive of his mother?" - that is, he suggested that Freud's theory, specifically its emphasis on the Oedipus complex, said more about Freud himself than about normal human development.

These days, individuals are doing things with computers that were once only reserved for the nerdiest of super-nerds sitting in front of a computer the size of a small house.

And if we leave it running all night, we could have our checkbook balanced by tomorrow morning! Who has the punch cards?

The people who are embracing today's technology to personalize content are the ones who will be sticking around as the market becomes more and more saturated. That's why I would argue that this kind of personalization is so important - there are almost 7 billion people on this planet, nearly all of whom will make their livelihood off of selling something (whether that something is concrete, like a product, or abstract, like a skill). As the population continues to grow, getting even a minuscule percentage of that total to see what you have to offer and show an interest in buying it is going to take some serious data-crunching ability.

If you sell a product, any kind of product, you should be collecting data on your customers and using their information to make important decisions. And if you've got any kind of computational and statistical know-how, work those skills, because you are going to be sorely needed as the market continues moving in this direction.

True, some people are just born to sell things - they can convince you that you need this product, that your very life depends on having this thing. We can't all be Steve Jobs, walking into a room and presenting the product with such flair and resonance that we all suddenly wonder how our lives were ever complete before the iPod. (Years from now, when people look at the products Jobs was responsible for and think, "Oh, a personal music player, how quaint", they're still going to be watching and dissecting his presentations to find that magic formula.  If only it were that easy.)

And perhaps part of Jobs's genius was that he could do some of this data-crunching in his head, examining how people were responding to his presentation in real time and making subtle shifts to bring the message home and get people on board. Few people possess that ability.  But for the rest of us, we can perhaps get by with a little help from our data.

Tuesday, October 4, 2011

Whatever They Offer You, Don't Feed the Trolls

You may have noticed that I talk a lot about the media on this blog. The media is one of my main research interests - one that I've retained despite respecializing as part of my post-doc. I find it fascinating. Media information is everywhere and, as people become more and more connected through the Internet and mobile devices, its influence is only likely to grow. Though media research has been conducted since the early 20th century, one area that has really taken off is research on Internet media. This is not only because the Internet is becoming people's main source of information, but also because the Internet is inherently different from other forms of media and is constantly in flux.

Even with reality television taking off like it has, it's not easy to get onto television. Movies, music, and similar forms of media are not as easy to get into either. You need things like talent, good looks, and connections (unless you're Shia LaBeouf - in that case, your celebrity is inexplicable). The Internet, however, is one big free-for-all. Anyone can get online and create a webpage, post a music video on YouTube, start a blog ;). Thanks to the Web 2.0 movement, the tools are available to allow anyone, regardless of technical know-how, to get his or her message out there. Because of the ease with which individuals can add content to the ever-growing World Wide Web, the Internet is constantly changing. New information is becoming available, and new tools are being created to allow individuals even more control over content.

Obviously, there are many aspects of the Internet that are worthy of further exploration, but today, I'd like to write about an Internet phenomenon that has been around probably as long as the Internet itself, and is only becoming worse thanks to Web 2.0: trolling.

They may look cute and cuddly, but it's best to ignore them.

Last month, BBC News featured a story about trolling and a few cases in which people were arrested and jailed for trolling. In these cases, the trolling was really over-the-top bad: for example, a young man posting really thoughtless remarks on a memorial website for a young woman who was killed. Still, websites are cracking down on trolling in a variety of ways, such as by requiring comments to be approved before appearing on the site. Some argue that simply requiring people to register should be sufficient, because then people are no longer anonymous.

The argument is that people troll because they are "deindividuated" in a place like the Internet: they can shed their usual identity and adopt a new persona, which research suggests can lead to bullying and outbursts of violence. This is the phenomenon behind "mob mentality", where people in a large crowd can begin engaging in antisocial behavior, such as vandalism, physical assault, etc. So take away the opportunity to hide one's identity, and problem solved, right?

Yes, I tend to ask questions right before I'm about to argue the opposite. I'm totally predictable. :)

Let's be honest, other than my friends who read this blog (and perhaps my entire readership is made up of my friends, but hey, that makes you guys great friends :D), do you honestly know me? I'm not anonymous here; you can find out my name, my general location, my education and background. I drop autobiographical statements here and there. Still, your entire experience with me is online. I could be totally different here than I am in real life.

So what's to stop me from adopting a new personality entirely for my online interactions, one that differs markedly from the "real" me? And what's to stop me from adopting the persona of a thoughtless jerk who trolls message boards trying to get a rise out of people? (This is just hypothetical, BTW; I have never, say, implied a singer sucked in the comments on their YouTube video… or anything like that.)  Honestly, even on a site like Facebook, where it's super-easy to figure out who I am (and getting easier each day), I'm more apt to call someone out in a way I would never do in person.

I suppose if someone says something truly awful, requiring registration would make it easy to track them down for disciplinary (and even legal) action. But just like other forms of bullying and harassment, there is always a gray area where the behavior, though repugnant, is not punishable. Even online behavior that has led to a person's death went unpunished for fear that it would lead to a precedent that could make creating false online identities illegal. And as this case showed us, even requiring a person to register doesn't guarantee they are who they say they are. And requiring comments to be approved before appearing leaves too much room for bias; can't the moderator simply choose to accept the comments with which he/she agrees and reject the rest?

Perhaps the issue, then, is not deindividuation, but distance from the target.  Stanley Milgram, who conducted one of the most unethical (aka: most awesome) social psychology experiments ever, found that people were more likely to follow the experimenter's instructions to shock another participant when they were farther away from the person getting shocked.  On the other hand, if people were in the same room as the participant being shocked, they were much less likely to follow the experimenter's orders.

If the issue really is distance from the target, then we'll always have this issue in Internet-mediated communication.  In fact, as people spend more and more time communicating with people via the Internet, the problem is only likely to worsen.  Can we ever get away from the trolls? Other than "not feeding them" - and seriously, DON'T FEED THE TROLLS - what can we do to prevent trolling?

Thoughtfully yours,
Sara