Wednesday, April 13, 2016

K is for Justin Kruger

Justin Kruger is a social psychologist who currently serves as a professor of marketing at New York University's Stern School of Business. His research interests include the use of heuristics (which I'll be blogging about in the near future) and egocentrism in perspective taking. But some of his most interesting research was conducted while he was a graduate student at Cornell University, working with David Dunning. That research is on overconfidence in self-assessment, and its central finding has become known as the Dunning-Kruger effect.

Specifically, the Dunning-Kruger effect has to do with rating one's own competence. The rating could be done for any skill or ability; in the original article on the topic (which you can read here), they assessed humor, logical reasoning, and English grammar. In addition to taking an objective test, participants were asked to rate their own ability and how well they thought they did on that test.

They found that people at the lowest level of actual ability overestimated their ability, rating themselves as highly competent on the subject being assessed. In addition, people with the highest level of ability tended to underestimate their ability. In fact, when they charted objective ability and self-assessed ability together, it looked something like this:

[Chart: self-assessed ability (blue line) and actual test score (red line), plotted by actual-ability group, from lowest to highest performers]

The red line shows how people actually performed (on the objective test). This shows a clear, linear trend; people with low ability did poorly on the test, and people with high ability did well on the test. (Note: This is to be expected, because the "actual test score" line uses the same data that was used to assign people to ability groups. The actual test line is kind of redundant, but is included here to really drive home what the Dunning-Kruger effect looks like.)

Now look at the blue line, which shows how well people thought they did. The low performers thought they did well - better than people who are slightly below average and slightly above average. The high performers also thought they did well, though not as well as they actually did. The most accurate assessment came from the slightly above average group, but even they slightly underestimated their ability.
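(A side note for the programming-inclined: if you'd like to recreate a chart with this general shape, here's a quick sketch in Python using matplotlib. The numbers are ones I made up purely to mirror the pattern described above - the lowest group overestimating, the highest group underestimating - and are not the actual values from Kruger and Dunning's study.)

# Illustrative sketch of the pattern described above.
# NOTE: these percentile values are invented to mimic the shape of the chart
# (low performers overestimating, high performers slightly underestimating);
# they are NOT the real numbers from Kruger and Dunning's study.
import matplotlib.pyplot as plt

groups = ["Lowest", "Below average", "Above average", "Highest"]  # actual-ability groups
actual_score = [12, 37, 62, 87]        # red line: actual test performance (percentile)
perceived_score = [65, 58, 60, 75]     # blue line: self-assessed performance (percentile)

plt.plot(groups, actual_score, color="red", marker="o", label="Actual test score")
plt.plot(groups, perceived_score, color="blue", marker="o", label="Perceived ability")
plt.xlabel("Actual ability group")
plt.ylabel("Percentile")
plt.ylim(0, 100)
plt.legend()
plt.show()

Tinkering with those made-up numbers is a handy way to see how the gap between the blue and red lines at each end produces the classic Dunning-Kruger picture.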

Why does this occur? The issue comes down to what we call "metacognition" - essentially, thinking about or being aware of how we think. As I blogged about before with social comparison, people are motivated to evaluate themselves, but they need something to which they can compare. In the case of self-assessing one's own ability, the comparison is what we think good performance looks like. People at the lowest level of ability lack the metacognitive skills to know what good performance looks like, and so they can't accurately assess themselves. To put it simply: they don't know how much they don't know.

When you first encounter a subject you know nothing about, you have no idea what to expect and probably have no idea how much there is to know. So you underestimate how much you need to learn to become an expert, meaning you overestimate how close you are to expert level. (Do you know how many times people have told me that, because they've taken introductory psychology, they too are an expert in the subject?) As you learn more, you acquire knowledge and skills, but you also get a more accurate picture of how much more there is to know, so your assessment of your abilities goes down. That is, when you have moderate competence in a topic, you know a lot, but you also know how much you don't know.

Once again, this finding has some important real-world applications. The first that springs to mind is in job interviews, where people are constantly asked to assess their own abilities, but are rarely (at least in my field) given any objective test to demonstrate those abilities. This is perhaps one of many reasons why job interviews are generally not valid predictors of actual job performance - but that's a post for another day.
