Monday, May 29, 2017

Sara's Week in Psychological Science: Conference Wrap-Up

I'm back from Boston and collecting my thoughts from the conference. I had a great time and made lots of valuable connections. While my poster didn't draw a huge crowd, I had some wonderful conversations with the people who did stop by and with other presenters - quality over quantity. I'm also making some plans for the near future. Stay tuned: there are some big changes on the horizon I'll be announcing, starting later in the week.

In the meantime, I'm revisiting notes from talks I attended. One in particular presented a flip side of a concept I've blogged about a lot: the Dunning-Kruger effect. To refresh your memory, the Dunning-Kruger effect describes the relationship between actual and perceived competence. People who are actually low or high in competence tend to rate their own competence more highly than people with a moderate level of competence do - and this effect has been observed for a variety of skills.

The reason for this effect has to do with knowing what competence looks like: you need a certain level of knowledge about a subject to recognize true competence. People with moderate competence know quite a bit, but they also know how much more there is to learn. People with low competence, on the other hand, don't know enough to recognize what competence looks like - in short, they don't know what they don't know. (In fact, you can read a summary of some of this research here, which I co-authored several years ago with my dissertation director, Linda Heath, and a fellow graduate student, Adam DeHoek.)

The way to counteract this effect is to show people what competence looks like. But one presentation at APS this year showed a negative side effect of this tactic. Todd Rogers from the Harvard Kennedy School presented data collected through Massive Open Online Courses (MOOCs - such as those you'd find listed on Coursera). These courses have high enrollment but also high attrition - for instance, it isn't unusual for a course to enroll 15,000 students but have only 5,000 complete all of the assignments.

Even with 66.7% attrition (10,000 of those 15,000 students dropping out), that's a lot of grading. So MOOCs deal with high enrollment using peer assessment: students are randomly assigned to grade other students' assignments. In his study, Dr. Rogers looked at the effect of the quality of those randomly assigned essays on course completion.
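For the technically inclined, here's a minimal sketch (in Python) of the kind of random peer assignment such a platform might use. The function name, the roster, and the rotation scheme are all my own invention for illustration - I don't know how Coursera or the courses in Dr. Rogers's study actually assign graders - but it captures the basic idea: shuffle the roster and make sure nobody grades their own essay.

```python
import random

def assign_peer_grading(student_ids, essays_per_grader=3, seed=None):
    """Randomly assign each student a few classmates' essays to grade."""
    roster = list(student_ids)
    if essays_per_grader >= len(roster):
        raise ValueError("need more students than essays per grader")
    rng = random.Random(seed)
    rng.shuffle(roster)
    n = len(roster)
    # Rotating the shuffled roster by offsets 1..k guarantees that no
    # student grades their own essay and that every essay is graded
    # exactly k times, so the workload stays balanced.
    assignments = {grader: [] for grader in roster}
    for offset in range(1, essays_per_grader + 1):
        for i, grader in enumerate(roster):
            assignments[grader].append(roster[(i + offset) % n])
    return assignments

# Hypothetical five-student roster, two essays per grader
print(assign_peer_grading(["ana", "ben", "cai", "dee", "eli"],
                          essays_per_grader=2, seed=42))
```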

He found that students who received high-quality essays were significantly less likely to finish the course than students who received low-quality essays. A follow-up experiment, in which participants were randomly assigned to receive multiple high-quality or low-quality essays, confirmed these results. When people are exposed to competence, their self-appraisals go down, mitigating the Dunning-Kruger effect - but now they're also less likely to try. Depending on the skill, this might be the desired outcome, but not always. Usually when you try to get people to make more accurate self-assessments, you aren't trying to make them give up entirely; you want them to accept that they have more to learn.
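For readers who want to see what the underlying comparison looks like, here's a sketch of how one might test whether completion rates differ between the two essay-quality conditions - a chi-square test on a 2x2 table of completion counts. The counts below are placeholders I made up purely for illustration; they are not the study's data.

```python
from scipy.stats import chi2_contingency

# Purely made-up placeholder counts for illustration - NOT the actual
# numbers from Dr. Rogers's study.
# Rows: essay-quality condition; columns: [completed, did not complete].
observed = [
    [1200, 3800],  # received high-quality essays
    [1500, 3500],  # received low-quality essays
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}) = {chi2:.1f}, p = {p:.3g}")
```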

So how can you counteract the Dunning-Kruger effect without also potentially reducing a person's self-efficacy? I'll need to revisit this question sometime, but feel free to share any thoughts you have in the comments below!

In the meantime, I leave you with a photo I took while sightseeing in Boston:
