Sunday, September 24, 2017

Statistics Sunday: What is a Content Validation Study?

I've been a bit behind on blogging this week because we're starting up a content validation study for multiple exams at work. A content validation study is done to ensure the topics on a measure are relevant to the subject the measure is supposed to assess - basically, we want to make sure we have all the content we should have (relevant content) and none of the content we shouldn't have (irrelevant content). For instance, if you were developing the new version of the SAT, you'd want to make sure that everything on the quantitative portion is relevant to the domain (math) and that, taken together, the items cover all the aspects of math that are important for a person going from high school to college.

For certification and licensing exams, the focus is on public safety: what tasks and knowledge does this professional need in order to protect the public from harm? That question helps narrow down the potential content. From there, we have many different approaches to finding out which topics are important.

The first approach is bringing in experts: people who have contributed to the knowledge base in the field, perhaps as educators or researchers, or people who have worked in the field for a very long time. There are many ways to get information from them. You could interview them one-on-one or hold focus groups. You could use one of the more formal consensus-building approaches, like a Delphi panel. Or you could bring your experts in at different stages to shape and refine information obtained from another source.

Another potential way is to collect information on how people working in the field spend their time. This approach is often known as job analysis. Once again, there are many ways you can collect that information. You can shadow and observe people as they work, doing a modified version of a time-motion study. You could conduct interviews or focus groups with people working in the field. Or you could field a large survey, asking people to rate how frequently they perform a certain task and/or how essential it is to do that task correctly.
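
To make that last survey approach concrete, here's a small sketch (in Python, with entirely made-up tasks, raters, and rating scales - nothing from an actual job analysis) of one common way frequency and criticality ratings get combined into a task importance index, which can then suggest how heavily each content area is weighted on an exam.

```python
# Hypothetical job-analysis ratings: several practitioners rate each task
# on frequency (1 = rarely, 5 = daily) and criticality (1 = errors are
# harmless, 5 = errors could seriously harm the public). All values made up.
from statistics import mean

ratings = {
    "Verify patient identity": {"frequency": [5, 5, 4, 5], "criticality": [5, 5, 5, 4]},
    "Calibrate monitoring equipment": {"frequency": [2, 3, 2, 3], "criticality": [4, 5, 4, 4]},
    "File quarterly reports": {"frequency": [1, 1, 2, 1], "criticality": [2, 1, 2, 2]},
}

# One common (but not the only) weighting rule: importance = mean frequency * mean criticality.
importance = {task: mean(r["frequency"]) * mean(r["criticality"]) for task, r in ratings.items()}

total = sum(importance.values())
for task, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    # Express each task's importance as a suggested share of exam content.
    print(f"{task}: importance = {score:.1f}, suggested blueprint weight = {score / total:.0%}")
```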

A related approach is to examine written materials, such as job descriptions, to see what sorts of things a person is expected to know or do as part of the job.

Of course, content validation studies are conducted for a variety of measures, not just exams. When I worked on a project at the VA to develop a measure of the comfort and tolerability of filtering respirator masks, we performed a multi-stage content validation study using many of the approaches listed above. First, we checked the literature to see what research had been done on comfort (or rather, discomfort) with these masks. We found that studies had shown people experienced things like facial sweating and heat buildup, along with some more extreme reactions like shortness of breath and claustrophobia. We created a list of everything we found in the literature and wrote open-ended questions about those topics. Then we used these questions to conduct three focus groups with healthcare workers who had to wear these masks as part of their jobs - basically, anyone who works with patients in airborne isolation.

These results were used to develop a final list of symptoms and reactions people experienced as a result of wearing these masks, and we started writing questions about them. We brought in more people at different stages of the draft to look at what we had, give their thoughts on the rating scales we used, and tell us whether the measure covered all the relevant topics (that is, were we missing anything important, or did we have topics that didn't fit?). All of these approaches help to maximize the validity of the measure.
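
The post doesn't say how those "does this topic fit?" judgments were summarized, but one standard way to quantify expert relevance ratings is Lawshe's content validity ratio (CVR). Here is a minimal sketch with invented items and vote counts, just to show the arithmetic.

```python
# Minimal sketch of Lawshe's content validity ratio (CVR) for expert
# "is this item essential/relevant?" judgments. Items and votes are invented.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """CVR = (n_e - N/2) / (N/2); ranges from -1 (no one says essential) to +1 (everyone does)."""
    half = n_panelists / 2
    return (n_essential - half) / half

panel_votes = {  # number of experts (out of 10) calling each draft item essential
    "Facial sweating": 10,
    "Feeling of heat buildup": 9,
    "Preferred mask color": 2,
}

for item, n_essential in panel_votes.items():
    print(f"{item}: CVR = {content_validity_ratio(n_essential, n_panelists=10):+.2f}")
# Items with low or negative CVR are candidates for revision or removal.
```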

This is an aspect of psychometrics that isn't really discussed - the importance of having a good background in qualitative research methods. Conducting focus groups and interviews well takes experience, and being able to take narratives from multiple people and distill them down to key topics can be challenging. A knowledge of survey design is also important. There's certainly a methodological side to being a psychometrician - something I hope to blog about more in the future!

2 comments:

  1. Very important point. Howard Wainer said, pithily, "A test developed by content experts alone measures nothing well. A test developed by psychometricians alone measures nothing, well." The qualitative research done in focus groups, think-aloud studies, and the like is utterly essential to developing good content that can be modeled and then to sorting through the anomalies that are inevitably found. I'm always amazed at how many people seem to go right to the modeling, though.

  2. Thanks! I like that quote! And it's nice to hear from others who understand the importance of qualitative research. I've experienced a lot of snobbery about it - not as much in the more applied fields, but still some even there. I've seen so many researchers or surveyors latch onto a new topic and attempt to study it quantitatively before there's any kind of knowledge base that would allow them to construct good closed-ended questions and measures. Qualitative research can make so many contributions in its own right, but it's especially useful for zeroing in on topics that can then be studied quantitatively. I wish more graduate schools would give students the opportunity to be exposed to these methods.
