Sunday, March 11, 2018

Statistics Sunday: Factor Analysis and Psychometrics

Last Sunday, I provided an introduction to factor analysis - an approach to measurement modeling. As you can imagine, because of its use in measurement, factor analysis is a tool many psychometricians use during measure development to help establish a measure's validity.

Factor analysis is frequently considered a classical test theory approach, though, rather than an item response theory approach, so it isn't the primary way of checking dimensionality for many of the measures I've worked on. In classical test theory, the focus is on the complete measure as a unit: only by including all of the items can one achieve the reliability and validity established in the measure's development and standardization research. Factor analysis, while providing information on each item's individual contribution to measuring a concept, also places the focus on the complete set of items - and these items only - as the way to tap into the factor, or latent variable.
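To make the one-factor idea concrete, here's a minimal sketch in Python using simulated data and scikit-learn (my choice purely for illustration - the post doesn't tie factor analysis to any particular software): five items all driven by a single latent variable, with the model estimating each item's loading on that factor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                      # the unobserved factor (latent variable)
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])   # each item's true relationship to the factor
items = latent[:, None] * loadings + rng.normal(size=(n, 5)) * 0.5

fa = FactorAnalysis(n_components=1)
fa.fit(items)
print(fa.components_.round(2))  # estimated loadings, one per item
```

Each estimated loading describes one item's contribution, but the factor itself is only identified by the full set of items entered into the model - which is the classical test theory flavor of the approach.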

Item response theory approaches (and for the moment, I'm including Rasch - which I know will ruffle some feathers), on the other hand, consider the relationship of individual items to the concept being measured, and each item has corresponding statistics concerning how (and how well) it measures the concept. The items a person receives can be considered a sample of all potential items on that concept. (And in fact, the items in the item bank can be considered a sample of all potential items, written and unwritten.) Yes, there are stipulations that affect the gestalt of the overall measure a person completes - guidelines about how many items they should respond to in order to adequately measure the concept, and so on - but there are few stipulations about the specific items themselves. It is because of item response theory approaches that we are able to have computer adaptive testing. If we adopted the classical test theory view of a measure as a specific and unchanging combination of items, we could not justify an approach where different people receive different items.
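For readers who like seeing the math, the Rasch model boils down to a single logistic function of the gap between a person's ability and an item's difficulty - which is also what makes adaptive testing possible, since the most informative next item is the one whose difficulty best matches the person's current ability estimate. A quick sketch (the code and variable names are mine, purely for illustration):

```python
import numpy as np

def rasch_probability(theta, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# A person of average ability (theta = 0) facing an easy, a matched, and a hard item:
difficulties = np.array([-2.0, 0.0, 2.0])
print(rasch_probability(0.0, difficulties))  # ~0.88, 0.50, ~0.12

# Adaptive testing picks the item whose difficulty is closest to the ability estimate:
next_item = difficulties[np.argmin(np.abs(difficulties - 0.0))]
```

Notice that each item carries its own difficulty statistic independent of the other items - exactly the per-item focus described above.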

Though factor analysis represents a classical test theory approach, it is still an important one - particularly confirmatory factor analysis, which encourages measure developers to specify a priori where a particular item belongs. For this reason, it isn't unusual to see psychometricians who use Rasch or item response theory include CFA in their analyses. In fact, in some psychometrics consulting work I did some years back, I was asked to do just that. But this approach only makes sense when working with fixed-form measures - it would be difficult to conduct CFA on the item banks used in adaptive testing, since they can contain potentially thousands of items, and no single person would ever complete all of them.

The Rasch program I predominantly use, Winsteps, uses a data-driven technique called principal components analysis (which I'll blog about sometime soon!) to check dimensionality. The purpose in this context is simply to ensure the items being analyzed don't violate a key assumption of Rasch: that the items measure a single concept (i.e., that they are unidimensional). But people can, and often do, ignore these results completely, particularly if they have other evidence supporting the validity and unidimensionality of the concepts included on the measure, such as from a content validation study. In fact, Rasch itself is a measurement model, so anyone engaging in this kind of analysis is already dealing with latent variables, even if they ignore the principal components analysis results. When conducting Rasch, many other statistics are provided that can help weed out bad items that don't fit the model.
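As a rough illustration of what a unidimensionality check looks like (a hedged sketch only - Winsteps actually runs its principal components analysis on the Rasch residuals, whereas this simulated, raw-score version just conveys the general idea of looking for one dominant dimension):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 400
concept = rng.normal(size=n)
# Six items all driven by a single underlying concept, plus noise:
items = concept[:, None] * 0.8 + rng.normal(size=(n, 6)) * 0.4

pca = PCA().fit(items)
print(pca.explained_variance_ratio_.round(2))
# A dominant first component is consistent with unidimensionality;
# a sizable second component would flag a possible extra dimension.
```

If the first component accounts for most of the variance and the remaining components look like noise, the unidimensionality assumption is plausible.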

Look for more posts on this topic (and related topics) in the future!

