As I mentioned in a previous blog post, I've accepted a new position outside of the VA. Earlier this week, I started my new job as a Psychometrician at Houghton Mifflin Harcourt. And I realized that I'm exactly where I've always wanted to be.
As a teenager, I became fascinated with tests - most of my exposure was through those stupid personality tests from magazines, which I didn't find nearly as stupid as I probably should have, but I also had the opportunity to take a cognitive ability test (what was once known as an intelligence test) in 4th grade. I thought the concept of solving puzzles to find out more about oneself was amazing.
Flash forward to college, when a fascination with the statistics I encountered in journal articles during General Psychology led me to switch my major from theatre to psychology. But I was even more excited when I discovered that one of the classes I could take was Testing & Measurement, which was all about the various psychological tests: tests for diagnosis (such as the Minnesota Multiphasic Personality Inventory, or MMPI), for examining development (such as the House-Tree-Person task, which literally involves having a child draw a house, a tree, and a person), and for determining cognitive ability (such as the Wechsler Intelligence Scale for Children, or WISC - which, I learned during that class, was the test I took in 4th grade, after I recognized one of the subtests, which involves recreating designs with blocks).
I briefly considered going into clinical psychology, as a person who would administer these psychological tests, but social psychology drew me more strongly. I was in luck, though, when one of the classes I could take in grad school was structural equation modeling, which is frequently used in a test development paradigm called classical test theory - an approach focused on creating a single test that measures a particular concept, then gathering data to demonstrate that it measures what it is supposed to (validity, through comparisons with gold standards where possible, or with similar measures) and does so consistently (reliability, assessed by comparing items from one half of the test to the other - called split-half reliability - or by comparing scores over time - called test-retest reliability). I didn't really do a lot with test development in grad school, unfortunately, but I was able to in my time at VA.
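To make the split-half reliability idea concrete, here's a minimal sketch in Python. The data are simulated and purely hypothetical - 200 people answering 10 items that all tap a single trait - but the two steps shown (correlating odd-item totals with even-item totals, then applying the Spearman-Brown correction) are the standard split-half procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (purely hypothetical): 200 people x 10 items, all driven
# by a single underlying trait plus random noise.
n_people, n_items = 200, 10
trait = rng.normal(size=(n_people, 1))
responses = trait + rng.normal(scale=1.0, size=(n_people, n_items))

# Split-half reliability: correlate odd-item totals with even-item totals.
odd = responses[:, 0::2].sum(axis=1)
even = responses[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]

# Spearman-Brown correction: projects the half-test correlation up to the
# reliability of the full-length test.
reliability = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.3f}, corrected reliability = {reliability:.3f}")
```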
And the last piece of the puzzle was a fellowship I received while at VA, where I was able to receive additional training in a newer area of test development, item response theory, and a related but mathematically different approach, Rasch. In these paradigms, responses to individual items are believed to be determined by two things: the difficulty level of the item (or, in the case of personality tests, the amount of a trait a person needs in order to respond in a certain way) and the ability level of the person (or the amount of the trait they possess). This approach has many advantages over what I learned in grad school, the two biggest being: 1) you can more accurately estimate ability level based on how people respond to the items, and 2) it is no longer necessary to give every person all of the items, or even the exact same set of items. This opens the door for things like computer adaptive testing.
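Here's a short sketch of what that looks like under the Rasch model, where the probability of getting an item right depends only on the gap between the person's ability and the item's difficulty (the ability and difficulty values below are hypothetical, chosen just for illustration):

```python
import numpy as np

def rasch_probability(theta, difficulty):
    """Probability of a correct response under the Rasch model: it depends
    only on the gap between person ability (theta) and item difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# Hypothetical values: three test-takers of low, average, and high ability
# answering an item of average difficulty (b = 0).
for theta in (-1.0, 0.0, 1.0):
    print(f"ability {theta:+.1f} -> P(correct) = {rasch_probability(theta, 0.0):.2f}")
```

Fuller item response theory models add parameters to this formula (such as item discrimination), but the core logic - person ability weighed against item difficulty - stays the same, and it's what lets a computer adaptive test pick the next item based on how someone has answered so far.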
All the tools were there. And what am I doing now? I'll be working on batteries of cognitive ability tests, very similar to the WISC I took as a child. I get to do exactly what fascinated me back then - working with tests - combined with the love of statistics I discovered in college, to really dig into those tests and make sure they work. I'll even be involved in writing the manuals that go along with them! It's a dream 30-some years in the making.