The simplest study has two variables: the independent variable (X), which we manipulate, and the dependent variable (Y), the outcome we measure. The simplest independent variable has two levels: experimental (the intervention) and control (where we don't change anything). We compare these two groups to see whether the experimental group differs from the control group.
To use a recent example, if we wanted to study the von Restorff effect, we would give one group a simple list (control) and another group the same list with one unusual item added (experimental). We would then measure memory for the list.
But we don't have to stop at just one independent variable. We could have as many as we would like. So let's say we introduced another variable from a previous post: social facilitation. Half of our participants will complete the task alone (no social facilitation) while the other half will compete with other participants (social facilitation).
When we want to measure the effects of two independent variables, we need to have all possible combinations of those two variables. This is a factorial (also known as a crossed) design. We figure out how many groups we need by multiplying the number of groups for the first variable (X) by the number of groups for the second variable (Z).
For the example I just gave, this would be a 2 X 2 design. The X is pronounced "by." This gives us a total of 4 groups: unique-item list with no social facilitation, simple list with no social facilitation, unique-item list with social facilitation, and simple list with social facilitation. Only by having all possible combinations can we separate out the effects of both variables.
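The "multiply the levels" rule can be sketched in a few lines of code. This is just an illustration; the level names are made up for this example.

```python
from itertools import product

# Levels of each independent variable (names are illustrative)
list_type = ["unique-item", "simple"]   # X: 2 levels
facilitation = ["social", "alone"]      # Z: 2 levels

# A factorial (crossed) design needs every combination of levels,
# so the number of groups is the product of the numbers of levels.
conditions = list(product(list_type, facilitation))

print(len(conditions))   # 2 x 2 = 4 groups
for condition in conditions:
    print(condition)
```

Adding a third two-level variable would simply mean another list in the `product(...)` call, giving 2 x 2 x 2 = 8 groups.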
Not only does this design require more people and often more study materials, it also requires more hypotheses: predictions about how the study will turn out.
We would have one hypothesis for the first independent variable: lists that contain unique items will be more memorable than simple lists. And another for our second independent variable: participants who compete against others will remember more list items than people who do not compete against others.
But we would also have a hypothesis about how the two variables interact. Since we expect both unique lists and social facilitation to improve performance (more items remembered), we might expect people who receive both a unique list and social facilitation to perform best. And at the opposite end of the spectrum, we might expect people who receive a simple list with no social facilitation to perform worst.
We might also think that a unique list alone (no social facilitation) and social facilitation alone (simple list) will produce about the same performance. So we would hypothesize that these two groups will be about the same. We would then run our statistical analysis to see if we detect this specific pattern.
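The predicted pattern can be made concrete with hypothetical cell means. The numbers below are invented purely to illustrate the idea; they are not data from any study.

```python
# Hypothetical mean recall scores (items remembered) for the four groups.
# These numbers are made up to illustrate the predicted pattern.
means = {
    ("simple", "alone"):  5.0,   # predicted to perform worst
    ("unique", "alone"):  7.0,   # about the same as...
    ("simple", "social"): 7.0,   # ...this group
    ("unique", "social"): 9.0,   # predicted to perform best
}

# Main effect of list type: average over the social-facilitation levels
unique_avg = (means[("unique", "alone")] + means[("unique", "social")]) / 2
simple_avg = (means[("simple", "alone")] + means[("simple", "social")]) / 2
print(unique_avg - simple_avg)   # 2.0: unique lists remembered better

# Main effect of social facilitation: average over the list-type levels
social_avg = (means[("simple", "social")] + means[("unique", "social")]) / 2
alone_avg = (means[("simple", "alone")] + means[("unique", "alone")]) / 2
print(social_avg - alone_avg)    # 2.0: competing groups remember more

# Interaction: does the list-type effect differ across facilitation levels?
effect_alone = means[("unique", "alone")] - means[("simple", "alone")]
effect_social = means[("unique", "social")] - means[("simple", "social")]
print(effect_social - effect_alone)  # 0.0 here: a purely additive pattern
```

In this sketch the two effects simply add up, so there is no interaction; if the "both" group remembered far more (or less) than 9.0, the interaction term would be nonzero, and that is the kind of pattern the statistical analysis tests for.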
The great thing about this design is that we don't have to use it with two manipulated variables. We could have one of our variables be a "person" variable: a characteristic about the person we can't manipulate. For example, one variable could be gender. This changes our design from "experimental" to "quasi-experimental."
For my master's thesis, I studied a concept known as stereotype threat, which occurs when a stereotype about a group affects a group member's performance. I looked at how stereotype threat affects women's math performance. So one of my variables was manipulated (stereotype threat) and the other was gender. This is a common design for examining gender differences.