Introduced in the March 1982 issue of The Atlantic magazine, “Broken Windows” was the brainchild of George L. Kelling, a criminologist, and James Q. Wilson, a political scientist. At its heart was the idea that physical and social disorder – a broken window, a littered sidewalk, public drunkenness – are inextricably linked to criminal behavior. By focusing on repairing the windows, cleaning up the streets, and dissuading crude behavior, Kelling and Wilson suggested, police departments can help to forestall more serious crimes from ever taking shape.

In practice, this approach means more arrests for misdemeanor offenses, as well as tactics like "stop and frisk" aimed at heading off minor crimes. I'm sure you can imagine how this approach to policing can be (and already has been) abused. And a recent report from the New York City Office of the Inspector General suggests broken windows policing doesn't actually do what it's supposed to do:
The analysis examined five years of arrest and crime data in a hunt for some statistical relationship between quality-of-life arrests — those made for such offenses as public urination, disorderly conduct, and drinking alcohol in public — and a reduction in felony crimes. The results were undeniable: “OIG-NYPD’s analysis has found no empirical evidence demonstrating a clear and direct link between an increase in summons and misdemeanor arrest activity and a related drop in felony crime,” the report stated.

The NYPD countered with a study of their own. In fact, many studies have been performed on this topic, with mixed results:
“That’s part of the problem with ‘broken windows’ literature,” said Dan O’Brien, an assistant professor at Northeastern University’s School of Criminology and Criminal Justice. “There’s just so many studies, that people will point to whatever study supports their argument.”

This situation is a great example of the nature of research on any topic. Choose a topic and there are likely multiple studies about it, and there will likely be contradictions (sometimes major) between them. That doesn't mean all of them are wrong - in fact, they may all be right to some degree. Whenever you conduct research on a topic, you have to make choices: which methods to use - for instance, whether to collect real-world data or to simulate the real world in the lab and run an experiment - what to measure and how to measure it, how to analyze the resulting data, and so on. It's impossible to try every possible iteration of study design. So even if you conduct your study in the most rigorous way possible, you might find very different results from another study that was also done rigorously. That doesn't mean either study is wrong: the differences might come down to the choices each one made.
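To make that concrete, here's a minimal sketch in Python - with simulated data, not the OIG's actual numbers, and an analysis far simpler than theirs - showing how two reasonable analytic choices applied to the same arrest and felony series can give different answers:

```python
# A minimal sketch (not the OIG's actual analysis) of hunting for a
# statistical relationship between monthly quality-of-life arrests and
# felony counts. The data are simulated, and the analysis choices below
# are exactly the kind of design decisions that can change the answer.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
months = 60  # five years of monthly data, as in the OIG report

# Simulated series: misdemeanor arrests and felonies that share a slow
# citywide downward trend but are otherwise unrelated to each other.
trend = np.linspace(1.0, 0.8, months)
misdemeanor_arrests = rng.poisson(2000 * trend)
felonies = rng.poisson(500 * trend)

# Choice 1: correlate the raw monthly counts. The shared trend alone
# can produce an apparently strong, "significant" correlation.
r_raw, p_raw = pearsonr(misdemeanor_arrests, felonies)

# Choice 2: correlate month-to-month changes instead, which strips out
# the common trend. Here the apparent relationship typically vanishes.
r_diff, p_diff = pearsonr(np.diff(misdemeanor_arrests), np.diff(felonies))

print(f"raw counts:  r = {r_raw:.2f}, p = {p_raw:.3f}")
print(f"differences: r = {r_diff:.2f}, p = {p_diff:.3f}")
```

The point isn't that one choice is right and the other wrong; it's that a defensible-sounding analysis can manufacture or erase a relationship depending on decisions like these.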
At the same time, some approaches are better than others, though every approach has trade-offs - which means some studies are going to be better (more valid) than others. This is why, when you evaluate a particular study, you have to evaluate it based on its methods, recognizing the trade-offs and the factors that limit how far you can generalize the findings. If you evaluate a study based on its results instead, your judgment ends up being a matter of opinion - whether the findings match what you already think is true, regardless of whether it actually is.
As I always told my research methods students, learning how to evaluate studies is incredibly useful even for non-researchers; otherwise, you're at the mercy of others to tell you what to believe. And that can have real-world consequences, just like broken windows policing.