With that in mind comes a well-timed article in Perspectives on Psychological Science, which asks, "What Constitutes Strong Psychological Science?" The problem the author brings up is one many researchers know well: the trade-off between doing "sexy," cutting-edge research, which may lead to insignificant or, worse yet, incorrect results, and working on "safer," established research topics, which leads to more accurate but less surprising results. He proposes a third option that falls somewhere between the two:
Science is a pluralistic endeavor that should not be forced into the corset of one specific format. If science is to flourish and to achieve progress, there must be room for competing theories, methods, and different conceptions of what science is about. Symbiotic collaboration must be possible between theory-driven and phenomenon-driven research. There is no reason to disqualify or downgrade properly conducted research of any particular type.

Basically, we should continue exploring new topics of study while also conducting research that is theory-driven. That is, use established theory and principles to generate hypotheses about novel phenomena, or test old principles and theories in new situations and applications. This gives the research a solid footing, by drawing on prior research on that theory or principle, while still leaving room for exploration.
However, for science to grow and to unfold its potential in the future, it is essential to recognize the chances and limitations of distinct types of research and to deal with many challenges in theorizing and logic of science—beyond superficial issues of data analysis. No statistical analysis can be better than the design of a study, and no research design can be better than the rationale of the underlying theory.
The future growth of psychological science calls for a change in the value hierarchy from statistics to research design and theorizing. For research to flourish and to enable strong scientific inferences, in addition to surprising and inspiring discoveries and reputable methods and models, it is essential to take the diagnosticity of empirical hypothesis tests and the a priori likelihood of underlying theories into account.
I agree completely that a lot of research has been conducted without a theoretical grounding. Research on one of my favorite topics, pretrial publicity, has mostly been atheoretical. But when researchers, including myself, have tried to apply a particular theory to understand pretrial publicity effects, the results don't conform to the theory, even though we still see a negative impact of pretrial publicity. This put me in a really uncomfortable position when I tried to publish a meta-analysis of my results: reviewers told me I needed to do more with theory (like include some) and perhaps use the aggregated data to test a particular theory or set of theories. I understood their criticisms, because a theoretical basis is something this topic really needs. At the same time, when you do a meta-analysis, you're at the mercy of what previous researchers did and the type of data they collected, which differs across studies, sometimes dramatically. That makes it really difficult to test a theory with all (or even part) of your data.
This is the main reason my meta-analysis STILL isn't published, almost 7 years after I finished it.