Don’t Believe that Study: The Pitfalls of Psychology Experiments and Mental Health Journalism
We’ve all heard the exciting headlines about sexy new psychology studies like “Rich People Have Less Compassion, Psychology Research Suggests,” or “Make-Up Sex Is Like Cocaine Addiction, Says Clinical Psychologist,” but how much do you, or should you, really believe it all? Much of it is true, but with issues like subject bias, data inaccuracies, and sensationalized reporting, every study should be met with a healthy dose of skepticism.
Problems in Volunteer Population
In many cases, mental health research is university-based, and university-based researchers tend to use university students as subjects. Whether they’re earning course credit or a little extra pocket change, college students are typically the most likely subjects studied in psychology experiments. They solve the problem of finding volunteers for research, but create a different one: college students do not represent the entirety of human behavior. They don’t even represent the entirety of their age group, and they’re at a point in their lives when the anxiety and stress of college may have a huge impact on mental health.
Freshmen’s self-reported emotional well-being has hit all-time lows in recent years, with more students reporting feeling overwhelmed and planning to seek personal counseling. More students are reporting severe psychological problems as well, including thoughts of suicide. Experts are concerned that colleges are not doing enough to support the mental health of students, who are clearly a population at risk for psychological issues. This is not a population that mental health research should be based on, but due to convenience, cost, and even tradition, it so often is.
Other popular volunteer sources include the Internet, classifieds, and hospitals, which may offer more diversity but still present their own bias problems. Subjects who sign up for paid studies typically do so because they are low-income and in need of the money; those recruited in hospitals may already be disproportionately more sick, physically or mentally, than the general population. A bias still exists in these sources, and any bias in subjects can skew the results of mental health research.
For truly accurate results in mental health research, psychologists must use what’s called a “randomized sample.” Randomized samples use an unbiased selection of people to accurately reflect the population at large. It’s what the FDA requires for drug trials, and large corporations that conduct studies are typically required to use them as well. A biased sample of subjects for a drug trial could have disastrous results. But psychologists don’t have to live up to this same standard. Does this mean you can’t trust a study that’s based only on the responses of college students or select parts of the population? Not necessarily, but you should recognize and consider this bias wherever it exists.
Difficulties in Replication and Inaccuracies
Although there are certain standards that are typically followed in research studies, each individual researcher has their own way of conducting research, and these differences in style can make it difficult to replicate studies. Replication is important to any type of study, mental health or otherwise, because it demonstrates that the results were not a fluke, or worse, a product of dishonesty in research. Inability to replicate doesn’t necessarily mean a mental health study is false, but it doesn’t mean the study should be unquestioningly trusted either.
Although many readers of mental health journalism will typically take research results at face value, inaccurate findings aren’t as rare as you might think. Research dishonesty exists: in 2011, an investigative committee revealed that Diederik Stapel, a widely published Dutch psychologist, made a habit of falsifying data and even fabricating entire experiments. Stapel is not alone in his dishonesty. A 2011 survey of more than 2,000 American psychologists revealed that most respondents (70%) admitted to cutting corners in data reporting.
Dishonesty isn’t the biggest problem in inaccurate data, however. In the 2011 survey of American psychologists, only 1% admitted to outright falsifying data. The real problem, experts insist, is statistical sloppiness. University of Amsterdam researchers randomly sampled 281 papers from high-ranking psychology journals for statistical errors. About half of them contained some sort of error, and more troubling, about 15% had an error that changed the findings of the report, typically negating the original hypothesis.
An obvious solution to the problems of dishonesty and self-serving inaccuracies in mental health research is accountability. Dr. Stapel would not have been able to publish decades of false research had he opened his raw data up to others, but he did not. A willingness to share data is believed to be an indicator of quality. A 2011 research study indicated that a reluctance to share data is associated with weaker evidence, meaning it’s very likely that those who do not open their books to others are not confident in their findings, or they may even have something to hide.
All of these issues present a real confidence problem in psychology, a field that already has trouble getting respect from outsiders. The results of studies in psychology journals have been questioned for decades, undermining the work of accountable researchers with responsible data collection practices. Respected researchers in the field have set out to investigate reproducibility in psychology through The Reproducibility Project. In this project, samplings of studies from three prominent psychology journals are selected for replication, presumably proving whether or not the original study was true. This group of researchers is checking the work of other respected researchers, a move that may make some nervous, but ultimately, contributes to the value of psychology with more accurate findings.
A Publication Barrier
Studies that are surprising, sexy, or subversive will typically gather the most attention from publishers and readers. In Nature, experimental psychologist Chris Chambers shares that there’s “an emphasis on ‘slightly freak-show-ish’ results,” those that really stand out as novel or interesting. Eric-Jan Wagenmakers, a mathematical psychologist, says, “There are some experiments that everyone knows don’t replicate, but this knowledge doesn’t get into the literature.” For journals, it’s simply not interesting to share that the results of an exciting study weren’t actually true, and so the replication studies go unpublished.
There’s an entire website devoted to the publication of these replication studies: PsychFileDrawer.org. PsychFileDrawer allows users to upload the results of replication attempts in psychology. The site’s creators did this so that, instead of relying on social networks or water coolers where this information is typically shared, researchers and other interested parties have a central place where they can read the reports on scientific literature in psychology.
The Impact of Research Distortion
PsychFileDrawer, along with the efforts of The Reproducibility Project, seems to be a smart way to reveal and weed out the pitfalls of mental health experiments and reporting. But they don’t do much to stop the unending stream of sensationalized and possibly inaccurate research that’s shared on a regular basis. Exciting new studies published in journals turn into headlines on CNN and the local news. The media does a great job of distorting studies with exaggeration, causation, and sexy, eye-catching psychology that may not present the whole truth.
Journalists are also often guilty of sharing sensationalized headlines that tell a different story than what experts actually have to say, buried deep down in the article where most readers will never go.
Whether you’re a casual reader of the latest studies or a serious psychology researcher, we are all consumers of mental health journalism. Do you know how to evaluate mental health reports, picking out the trustworthy from the incredible? It’s important to consider the source, relying only on studies from reputable publications, and to maintain a healthy degree of skepticism when evaluating the results of a research study. You should also keep in mind that in mental health journalism, it’s all too easy for headlines to become overblown or misleading.
Can psychology studies be trusted? Most likely, but be careful to evaluate their accuracy and trustworthiness. The field of psychology is overwhelmingly populated with respectable researchers who, although they may make the occasional mistake in data collection or reporting, still intend to present
studies that are accurate and reproducible. Psychologists who admit to falsifying reports represent only 1% of the entire profession. But that falsifying 1% could have written the study that changes the way you look at your life, or influences an important part of your psychology career.
Even in reports that are not outright falsified, there are still problems in mental health reporting that can interfere with accuracy and credibility, creating a problem for those who read and study them, and even act upon them. Don’t be so naive as to think that you can believe everything you read in the news, or in scientific journals. Before you put your trust in a study, do your own due diligence: search for replications of the research, consider the source, and find out more about how the study was conducted.