When Evidence-Based Practice Goes Wrong


Balancing Evidence and Experience: Lessons from the STAR*D Scandal

The Tightrope of Objective and Subjective in Psychotherapy

For decades, psychotherapy has walked a tightrope between the worlds of scientific research and clinical practice. On one side, a growing emphasis on evidence-based models promises therapeutic approaches grounded in objective data. On the other, skilled clinicians rely on hard-earned wisdom, theoretical savvy, and a nuanced reading of each client’s unique needs. Binding these worlds together, we find the raw data of real patient outcomes – stories of recovery and struggle that rarely fit neatly into the categories of a research study.

Maintaining this balance is no easy task, especially in an era that exalts quantitative evidence as the gold standard of care. Many well-meaning therapists, in an earnest attempt to be responsible practitioners, cleave to the research literature like scripture. But as any seasoned clinician knows, real therapy is a far messier affair than a randomized controlled trial. Humans are not lab rats, and what works on average in a study population may utterly fail an individual client.

The Specter of Bad Science

Even more concerning, the very research we rely on to guide our work can be flawed, biased, or outright fraudulent. The ghosts of the replication crisis haunt the halls of academia, casting doubt on the reliability of even the most prestigious publications (Open Science Collaboration, 2015). At best, this leads to wasted time and resources chasing dead ends. At worst, it causes direct harm to patients, saddling them with ineffective or even counterproductive treatment.

A stark reminder of these risks recently emerged in the form of the STAR*D scandal. This influential study, published in 2006, appeared to show that nearly 70% of depressed patients would achieve remission if they simply cycled through different antidepressants (Rush et al., 2006). Guided by these findings, countless psychiatrists and therapists dutifully switched their non-responsive clients from one drug to the next, chasing an elusive promise of relief.

Unraveling the STAR*D Scandal

But as a shocking re-analysis has revealed, the STAR*D results were dramatically inflated through a combination of scientific misconduct and questionable research practices (Pigott et al., 2023). Patients who didn’t even meet the study’s criteria for depression were included in the analysis. The definition of treatment response was shifted midway through to juice the numbers. Participants who dropped out were excluded from the final tally, painting a misleadingly rosy picture (Levine, 2024).

When these flaws are corrected, the actual remission rate in STAR*D plummets to a mere 35% – no better than what would be expected from a placebo (Pigott et al., 2023). For 17 years, this single study guided the treatment of millions, despite being essentially a work of fiction. Patients endured round after round of medication trials, suffering debilitating side effects, on the false promise of relief just around the corner (Levine, 2024).
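To make the dropout problem concrete, here is a minimal sketch in Python. The numbers are hypothetical, chosen only to mirror the reported 70% and 35% figures rather than the actual STAR*D data; it simply shows how dropping non-completers from the denominator can double a headline remission rate.

```python
# Minimal sketch (hypothetical numbers, not the actual STAR*D data) of how
# one reporting choice, dropping non-completers from the denominator,
# can inflate a remission rate.

enrolled = 1000      # patients who entered the trial
dropped_out = 500    # left before the final assessment
remitted = 350       # patients meeting the remission criterion

# Intention-to-treat: dropouts stay in the denominator as non-remitters.
itt_rate = remitted / enrolled                         # 0.35

# Completers-only: dropouts silently vanish from the denominator.
completer_rate = remitted / (enrolled - dropped_out)   # 0.70

print(f"intention-to-treat remission: {itt_rate:.0%}")       # 35%
print(f"completers-only remission:    {completer_rate:.0%}")  # 70%
```

Excluding dropouts was only one of the practices the re-analysis identified, but the arithmetic alone shows how far a headline figure can drift from the intention-to-treat reality.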

Follow the Money

How could such a house of cards have stood unchallenged for so long? Part of the answer lies in the cozy relationship between academic psychiatry and the pharmaceutical industry. The lead STAR*D investigators had extensive financial ties to the manufacturers of the very drugs they were testing (Rush et al., 2006). These conflicts of interest, subtly or not so subtly, shape what questions get asked, what outcomes are measured, and what results see the light of day (Angell, 2009).

But there’s a deeper issue at play. As a field, we have come to fetishize a narrow conception of evidence that excludes the lived experience of clinicians and patients. We dismiss practitioner wisdom as “anecdotal” and patient reports as “subjective.” In our zeal to be rigorous and scientific, we end up disconnected from the ground truth of the consulting room (Shedler, 2018).

Ignoring the Wisdom of the Trenches

Nowhere is this disconnect more glaring than in the treatment of trauma and dissociation. For years, trauma therapists and their patients have known that the simplistic, manualized approaches touted by many academics often fail to address the complex realities of healing from profound psychological wounds (van der Kolk, 2014). They have seen firsthand how the neat, linear treatment protocols enshrined in the research literature can fall short in the messy, non-linear process of recovery.

Yet their voices have been largely ignored or dismissed by the psychiatric establishment. The highest-paid researchers and thought leaders, ensconced in their ivory towers, have continued to champion treatment models that look good on paper but break down in practice. They have built elaborate theoretical edifices, complete with arcane diagnostic criteria and one-size-fits-all interventions, that bear little resemblance to the lived realities of trauma survivors.

In the process, they have gaslit a generation of clinicians and patients, telling them that their own perceptions and experiences can’t be trusted if they don’t align with the “evidence base.” They have created a culture of self-doubt and deference to authority that stifles innovation and critical thinking. And they have perpetuated a cycle of ineffective, even harmful treatment that leaves far too many trauma survivors feeling blamed, invalidated, and re-traumatized by the very systems meant to help them.

Redefining Evidence-Based Practice

This isn’t to say we should discard the evidence base entirely, retreating into some kind of therapeutic wild west where anything goes. Research is essential for guiding our work and protecting patients from quackery and incompetence (Lilienfeld et al., 2014). But it cannot be the only voice at the table.

We must expand our definition of evidence to include the rich, qualitative data that emerges in actual clinical practice (Levitt et al., 2018). We need to listen to therapists about what they actually see working, and not working, with real patients. We need to take seriously the outcomes and side effects that clients report, even (perhaps especially) when they diverge from the official study results (Longhofer & Floersch, 2014).

This is particularly crucial for complex, hard-to-study conditions like trauma, dissociation, and personality disorders. Randomized controlled trials, for all their strengths, are ill-equipped to capture the intricate, deeply personal work of healing from these wounds (van der Kolk, 2014). Therapists in the trenches with these populations are generating valuable practice-based evidence every day – we ignore it at our peril.

The Decline of CBT: A Cautionary Tale

The risks of a narrow, dogmatic approach to evidence are not limited to the world of pharmaceuticals. Even psychological treatments can fall prey to a kind of calcification when they become overly standardized and manualized.

Take cognitive-behavioral therapy (CBT), long considered the gold standard of evidence-based practice. A meta-analysis has found that its measured effectiveness has roughly halved since the earliest trials (Johnsen & Friborg, 2015). Some have blamed this on the “dilution effect” of an expanding pool of less competent practitioners. Others point to the waning of the placebo effect as CBT has lost its sheen of novelty.

But a more parsimonious explanation may be that the early success of CBT relied on a diversity of therapeutic techniques that have since been sidelined in the rush to standardization. The pioneers of CBT were often trained in a rich array of approaches, from psychodynamic therapy to Gestalt to humanistic traditions. They drew upon this broad repertoire to flexibly adapt CBT to the needs of each patient.

As the field has become more focused on adhering to strict CBT protocols, that flexibility and creativity have been lost. We’ve ended up with a kind of paint-by-numbers version of CBT, a rote recitation of techniques that may work in the aggregate but leave many individual patients behind. In our quest for a standardized, “pure” form of CBT, we’ve forgotten that its original power came from its integration with other ways of working.

Holding Research in Perspective

None of this lets fraudulent or sloppy researchers off the hook. The STAR*D investigators must be held to account, along with the academic and publishing institutions that enabled them (Levine, 2024). But as the dust settles on this scandal, let it be a reminder to hold research findings in perspective and balance.

The most meaningful and robust evidence base will emerge from the meeting of multiple ways of knowing – quantitative and qualitative, theoretical and applied, nomothetic and idiographic (Safran et al., 2011). It will be grounded in clinical realities, not just rarefied abstractions. It will make space for the wisdom of seasoned practitioners and the lived experiences of patients. And it will approach research with a spirit of humility and skepticism, recognizing that even the most rigorous studies are only approximations of a complex reality.

As therapists, our ultimate allegiance must be to our patients, not to any particular theoretical model or body of research. That means being willing to question received wisdom, to listen to the stories that don’t fit neatly into our preconceived categories, and to remain ever open to being surprised by the messy, unpredictable process of human change. It’s a daunting challenge, but one that lies at the very heart of our work. Let us have the courage to meet it head on.

References

Angell, M. (2009). Drug companies & doctors: A story of corruption. The New York Review of Books, 56(1), 8-12.

Johnsen, T. J., & Friborg, O. (2015). The effects of cognitive behavioral therapy as an anti-depressive treatment is falling: A meta-analysis. Psychological Bulletin, 141(4), 747-768. https://doi.org/10.1037/bul0000015

Levine, B. (2024, January 17). Scientific misconduct and fraud: Psychiatry’s final nail in the antidepressant coffin. CounterPunch. https://www.counterpunch.org/2024/01/17/scientific-misconduct-and-fraud-psychiatrys-final-nail-in-the-antidepressant-coffin/

Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26-46. https://doi.org/10.1037/amp0000151

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2014). Why ineffective psychotherapies appear to work: A taxonomy of causes of spurious therapeutic effectiveness. Perspectives on Psychological Science, 9(4), 355-387. https://doi.org/10.1177/1745691614535216

Longhofer, J., & Floersch, J. (2014). Values in a science of social work: Values-informed research and research-informed values. Research on Social Work Practice, 24(5), 527-534. https://doi.org/10.1177/1049731513511119

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Pigott, E., Leventhal, A., Nierenberg, A., Andrew, J., Sink, K., Jacobson, D., & Baltuch, D. (2023). STAR*D: Antidepressant efficacy deflated by scientific misconduct. BMJ Evidence-Based Medicine. https://ebm.bmj.com/content/early/2023/07/12/bmjebm-2023-112306

Rush, A. J., Trivedi, M. H., Wisniewski, S. R., Nierenberg, A. A., Stewart, J. W., Warden, D., Niederehe, G., Thase, M. E., Lavori, P. W., Lebowitz, B. D., McGrath, P. J., Rosenbaum, J. F., Sackeim, H. A., Kupfer, D. J., Luther, J., & Fava, M. (2006). Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: A STAR*D report. American Journal of Psychiatry, 163(11), 1905-1917. https://doi.org/10.1176/ajp.2006.163.11.1905

Safran, J. D., Abreu, I., Ogilvie, J., & DeMaria, A. (2011). Does psychotherapy research influence the clinical practice of researcher-clinicians? Clinical Psychology: Science and Practice, 18(4), 357-371. https://doi.org/10.1111/j.1468-2850.2011.01267.x

Shedler, J. (2018). Where is the evidence for “evidence-based” therapy? Psychiatric Clinics of North America, 41(2), 319-329. https://doi.org/10.1016/j.psc.2018.02.001

van der Kolk, B. A. (2014). The body keeps the score: Brain, mind, and body in the healing of trauma. Penguin Books.
