On Arrogance and Excellence, On White Coats and White Knights
In 1961, Stanley Milgram conducted an experiment that would fundamentally challenge our understanding of human obedience and moral authority. Participants were instructed by a man in a white coat, an apparent authority figure, to administer what they believed were increasingly harmful electric shocks to another person. The instructions escalated from causing minor discomfort to what participants believed would end the person’s life. Most participants completed the entire sequence. The experiment was ostensibly designed to test whether something like Nazi Germany could happen anywhere, and that became the primary way it was publicized. However, the findings revealed far more complex and disturbing patterns about human nature and institutional authority.
The original study (Milgram, 1963) found that 65% of participants continued to the maximum 450-volt level despite hearing screams of pain and pleas to stop. Later replications and variations revealed additional troubling findings. In variations where participants were asked to shock an animal instead of a human, many refused to harm the animal even though they had been willing to harm a human who was begging them to stop. And when the experiment was replicated in Germany, the very country the study was ostensibly designed to explain, more participants were willing to complete the lethal sequence than in other countries (Mantell, 1971).
Subsequent replications uncovered even more nuanced findings. Burger’s 2009 partial replication found that 70% of participants continued past the 150-volt point where the learner first protests, nearly identical to Milgram’s original findings despite decades of supposed ethical progress. The proximity of the victim mattered significantly: when the learner was in the same room, compliance dropped to 40%, and when participants had to physically place the learner’s hand on a shock plate, only 30% complied (Milgram, 1974). Women showed the same rates of obedience as men, contradicting assumptions about gender differences in aggression. Perhaps most disturbingly, participants who refused to continue often still believed the experimenter had the right to require them to continue; they simply chose to disobey. The Hofling hospital experiment (Hofling et al., 1966) extended these findings to real-world medical settings, where 21 of 22 nurses administered what they believed was a dangerous overdose of medication when ordered by an unknown doctor over the phone.
The Scientific Method and the Soft Sciences
There is an odd phenomenon occurring in psychology right now: the soft sciences somehow seem exempt from the scientific method as long as they hold up the signifiers and symbols of how the scientific method is supposed to work. Critics have described this as practitioners following capitalism-flavored science instead of natural science, but whatever you call it, it is happening. One way it manifests: when a patient or provider tells you that something works, researchers tell you it cannot work.
The scientific method consists of these essential steps: observation, hypothesis formation, prediction, experimentation, analysis, and replication. If somebody who is credentialed and intelligent says that something works, how can you, as somebody who actually follows science, dismiss their observation? Are you actually following these steps? How much does it cost to do a comprehensive study on how some of these novel approaches work, particularly when they are complicated and not reducible to a simple number? When it comes to psychology and soft sciences, how often is our culture actually conducting such research?
The question of how one invents something new in psychology and how that new thing is taken seriously reveals the profit motive embedded in academia, healthcare, and psychology. The process of getting anyone to research something genuinely new requires navigating complex institutional barriers. As Porter (1995) argues in Trust in Numbers, mechanical objectivity emerges not from scientific superiority but from institutional weakness and the need to manage distrust in complex bureaucracies. When you have to control for placebo effects that can reach 30 to 40 percent effectiveness, and when you’re dealing with processes built on other processes in something as complex as the brain, the challenge of proving that something works becomes a barrier to innovation rather than a pathway to truth.
The Evolution from CBT Practitioner to Depth Explorer
When I first started practicing as a psychotherapist, I was deeply insecure that I wouldn’t know enough, so I studied every model of psychotherapy that had ever been written, to my knowledge. This sounds like an exaggeration, but I had four years to do this while working as an outreach social worker, spending 90% of my time in my car, so I listened to audiobooks on literally everything. The soft science, the weird science, the French science. I thought I was a CBT social worker because that was what we were always told in graduate school was the gold standard that everyone had to start with. This was twenty years ago, but that was what was taught at the time.
One of the trends at the time was EMDR, and The Body Keeps the Score was just coming out. I thought EMDR sounded hokey, but I wanted to try it and thought the training might give me a market advantage. To date, EMDR has done nothing for me as a patient. I didn’t see it work for a whole lot of patients either, so I started trying to figure out which types of patients it did work for. Many of them had dissociative experiences relating to trauma or emotion. I saw it work miracles for some of these people and started wondering why. At the time, most researchers thought anyone doing EMDR was either stupid or part of a cult, or assumed that clinical practitioners simply didn’t know how to read research the way informed researchers did.
In my experience, the EMDR clinicians didn’t do themselves any favors. It was often EMDR that had healed them, and to be fair, EMDR can work miracles for people who have been stuck in CBT, DBT, IOPs, and ACT therapy for years without progress. Then suddenly something hits them and they realize that integrating this emotional part is their job, not something to talk about, not something for somebody else to tell them how to do, but something they can do themselves. I’ve seen it work miraculously, but it doesn’t work for most people. The clinicians were usually people who had gotten better through EMDR and become true believers, so they weren’t noticing that 70% of their patients were leaving feeling like it was hokey. In my experience EMDR works for about 30% of people; the other 70% still need help, and we need to recognize that EMDR isn’t providing it.
Researchers continued to find EMDR slightly less or slightly more effective than placebo, and thus clinically useless. Clinicians found it was this miraculous technique they were chasing, sometimes coming off as cultish. The researchers thought the clinicians were stupid because they didn’t know how to read research, and the clinicians thought the researchers were stupid because they weren’t paying attention to what happens when you deal with seriously sick people in the room, as opposed to calling yourself a clinician because you work three or four days a week at a student counseling center, providing CBT and psychoeducation to students who just broke up with a boyfriend.
The Discovery of Something New
When I was doing EMDR, I started noticing that people would stop in certain spots, and that the pupil would sometimes wobble or dilate and contract. Sometimes the eye itself would avoid one of the places on the EMDR tracking line when a patient was trying to follow my fingers. These responses weren’t conscious; they happened before a patient could be aware of them, and people don’t have micro-level control over pupil dilation or over how the eye moves when it is tracking quickly. They were replicable, and I started to stop in the spot where I saw a pupil wobbling or jumping around. When I stopped in these spots, I saw people go into deep and profound states of processing, and patients often started requesting that I do that instead of the normal EMDR protocol. When I did, they experienced rapid resolution and relief.
The EMDR providers were no help; I requested more and more consultations and kept hearing the same thing. They would tell me I needed to do the 15 movements or 25 or whatever, because Francine Shapiro had said it and that was the protocol. But what if you see a DID patient who’s gone into an alter? What if you see a person who has completely decompensated? These protocols weren’t flexible, and the EMDR clinicians were tied to them. The trainers and advanced specialists couldn’t really think outside the box.
Eventually I spoke to a colleague who told me this sounded a lot like brainspotting. I didn’t know what that was, so I bought the book and read it, but it didn’t contain anything I was seeing. So I paid $400 an hour to talk to David Grand, its founder, because I had been a clinician for three months at this point and was seeing things I couldn’t explain that no one was able to help me with. Dr. Grand told me he had been a student of Francine Shapiro, the founder of EMDR, and that he had invented brainspotting when he saw the same thing I did. He encouraged me to get the training so I would understand what it felt like; I was doing it, he said, but didn’t know what it felt like, and that was a missing part. Ten years later, this has remained a foundation of my approach.
I integrate many different types of psychotherapy into my practice, but getting the training is the smallest part. You need to read all the books of the founders and understand their thought process, and most importantly, you need to do the actual therapy yourself. It’s not the “awe” of learning; it’s the “aha” of experiencing. When I got my comparative religion degree, we used to talk about the “awe” being understanding a religion and the “aha” being having it actually speak to you. These rituals and experiences are things people do because they mean something. Understanding them is half of the technique; feeling them is the other half.
Brainspotting lets you target things more surgically than EMDR because it allows you to stop on one spot and let a client go all the way through one part of a memory instead of activating all the little bits of memory and trying to reconsolidate them in the room. But it took Francine Shapiro inventing EMDR for other people to build on her work, even though research was always going to find an approach like that mostly ineffective. Now, thanks to massive meta-analyses and the Veterans Administration, we know that EMDR is effective, and it has been broadly accepted as an effective psychotherapeutic practice. The problem is that it’s already out of date. Brainspotting is better. Emotional Transformation Therapy works better. By the time research got around to validating something invented in 1987, and the validation trickled through colleges so that people would no longer recoil in horror when somebody did something new and weird, that thing was already not the most useful thing we could be doing with patients.
Theodore Porter and the Crisis of Trust in Numbers
This is a bad way for a system to work. You can explain why the system works that way and pretend it’s cautious to move slowly, but that’s not really true. I’m not advocating for people to deviate from science here; I’m advocating for people to adhere to science more closely. What we did with EMDR and what we’ve done with pretty much every modality post-CBT was not scientific. It was a waste of time. My time as a clinician, your time as a patient.
Theodore Porter’s work in Trust in Numbers reveals why this happens. Porter argues that quantification is fundamentally a “technology of distance.” The language of mathematics is highly structured and rule-bound, and reliance on numbers and quantitative manipulation minimizes the need for intimate knowledge and personal trust. Objectivity derives its impetus from cultural contexts where elites are weak, where private negotiation is suspect, and where trust is in short supply.
Porter demonstrates that what we consider “objective” methods in psychology emerged from specific historical conditions rather than representing natural or superior ways of understanding human experience. The drive toward mechanical objectivity arose not as a natural progression of knowledge but as a response to what he calls “low trust networks,” complex social systems where personal relationships and individual judgment could no longer coordinate social action effectively. In smaller, traditional communities, knowledge and authority operated through high trust networks based on personal relationships, local expertise, and shared cultural understanding. People could rely on knowing their doctor, teacher, or leader personally, trusting their character and competence through direct relationship.
However, as societies became increasingly complex with the rise of large-scale institutions, bureaucracies, and hierarchical organizations, these intimate trust relationships became impossible to maintain. When dealing with strangers across vast institutional networks, when coordinating actions among thousands of people who would never meet, when managing resources across geographic distances, the old systems of personal trust and local authority broke down. This was not because personal judgment was inferior, but because the scale and complexity of modern institutions made personal knowledge logistically impossible.
The Misapplication of Institutional Solutions to Human Experience
Porter shows how “objective” statistical methods and standardized procedures emerged as solutions to this crisis of trust in complex systems. When you cannot know your officials personally, when institutions must coordinate the actions of thousands of strangers, when bureaucracies must make decisions affecting populations they will never meet, mechanical rules and quantitative measures become tools for managing distrust in low trust networks rather than expressions of superior knowledge. Objectivity became a way to coordinate action in low trust environments where personal relationships could not bear the weight of social coordination.
Porter’s critique is essential for understanding emotion because emotions are inherently products of high trust, relational knowing: the intimate connection between experiencing subjectivity and understanding the lived world. When psychology attempts to study emotion through the objective, quantitative methods that emerged from low trust institutional networks like the biomedical model itself, it fundamentally misrepresents what emotions actually are. The statistical mindset developed to manage bureaucratic distrust becomes destructive when applied to the intimate realm of human emotional experience.
Emotions emerge from the relationship between the experiencing subject and their world. They are not objects that can be studied independently of the subjective experience of having them. They require the kind of intimate, relational knowledge that Porter shows was displaced by institutional demands for mechanical objectivity. When therapists treat emotions as variables to be measured rather than as meaningful communications emerging from a lived relationship, we import the alienation of low trust networks into the very heart of human psychological understanding.
The Degradation of Emotional Understanding Through CBT
Porter’s analysis helps explain why the increasing dominance of cognitive-behavioral approaches in psychology has actually degraded rather than enhanced our understanding of emotion and other dynamic processes in the psyche. CBT represents the triumph of low trust network thinking in therapy: the belief that standardized protocols, measurable outcomes, and mechanical procedures can replace the intimate, relational knowledge required for genuine emotional understanding. As discussed earlier, the mass adoption of cognitive and behavioral therapies in isolation has had an objectively measurable negative effect on patient outcomes in therapy. In CBT, emotion is recognized, but then managed, labeled and compartmentalized in terms of behavior. It is not seen as a source of authentic wisdom or an indicator that identity should change or intuition should be reclaimed.
When therapy becomes overly focused on objective measurement, standardized protocols, and symptom reductive metrics, it risks reinforcing the very alienation from one’s own subjective experience that often underlies psychological distress. The cognitive-behavioral emphasis on changing thoughts to change feelings treats emotions as problems to be solved through technical procedures rather than as meaningful communications requiring relational understanding. This approach, while appearing scientific, actually represents what Porter would recognize as the bureaucratic displacement of intimate knowledge by institutional control mechanisms.
The focus on evidence-based protocols and diagnostic standardization reflects exactly the kind of low trust thinking that Porter identifies: the assumption that mechanical rules can substitute for the nuanced, relational understanding that human emotional life actually requires. Rather than developing a deeper capacity to work with the mystery and complexity of human emotional experience, the field of psychotherapy has increasingly retreated into the false safety of quantification and standardization that characterizes low trust institutional networks.
The STAR*D Scandal: A Case Study in Research Failure
For decades, psychotherapy has walked a tightrope between the worlds of scientific research and clinical practice. On one side, a growing emphasis on evidence-based models promises therapeutic approaches grounded in objective data. On the other, skilled clinicians rely on hard-earned wisdom, theoretical savvy, and a nuanced reading of each patient’s unique needs. Binding these worlds together, we find the raw data of real patient outcomes, stories of recovery and struggle that rarely fit neatly into the categories of a quantitative research study.
Many well-meaning therapists, in an earnest attempt to be responsible practitioners, cleave to the research literature like scripture. But as any seasoned clinician knows, real therapy is a far messier affair than a randomized controlled trial. Humans are not lab rats, and what works on average in a study population may utterly fail an individual patient. Even more concerning, the very research we rely on to guide our work can be flawed, biased, or outright fraudulent.
The STAR*D study provides a stark reminder of these risks. This influential study, published in 2006 (Rush et al., 2006), appeared to show that nearly 70% of depressed patients would achieve remission if they simply cycled through different antidepressants in combination with cognitive-behavioral therapy. Guided by these findings, countless psychiatrists and therapists dutifully switched their non-responsive patients from one drug to the next, chasing an elusive promise of relief. But as a shocking re-analysis has revealed (Pigott et al., 2023), the STAR*D results were dramatically inflated through a combination of scientific misconduct and questionable research practices.
The forensic re-analysis systematically exposed the extent of these issues, revealing a study built on a foundation of profound methodological flaws. The widely publicized 67% cumulative remission rate was not based on the study’s pre-specified, blinded primary outcome measure, the Hamilton Rating Scale for Depression. Instead, investigators switched to a secondary, unblinded, self-report questionnaire which showed a more favorable result. When the correct primary outcome measure is used and all participants are properly included, the cumulative remission rate is only 35%. This statistical inflation was compounded by other protocol violations, including the exclusion of hundreds of patients who dropped out and the inclusion of over 900 patients who did not meet the study’s minimum depression severity for entry.
Perhaps most damning, the 67% figure refers only to achieving remission at some point during acute treatment and completely obscures the rate of sustained recovery. The re-analysis found that of the original 4,041 patients who entered the trial, only a small fraction achieved lasting positive outcome. When accounting for dropouts and relapses over the one-year follow-up period, a mere 108 patients, just 2.7% of the initial cohort, achieved remission and stayed well without relapsing. For seventeen years, the false promise of the STAR*D findings guided the treatment of millions, subjecting patients to numerous medication trials based on fundamentally unsound research.
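The arithmetic behind that collapse is worth seeing laid out. Here is a minimal sketch, assuming only the headline figures reported by Pigott et al. (2023); the variable names are mine, not the re-analysis authors’:

```python
# Headline figures from the STAR*D re-analysis (Pigott et al., 2023).
# Variable names are illustrative; this is not the authors' analysis code.

initial_cohort = 4041        # patients who entered the trial
published_remission = 0.67   # cumulative remission rate publicized in 2006
protocol_remission = 0.35    # rate using the pre-specified blinded HRSD outcome
stayed_well = 108            # remitted and stayed well over one-year follow-up

sustained_rate = stayed_well / initial_cohort
print(f"Published cumulative remission: {published_remission:.0%}")  # 67%
print(f"Protocol-faithful remission:    {protocol_remission:.0%}")   # 35%
print(f"Remitted and stayed well:       {sustained_rate:.1%}")       # 2.7%
```

The 67% and the 2.7% are not two readings of the same result; they are answers to two different questions, and only the second describes lasting recovery.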
The Problem of Conflicts of Interest
How could such a house of cards have stood unchallenged for so long? Part of the answer lies in the cozy relationship between academic psychiatry and the pharmaceutical industry. The lead STAR*D investigators had extensive financial ties to the manufacturers of the very drugs they were testing. These conflicts of interest, subtly or not so subtly, shape what questions get asked, what outcomes are measured, and what results see the light of day. Research has shown that financial interests can create powerful incentives (Resnik & Shamoo, 2017) to overemphasize or underemphasize research findings, thereby compromising the trustworthiness of the work.
These conflicts of interest were nominally disclosed in the original 2006 publication, but only in the small print at the end of the article. It seems that neither the journal editors, peer reviewers, nor the academic psychiatry establishment saw fit to question whether these commercial entanglements might have influenced the study design, analysis, or interpretation. This lack of scrutiny is especially alarming given how eagerly the field embraced the STAR*D findings. The study was hailed as a landmark, a definitive guide to real-world antidepressant prescribing. Its inflated remission rates were parroted uncritically in countless media reports, continuing medical education courses, and clinical practice guidelines.
The Divided Profession
Academic and private practice clinical psychology have historically been, and largely still are, heading in two widely different directions. That is not good for this profession. It was strange to me that researchers and academics were so hostile to anything that was not “evidence-based,” even when it was neuroscientifically plausible and conceptually related to highly evidence-based practices.
Psychotherapy must take concepts that are empirically valid from historical and current therapeutic models. Therapists must combine these concepts and techniques and innovate on them. That is what therapists have always done, with research as an important part but not as the whole of the process of scientific progress. This is not meant to be an overly academic publication. I think overly defined and scientific writing in the psychotherapy space is the death of many previously interesting concepts. The corporatization of both academia and healthcare has made our conceptions of what therapy is worse. This has cut us off from the history of psychotherapy as a discipline and the philosophy and anthropology that undergird it.
This is the only moment in the history of psychotherapy at which you can find clinicians who seem to believe the absurd proposition that the creation of new models should stop because those models have not been validated yet. How could they be validated before they were created? This is terrifying and unprecedented. We as a profession have mis-educated therapists to believe that only behavior and objectively measurable cognition are the real variables of change.
The Complexity of Research Interpretation
The debate about whether CBT’s effectiveness is declining illustrates these problems perfectly. A 2015 meta-analysis by Johnsen and Friborg found that CBT’s effectiveness appeared to be declining over time. There was speculation about why this might be. Some professionals suggested this was due to the placebo effect of a new therapy decreasing now that CBT is no longer new. Others speculated that when more therapists use a model that has grown, there will be a greater level of overall incompetence. However, it’s important to note that this finding has been contested.
A subsequent re-analysis by Cristea and colleagues in 2017 identified methodological concerns and concluded that the apparent decline may be a spurious finding. This debate highlights the complexity of interpreting psychotherapy research, particularly when comparing studies across different time periods, populations, and implementations. Importantly, these comparisons typically focus on CBT, DBT, and psychodynamic therapies. They don’t examine modern somatic approaches like brainspotting, ETT, or parts-based therapies like Internal Family Systems, which limits the scope of these analyses.
The controversy reveals a profound irony. The push to label formulaic and manualized approaches as “evidence-based” was driven by a researcher-centric view that favored the methodological purity of RCTs, a preference not always shared by clinicians or patients. As Shedler (2018) argues, the demand to exclude non-RCT data inadvertently proves the point made by critics of the EBP movement: by elevating RCTs as the only legitimate form of evidence, the field risks ignoring a wealth of clinical data and creating a definition of “evidence” that does not reflect the complex, comorbid reality of actual clinical practice.
The Real Reason for Any Decline
In my mind, what is more likely than placebo effects or incompetence is that the early effectiveness of CBT relied on all the other skills clinicians of the 1960s and 1970s were trained in. As clinicians trained in psychodynamic therapy, relational therapy, depth psychology, and Adlerian techniques left the profession, pure CBT was left to stand on its own merits. This would explain the completely linear decline in effectiveness found in the 2015 Johnsen and Friborg meta-analysis. Older clinicians retire each year and take with them the skills that are no longer taught in colleges. Any decline in efficacy we are seeing could result from clinicians doing CBT who were taught only cognitive and behavioral models in school. This is my hypothesis based on observing the field over time. The decline in broader psychotherapeutic training is well-documented; by the mid-2010s, over half of U.S. psychiatrists no longer practiced any psychotherapy at all (Tadmon & Olfson, 2022), a stark contrast to previous generations.
At this point in my career as a therapist, I found that the majority of the highest-earning and most influential therapists with long waitlists were not doing what most researchers would consider solid evidence-based practice. Even more odd, many of the most popular models like brainspotting or somatic therapy were under-researched and ignored by mainstream psychology academics. This was despite the fact that these models had been around for years and were clinically popular with patients.
As Shedler (2018) argues, “evidence-based therapy” has become a de facto code word for manualized therapy, most often brief CBT, with assertions of scientific superiority that are not supported by research findings. The overreliance on RCTs creates a significant blind spot, as these trials prioritize internal validity by using narrow selection criteria that exclude complex, comorbid patients, the very patients who are the norm in real-world clinical practice. This methodology favors symptom reduction over deeper, more meaningful change and marginalizes approaches that do not lend themselves to a rigid, manualized design.
Reconnecting with High Trust Knowing
Psychology needs to reclaim its capacity to work with the dynamic, unpredictable, and fundamentally relational processes that constitute human emotional life. This means developing therapeutic approaches that can engage with the full spectrum of human experience without immediately applying the objectifying procedures that Porter shows emerged from institutional distrust rather than superior understanding.
The goal of therapy is not imposing the mechanical procedures of low trust networks, but helping individuals reconnect with their own inner guidance and healing sources through the kind of intimate, relational knowledge that complex institutions cannot provide. This process requires profound trust in patients’ own intuitive healing capacities, trust that is systematically undermined by the low trust methods that have increasingly dominated psychological practice.
When we accept the processes of emotion on their own terms and engage with them directly, we recognize that emotion reveals itself as a fundamentally narrative and malleable process. The brain’s relationship to emotion is capable of updating and transforming itself when approached with appropriate tools and conscious awareness. By recognizing how stored somatic experiences shape our emotional lives, we gain insight into the deeper narratives and patterns that guide our engagement with the world, patterns that often operate beneath the threshold of conscious awareness yet profoundly influence our responses to present circumstances.
The Biomedical Model and Emotional Externalities
The biomedical model operates on a tacit assumption that when we cluster symptoms, we are identifying similar processes. For certain conditions, this holds. But just as neurology moved from the structural study of discrete regions to the networked study of their communication, psychiatry must move from diagnoses as symptom clusters to diagnoses as process clusters. We can look at qEEG brain maps and see that ADHD is not one thing but several different processes that produce the same symptoms and respond to the same medication. That does not mean they are one diagnosis. Even if the DSM makes them one, that isn’t a diagnosis; you’re still making an approximation of a process, and you have to be aware that what you’re describing is a process that is alive and moving in an individual, not a cold, dead set of diagnostic criteria. If you can’t see that, then you shouldn’t see patients as a clinician.
As Deacon (2013) documents, the biomedical model of mental disorder has dominated psychiatric thinking for decades, yet paradoxically this period has been characterized by a broad lack of clinical innovation and poor mental health outcomes. When we treat emotions as externalities in the biomedical model, we are making reality an externality to the practice of psychotherapy. This is what people like James Hillman were trying to tell us about our emotional landscape. When we feel something is wrong with our profession, when we sense that something essential has been lost, that is not just subjective dissatisfaction. That is our felt sense telling us that something important is being ignored, that a prediction error is not being weighted appropriately, that we need to update our models of what psychotherapy should be.
Thomas Szasz (1960) argued that psychiatry’s biomedical model falsely equates problems in living with medical diseases. He wrote that calling a person “mentally ill” does not identify a biological cause but rather describes behavior that society finds troubling or undesirable. For Szasz, this confusion turns descriptions into explanations, a logical error that obscures the real psychological, moral, or social roots of distress. He insisted that mental illness is a metaphor, not a disease entity, and that treating it as such leads to the medicalization of human experience and the loss of personal responsibility.
The Need for Process-Based Understanding
The genetic and environmental realities that generate particular symptoms may produce what looks identical on a checklist while the underlying processes have nothing to do with each other. Two people presenting with the same depressive symptoms may have completely different precision-weighting disturbances, completely different failures of hierarchical integration, completely different patterns of prediction error that their systems have learned to suppress rather than process. One person’s depression might involve subcortical prediction errors that never reach conscious awareness. Another’s might involve rigid high-level priors that refuse to update despite mounting bodily evidence of mismatch. The symptom presentation looks the same. The process requiring intervention is entirely different.
This means that people diagnosed with the same disorder are not receiving the help they need because we are not actually diagnosing what is wrong with them. We are cataloging their observable distress and assuming that similar presentations indicate similar dysfunctions. The profession cannot afford this assumption anymore. Our clients certainly cannot afford it. We need diagnostic frameworks that describe blocked hierarchical processing, failed prediction error minimization, disconnection between bodily signals and conscious models. We need language that captures processes, not just presentations. Until we make that shift, we are treating diagnoses rather than people, and the outcomes reflect that failure.
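To make the symptom-cluster versus process-cluster distinction concrete, here is a deliberately toy sketch; the patient profiles and process labels are hypothetical illustrations, not a clinical instrument:

```python
# Toy illustration (not a clinical tool): identical symptom checklists,
# entirely different hypothesized underlying processes. All names here
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    symptoms: frozenset   # what a checklist can see
    process: str          # what a checklist cannot see

DEPRESSION_CHECKLIST = frozenset(
    {"low mood", "anhedonia", "sleep disturbance", "fatigue"}
)

def checklist_diagnosis(p: Patient) -> str:
    # Symptom-cluster logic: presentation in, label out.
    return "MDD" if DEPRESSION_CHECKLIST <= p.symptoms else "no diagnosis"

a = Patient("A", DEPRESSION_CHECKLIST,
            process="suppressed subcortical prediction errors")
b = Patient("B", DEPRESSION_CHECKLIST,
            process="rigid high-level priors that refuse to update")

for p in (a, b):
    print(p.name, checklist_diagnosis(p), "<-", p.process)
# Both patients get the same label; the process requiring intervention differs.
```

The checklist function cannot distinguish the two patients because the information it would need is not among its inputs. That is a structural limitation of symptom-cluster diagnosis, not a matter of clinician skill.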
The Path Forward
We need to trust each other again to be able to collaborate, to share observations, to offend each other occasionally with challenging interpretations, to critically analyze what we see in therapy even when that analysis is weird or uncomfortable or doesn’t fit neatly into approved diagnostic categories. This is what a living profession looks like. A profession that is alive is messy, argumentative, creative, willing to be wrong, excited about new ideas, capable of self-correction. A profession that is risking stagnation is careful, defensive, protocol-driven, afraid of liability, more concerned with not making mistakes than with making discoveries.
Our profession risks stagnation. We are losing our capacity for genuine clinical insight. We are training therapists who can follow treatment manuals but who cannot think deeply about what consciousness is or how change happens. We are producing research that measures outcomes but rarely asks fundamental questions about mechanisms or meaning. We are creating generations of clinicians who are competent but increasingly uninterested in the deeper mysteries of human awareness that drew most of us to this work in the first place.
Randomized controlled trials and objective research are completely fine if you want to see whether a certain antibiotic kills one strain of bacteria or whether a certain chemotherapy makes a tumor smaller. When it comes to understanding processes in the deep brain, they do very, very little. We have to be able to listen to experiences, to speculate about mechanisms of action. If we can’t do that, we can’t do psychology. The people I make this argument to often explain to me how the system works. I know how it works. What I’m saying is that it’s stupid and bad, and the reason you are defending it is the sunk cost fallacy. It shouldn’t work this way. The realists want to say “well, it does” and “I don’t have a career unless I jump through hoops for my entire life and work as an adjunct,” and if you want to do that, go ahead, but stop emailing me. Stop telling me I’m too idealistic for saying that we should listen when self-evidencing things are self-evident, and that this is part of the scientific method: you have a brain, and you cannot bring a hypothesis to a test until you’re able to form one.
Reading research is not simple, but it’s the easiest part of psychology. If I want to get on PsycINFO right now, search for studies, and see what the general consensus is about whether a certain modality of psychotherapy is effective for a certain group of people, I can do that in under an hour. I can do that as a social worker, and it’s not hard. If you think that’s the hard part, you’re wrong. The hard part is figuring out what those findings mean and applying them to actually change people’s lives, figuring out where they work and where they don’t, not based on research that is 15 years old by the time it gets to you, but based on the mechanisms of action that are part of the narrative the research is telling. If you can’t see the narrative, what you’re doing is just moving beans around, moving beads on an abacus. It’s never going to connect to reality. You’ve mistaken the mirror for the object. You’ve mistaken the map for the territory. We have trained a generation of clinicians who can’t think, who can’t understand how somebody else’s experience might differ from their own, or why the DSM-5 drawing checkboxes around a group of symptoms might not mean that a diagnosis is a thing that exists in reality rather than a description of it.
We need processes in psychotherapy again. We need trust. Porter’s analysis shows us that the intimate, relational, meaning-saturated domain of psychotherapy has been colonized by institutional technologies designed for managing distrust in contexts where personal judgment is suspect. The biomedical model provides the ideological justification for this colonization by recasting psychological suffering as a technical problem amenable to standardized solutions. Yet as therapeutic alliance research demonstrates (Martin et al., 2000), the quality of the human relationship between therapist and patient consistently predicts outcomes across treatment modalities, with meta-analyses reporting effect sizes around r = .28, accounting for approximately 8% of outcome variance.
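For readers who want the missing step between those two numbers: the share of outcome variance accounted for by a correlation is the square of the correlation coefficient, so

$$r = .28 \quad\Rightarrow\quad r^{2} = (.28)^{2} = .0784 \approx 8\%\ \text{of outcome variance}.$$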
The solution is not to abandon empirical rigor but to develop research methods appropriate to the phenomenon: process studies examining mechanisms of change, practice-based evidence from naturalistic settings, cohort studies of complex presentations, and systematic case studies capturing therapeutic trajectories. These approaches accept rather than deny the role of clinical judgment while still demanding evidence and accountability. Most fundamentally, the field must recover recognition that therapeutic effectiveness emerges from high-trust relationships between autonomous professionals and the individuals who seek their help. Standardized protocols, evidence-based guidelines, and quality metrics have legitimate roles as supports for clinical judgment, not replacements for it. Psychotherapy should aspire to build communities of practitioners with rigorous training, strong ethical foundations, and collegial accountability who integrate research evidence with clinical wisdom and patient preferences. This is not a retreat from science but a recognition that the science of human psychological change requires different tools than the science of chemical reactions. The technology of distance fails when healing requires precisely what mechanical objectivity eliminates: presence, attunement, and the irreplaceable knowledge that emerges from one human being truly encountering another.
References
American Psychological Association. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271-285. https://doi.org/10.1037/0003-066X.61.4.271
Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1-11. https://doi.org/10.1037/a0016234
Cristea, I. A., Stefan, S., Karyotaki, E., David, D., Hollon, S. D., & Cuijpers, P. (2017). The effects of cognitive behavioral therapy are not systematically falling: A revision of Johnsen and Friborg (2015). Psychological Bulletin, 143(3), 326-340. https://doi.org/10.1037/bul0000062
Deacon, B. J. (2013). The biomedical model of mental disorder: A critical analysis of its validity, utility, and effects on psychotherapy research. Clinical Psychology Review, 33(7), 846-861. https://doi.org/10.1016/j.cpr.2012.09.007
Hofling, C. K., Brotzman, E., Dalrymple, S., Graves, N., & Pierce, C. M. (1966). An experimental study in nurse-physician relationships. The Journal of Nervous and Mental Disease, 143(2), 171-180. https://doi.org/10.1097/00005053-196608000-00008
Johnsen, T. J., & Friborg, O. (2015). The effects of cognitive behavioral therapy as an anti-depressive treatment is falling: A meta-analysis. Psychological Bulletin, 141(4), 747-768. https://doi.org/10.1037/bul0000015
Mantell, D. M. (1971). The potential for violence in Germany. Journal of Social Issues, 27(4), 101-112.
Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of the therapeutic alliance with outcome and other variables: A meta-analytic review. Journal of Consulting and Clinical Psychology, 68(3), 438-450. https://doi.org/10.1037/0022-006X.68.3.438
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. https://doi.org/10.1037/h0040525
Milgram, S. (1974). Obedience to authority: An experimental view. Harper & Row.
Pigott, H. E., Kim, T., Xu, C., Kirsch, I., & Amsterdam, J. D. (2023). What are the treatment remission, response and extent of improvement rates after up to four trials of antidepressant therapies in real-world depressed patients? A reanalysis of the STAR*D study’s patient-level data with fidelity to the original research protocol. BMJ Open, 13(7), e063095. https://doi.org/10.1136/bmjopen-2022-063095
Porter, T. M. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.
Resnik, D. B., & Shamoo, A. E. (2017). Conflict of interest and scientific objectivity. Accountability in Research, 24(6), 359-371.
Rush, A. J., Trivedi, M. H., Wisniewski, S. R., Nierenberg, A. A., Stewart, J. W., Warden, D., Niederehe, G., Thase, M. E., Lavori, P. W., Lebowitz, B. D., McGrath, P. J., Rosenbaum, J. F., Sackeim, H. A., Kupfer, D. J., Luther, J., & Fava, M. (2006). Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: A STAR*D report. American Journal of Psychiatry, 163(11), 1905-1917. https://doi.org/10.1176/ajp.2006.163.11.1905
Shedler, J. (2018). Where is the evidence for “evidence-based” therapy? Psychiatric Clinics of North America, 41(2), 319-329. https://doi.org/10.1016/j.psc.2018.02.001
Tadmon, D., & Olfson, M. (2022). Trends in outpatient psychotherapy provision by U.S. psychiatrists: 1996-2016. The American Journal of Psychiatry, 179(2), 110-121. https://doi.org/10.1176/appi.ajp.2021.21040338
Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130(4), 631-663. https://doi.org/10.1037/0033-2909.130.4.631