Theodore M. Porter and the Critique of Quantification: Implications of Porter’s Thinking for Psychotherapy and Mental Health

Nov 29, 2024

Who is Theodore Porter?

In his seminal work “Trust in Numbers: The Pursuit of Objectivity in Science and Public Life,” historian of science Theodore Porter offers a compelling analysis of the rise and cultural authority of quantitative methods in modern society. Porter challenges the prevailing assumption that the power and prestige of numbers derive solely from their success in the natural sciences. Instead, he argues that to fully understand the ubiquity of quantification, we must examine its ascendancy in the social realms of business, government, and public policy.

Porter’s central thesis is that quantitative objectivity emerged not as an inherent feature of scientific progress, but as a “technology of distance” – a strategy for communicating across expanding social networks whose members could no longer rely on personal trust and reputation alone. The authority of numbers, he contends, is deeply entwined with the social contexts in which quantification is deployed, particularly when expert judgment is challenged and credibility is in doubt. Through a wide-ranging exploration of fields such as accounting, insurance, cost-benefit analysis, and engineering, Porter reveals how the ideal of mechanical objectivity often serves as a bulwark against accusations of arbitrariness or bias when decision-makers face external political pressures and a breakdown of trust.

The Facade of Neutrality

While Porter acknowledges the genuine achievements of quantitative methods, he cautions against the temptation to view them as a panacea for the messy realities of social and political life. The very ideal of a “view from nowhere,” purged of individual discretion and judgment, can easily become a smokescreen for the subtle manipulations of entrenched power. Under the guise of impartial, evidence-based reasoning, bureaucratic hierarchies and corporate interests can shape the epistemic assumptions, methodological conventions, and discursive constraints that govern the production of quantitative knowledge.

This illusion of neutrality is perhaps most apparent in the realm of public policy, where the language of numbers is routinely invoked to justify controversial decisions and the foreclosure of alternatives. Porter points to the rise of cost-benefit analysis as a prime example of how the narrow logic of economic quantification can steamroll ethical and ideological differences in the name of an “optimal” solution. By translating complex trade-offs between incommensurable values into a common metric of dollars and cents, policy-makers can lend an air of objective validity to what are ultimately political judgments about the distribution of risks and rewards across society.

Similarly, the proliferation of quantitative benchmarks and performance metrics in domains such as education and healthcare can serve to reinforce status quo power relations while disavowing the role of human agency and responsibility. The seductive appeal of “letting the numbers speak for themselves” can discourage critical interrogation of the value-laden assumptions built into evaluative rubrics and data collection processes. In this way, the pursuit of standardization and algorithmic decision-making can end up marginalizing forms of knowledge and experience that resist easy quantification, such as narrative, affect, and embodied wisdom.

Medicine: Evidence-Based or Industry-Driven?

The field of medicine provides a cautionary case study in the limits and pitfalls of quantitative objectivity. In recent decades, the ideal of evidence-based medicine (EBM) has gained increasing traction as a corrective to the traditional reliance on individual clinical expertise. Proponents of EBM argue that medical decision-making should be guided by systematic reviews of randomized controlled trials and meta-analyses rather than anecdotal experience or expert opinion. This shift towards quantitative empiricism was partly a response to growing public skepticism towards medical authority in the wake of scandals over industry influence and conflicts of interest.

However, as Porter’s analysis suggests, the rhetoric of EBM can also serve to mask the persistence of bias and the distortions of market forces in shaping medical knowledge and practice. The STAR*D study, a massive clinical trial designed to evaluate the effectiveness of antidepressant medications, illustrates the ways in which the presumed objectivity of quantitative evidence can be undermined by methodological choices and reporting practices. Despite its rigorous statistical methodology, the study has been criticized for its reliance on industry funding, selective publication of favorable results, and failure to adequately control for placebo effects. The aura of scientific validity conferred by the study’s quantitative framework has been invoked to justify the widespread prescription of antidepressants, even as questions remain about their efficacy and safety, particularly for mild to moderate depression.

On a broader level, the EBM paradigm risks neglecting crucial contextual factors that shape health outcomes, such as patient preferences, social determinants, and the therapeutic alliance. The drive to eliminate variations in care through rigid adherence to standardized guidelines can undermine the flexibility and judgment needed to tailor treatments to individual needs. Moreover, the emphasis on quantifiable outcomes may marginalize forms of care that resist easy measurement, such as empathy, narrative understanding, and holistic consideration of patient well-being. In this way, the uncritical pursuit of quantitative objectivity in medicine can ironically lead to a reductionist and dehumanizing view of the healing process.

Politics: The Quantitative Rhetoric of the Center

In the political sphere, the ideal of quantitative objectivity has increasingly been mobilized to justify a narrow spectrum of centrist policy options while foreclosing more expansive visions of social transformation. In recent years, the Democratic Party in the United States has embraced a technocratic, data-driven approach to governance that purports to transcend ideological differences through pragmatic problem-solving. This quantitative centrism is epitomized by the rise of figures such as Bill Clinton and Barack Obama, who have championed policies such as welfare reform, financial deregulation, and market-based healthcare on the grounds of economic efficiency and evidence-based policymaking.

However, as Porter’s analysis suggests, this fetishization of quantitative expertise can serve to mask the value judgments and power dynamics that shape political choices. By framing social issues in the technical language of cost-benefit analysis and statistical risk assessment, centrist Democrats have often shifted the Overton window to the right, legitimating the erosion of the welfare state and the marketization of public goods. The presumption that the correct policy must always lie somewhere in the middle of two extremes can have the effect of marginalizing more progressive or transformative ideas as unrealistic or utopian.

Conservative and libertarian think tanks have weaponized the rhetoric of quantitative objectivity to provide an epistemic gloss to their preferred policy agendas. Organizations such as the Heritage Foundation, the Cato Institute, and the Manhattan Institute have produced a deluge of studies, reports, and policy briefs that use economic modeling, regression analysis, and other quantitative techniques to argue for tax cuts, deregulation, and the privatization of government services. By clothing their arguments in the garb of value-neutral science, these groups can lend an air of empirical legitimacy to what are ultimately contestable ideological positions.

America’s obsession with evidence-based practice has allowed nefarious forces to turn our conception of economics into ideology plus math, and our preference for evidence-based practice into science-flavored capitalism.

Psychology: The Violence of Pure Empiricism

Perhaps the most troubling misapplication of quantitative objectivity can be found in the field of psychology, where the demand for complete empirical verification of mental phenomena can end up doing epistemic violence to the very subject matter it purports to illuminate. As a discipline concerned with the intricacies of human subjectivity and the interpreted nature of personal experience, psychology has long struggled to reconcile its scientific aspirations with the irreducible complexity of the mind. While quantitative methods have yielded important insights in domains such as cognitive neuroscience and behavioral genetics, the drive to operationalize every aspect of mental life into measurable variables can lead to a flattening and fragmenting of the psyche.

The dark side of psychology’s quantitative turn is perhaps most evident in the history of psychological testing and assessment. From the early 20th century onwards, the proliferation of standardized instruments such as IQ tests, personality inventories, and diagnostic questionnaires has often served to reify cultural stereotypes and legitimate the ranking of human worth along a single quantitative dimension. The aura of scientific objectivity conferred by numerical scores and statistical norms can mask the value-laden assumptions and interpretive judgments that are baked into these tools from the start. In this way, the quantitative gaze of psychology can end up pathologizing difference, decontextualizing distress, and reducing the rich tapestry of human experience to a set of measurable deficits and abnormalities.

Moreover, as Porter argues, the ideal of quantitative objectivity as a “view from nowhere” emerged historically as a defensive response to the erosion of personal trust and the growth of impersonal bureaucracy. When this standpoint of detached, suspicious observation is turned reflexively back onto the self, it can lead to a profound alienation from one’s own inner life. The demand for complete third-person verification of every subjective claim can end up invalidating the epistemic authority of first-person experience, feeling, and intuition. By imposing the same norms of standardization and control on the mind that we apply to the natural world, we risk losing touch with the very qualities that make us human – our capacity for meaning-making, imagination, and empathic understanding.

This is not to suggest that psychology should abandon the pursuit of empirical rigor or eschew quantitative methods altogether. Rather, as Porter’s analysis implies, we need to cultivate a more reflexive and pluralistic understanding of what counts as valid psychological knowledge. This means recognizing the cultural and historical specificity of our methodological assumptions, the value-ladenness of our interpretive frameworks, and the ineradicable role of the human subject in the construction of psychological truth. It means acknowledging the epistemic limits of quantification and the importance of other modes of inquiry, such as qualitative interviews, focus groups, and first-person phenomenology. Above all, it means approaching the study of the mind with an attitude of humility, curiosity, and openness to the irreducible otherness of human experience.

Porter’s Ideas Compared to Other Critics of Rationalism and Empiricism

Theodore Porter’s critique of quantification and objectivity has intriguing parallels and contrasts with the ideas of several other prominent thinkers who have examined the impact of technology, media, and bureaucracy on modern society.

Adam Curtis and the Critique of Computer-Based Societal Modeling

British documentary filmmaker Adam Curtis has argued that the increasing use of computers and data analysis in the late 20th century gave rise to a misguided belief that society could be perfectly understood and modeled using mathematical and computational methods. In his 2011 series “All Watched Over by Machines of Loving Grace,” Curtis suggests that this “cybernetic” view of the world, promoted by thinkers like Ayn Rand and Alan Greenspan, led to a misplaced faith in the power of markets and technology to solve social problems.

Porter’s work resonates with Curtis’s critique in its skepticism towards the assumption that quantitative methods can provide a fully objective and comprehensive understanding of complex human realities. Both thinkers highlight the ways in which the appeal of numbers and data can obscure the subjective judgments and political interests that shape their application.

However, while Curtis emphasizes the role of computers and cybernetics in promoting a mechanistic view of society, Porter’s analysis focuses more on the institutional and professional contexts that drive the pursuit of quantification. Porter’s distinction between mechanical and disciplinary objectivity suggests that the rise of numerical methods cannot be attributed solely to technological developments, but also reflects the social and political imperatives of bureaucracies and expert communities.

Jean Baudrillard and Simulacra

The French philosopher and cultural theorist Jean Baudrillard is known for his concept of “simulacra” – representations that have become detached from reality and taken on a life of their own. In his book “Simulacra and Simulation” (1981), Baudrillard argues that in the postmodern era, signs and images have lost their connection to real-world referents, creating a hyperreal world where simulation is more powerful than reality.

Baudrillard’s ideas have intriguing implications for Porter’s critique of quantification. From a Baudrillardian perspective, the proliferation of numerical indicators and statistical models could be seen as a form of simulacra – abstract representations that have become more “real” than the complex social phenomena they purport to describe. The use of quantitative measures in fields like economics, policy, and mental health could be seen as creating a hyperreal world where decisions are based on simplified numerical proxies rather than direct engagement with human realities.

However, while Baudrillard’s work often emphasizes the seductive power of simulation and the impossibility of accessing the “real,” Porter’s analysis suggests that quantification is always shaped by social and political contexts. Rather than seeing numbers as fully detached from reality, Porter emphasizes the ways in which quantitative methods are embedded in networks of expertise, accountability, and trust.

The Situationists and the Spectacle of Quantification

The Situationist International was a group of radical artists and theorists active in the 1950s and 60s, known for their critique of consumer capitalism and their advocacy of revolutionary social change. One of the key concepts developed by the Situationists was the idea of the “spectacle” – a term used to describe the way in which modern media and advertising create a false, alienated representation of reality that distracts from authentic human experience.

The Situationist critique of the spectacle has some intriguing parallels with Porter’s analysis of quantification. Just as the spectacle reduces human life to a series of commodified images, the proliferation of numerical indicators and statistical models could be seen as creating a kind of “spectacle of objectivity” – a seductive but ultimately alienating representation of social reality.

However, while the Situationists emphasized the need for radical social and political transformation to overcome the spectacle, Porter’s work suggests that the pursuit of quantification is deeply embedded in the structures and practices of modern institutions. Rather than advocating for a complete rejection of numerical methods, Porter’s analysis invites a more nuanced consideration of how quantitative tools can be used in ways that are transparent, accountable, and responsive to human contexts.

Nietzsche and the Critique of Rationality

One important point of comparison is with the work of Friedrich Nietzsche, whose genealogical approach to the history of ideas shares with Porter a skepticism towards claims of pure objectivity. Nietzsche’s critique of scientific rationality as a form of asceticism and self-denial, driven by a “will to truth” that serves particular interests and values, anticipates Porter’s examination of the moral and political dimensions of quantification. Like Porter, Nietzsche emphasizes the historical and psychological contingency of knowledge practices, revealing the ways in which the pursuit of truth is always entangled with questions of power and desire.

However, Nietzsche’s critique is arguably more radical and far-reaching than Porter’s, calling into question the very value of objectivity and suggesting that all knowledge claims are ultimately expressions of a will to power. While Porter’s analysis is more focused on the specific contexts and practices of quantification, Nietzsche’s genealogical method aims to uncover the deeper moral and metaphysical roots of scientific thinking itself.

Michel Foucault and Power/Knowledge

Another key thinker whose work intersects with Porter’s is Michel Foucault, particularly in his analyses of the relationship between knowledge and power. Foucault’s concept of the “power/knowledge nexus” emphasizes the ways in which the production of knowledge is always intertwined with networks of power relations, shaping the possibilities for thought and action in a given historical moment. This perspective resonates with Porter’s examination of how quantitative methods have been employed in the service of bureaucratic administration and governance, from public health and education to criminal justice and social welfare.

Like Porter, Foucault is attentive to the role of quantification in the management of populations and the disciplining of individual subjectivities. He shows how statistical norms and standards, presented as objective and neutral, can function as instruments of power, shaping the ways in which people understand and govern themselves. At the same time, Foucault’s work encompasses a broader range of knowledge practices and power relations than Porter’s more specific focus on quantification, and his emphasis on the ontological and political effects of knowledge production differs from Porter’s more epistemological and professional concerns.

Legacy of Porter’s Ideas

Theodore Porter’s critique of quantification offers a valuable perspective for examining the use of numerical methods in psychotherapy and mental health. His work challenges the assumption that quantification is inherently objective and neutral, highlighting the social, political, and institutional factors that shape the application of statistics and standardized procedures.

In the context of psychotherapy, Porter’s ideas invite a critical reflection on the evidence-based practice movement, diagnostic systems, and the therapeutic relationship. While quantitative methods can provide important insights and support accountability, an overreliance on these approaches can also constrain the understanding and treatment of psychological distress.

As the mental health field grapples with the challenges of providing effective, equitable, and humane care, engaging with Porter’s work can inform the development of more nuanced and contextually-sensitive approaches. This may involve balancing the use of standardized interventions with the cultivation of clinical judgment, attending to the social and cultural determinants of mental health, and prioritizing the therapeutic alliance as a key factor in outcomes.

Ultimately, a critical understanding of the role of quantification in psychotherapy can support the delivery of care that is both evidence-based and person-centered, and that honors the complexity and diversity of human experience.

References:

Porter, T. M. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.

Porter, T. M. (1986). The Rise of Statistical Thinking, 1820-1900. Princeton, NJ: Princeton University Press.

Porter, T. M. (2012). Thin description: Surface and depth in science and science studies. Osiris, 27(1), 209-226.

Eriksen, K., & Kress, V. E. (2008). A developmental, constructivist model for ethical assessment. Journal of Humanistic Counseling, Education and Development, 47(2), 202-216.

Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7.

Norcross, J. C., & Wampold, B. E. (2011). Evidence-based therapy relationships: Research conclusions and clinical practices. Psychotherapy, 48(1), 98.

Cosgrove, L., & Wheeler, E. E. (2013). Industry’s colonization of psychiatry: Ethical and practical implications of financial conflicts of interest in the DSM-5. Feminism & Psychology, 23(1), 93-106.

Prilleltensky, I. (2008). The role of power in wellness, oppression, and liberation: The promise of psychopolitical validity. Journal of Community Psychology, 36(2), 116-136.
