Bad Moods

The half-life of the serotonin hypothesis

HANNAH ZEAVIN
 
 

In the early 1950s, as pharmaceutical companies were looking to treat new conditions and generate new markets, iproniazid (a monoamine oxidase inhibitor, or MAOI) made an unexpected debut as the first drug to be prescribed as an antidepressant. Its new application was a fortuitous accident: in 1951, the drug had provided a stunning cure for rapidly failing tuberculosis patients in a sanatorium on Long Island—just not the one intended. Patients reported feeling happier and vitalized—even as their medical conditions persisted. The drug immediately jumped from its experimental use in the sanatorium to asylums nationwide and, over the course of the next decade, came to be prescribed regularly to the patient on the couch in conjunction with its cousin, the tranquilizer (which antidepressants would go on largely to replace). How and why this tuberculosis medicine produced happier states of mind was not yet clear. But that it did was verified, if only in this highly selective population. 

The quest for a pill that might cure depression was now thinkable. Iproniazid was soon followed by the first tricyclic drug, imipramine, already extant but originally synthesized as an antihistamine. Imipramine was then deployed experimentally on schizophrenic patients in asylums, in sometimes gruesome experiments, before becoming the first tricyclic antidepressant—first in Europe, then in Canada, and later in the United States. In the case of both iproniazid and imipramine, the drugs’ effect on mood was accidental, but it opened up a new frontier of psychopharmacological research across the postwar era in a rapidly evolving mental healthcare landscape. At first, these new medications were marketed to those overseeing large, captive patient populations. Asylum care still reigned across the 1950s and into the mid-1960s, until in-patient treatment numbers precipitously declined as patients were sent back into their communities. In parallel, cognitive behavioral therapies and new drug research slowly became the dominant models of treatment for mental illness. Psychiatrists began to set aside Freud’s unconscious in favor of these faster, cheaper treatments until the latter replaced the former, nearly wholesale. 

It was Arthur Sackler, of Purdue Pharma, with an ingenious sense of pharmaceutical advertising and timing, who pioneered the direct marketing of tranquilizers to psychiatrists, who had, for decades, remained largely skeptical of—if not outright hostile to—psychopharmacological interventions. The drugs ramified out from the psychiatric unit and prison infirmary into the general population, just as the patients first prescribed them had. To ameliorate the lack of funding for the promised revolution in community mental health, which never arrived, batch processing patients with prescriptions seemed like the most prudent, cost-effective move. Psychiatrists stopped thinking of “drug therapy” as inducing comatose states and carceral obedience, and embraced the notion that drugs, like talking, progressively made patients better. Drugs were no longer merely for containing or pacifying patients at high doses. Instead, if taken daily, they cured. Or this was the wish. In 1965, in the midst of this transition, the scientific explanation arrived. This is typical of much medical research; once there was a supposed cure to study, how it cured might gradually become known. 

Reviewing the literature from the past decade of chemical experimentation, J. J. Schildkraut, and later William Bunney Jr. and John Davis of the National Institute of Mental Health, argued that the introduction of these chemicals changed mood and treated depression. Together, these authors began to understand that perhaps depression was chemical—not merely a question of serotonin but possibly dopamine, norepinephrine, and other neurotransmitters as well. Theirs were the first concrete biological and chemical theories of mood disorders, based on the limited success of MAOIs and tricyclics. Yet this explanation remained the stuff of medical publications, not the popular imagination. It was debated solely in the profession, and in a particular corner of it tasked with psychiatric research. Across the 1970s, pharmaceutical companies continued the hunt for new, more effective compounds in light of this new theory and in response to an American healthcare system and insurance policies that privileged quick treatments over long-term talking cures. Eli Lilly began to synthesize a compound that would act directly on serotonin reuptake. Fluoxetine, approved by the FDA in December 1987, was a new class of drug—the first selective serotonin reuptake inhibitor (SSRI). It hit pharmacy shelves and prescription pads as Prozac just one month later. 

The serotonin hypothesis—a simplified form of the earlier chemical imbalance theory—began to be disseminated to the laity as gospel via a powerful evangelizing trifecta: in medical schools, by pharmaceutical companies, and then directly to patients receiving treatment in the psychiatrist’s office. Patients—largely white women—were greeted there with a new narrative of depression: there was a chemical, in-brain explanation for their low moods. The complex psychological explanations that psychoanalytically oriented psychiatrists of the 1950s and ’60s had offered were replaced with a simpler, and therefore better, story. Patients were told they had a tangible, material illness rather than ineffable despair, moral failing, or intractable trauma. Many greeted this change happily, although some felt a biological view of what ailed them was alienating, incomplete. Prescriptions rose in a population grateful for their new diagnosis and for its easily accessible treatment, which was reimbursable by most insurance plans, whereas talk therapy and other treatments might not be. 

*


On July 17, 1990, George H. W. Bush proclaimed that the 1990s would be “the Decade of the Brain.” Across the 1980s, Congress had been successfully petitioned by the Society for Neuroscience, the National Committee for Research in Neurological and Communicative Disorders, and the National Institute of Mental Health, along with a half dozen other interested scientific parties. For psychiatry, a signal achievement was the arrival of the “second-generation” SSRIs that joined Prozac, such as Zoloft and Paxil, as well as additional antipsychotics. The announcement of “the Decade of the Brain,” circulated at scientific meetings, became the signal granting opportunity for research across many of the scientific agencies that investigate mental health. 

But perhaps the era of the brain came at the cost of studying the mind: the 1990s were also when new depression treatments stopped being developed. Most of the important innovation had occurred across the previous forty years, culminating in Prozac’s rise to prominence. After the development of electroconvulsive therapy in the 1930s, the rise of American psychoanalysis starting in the 1940s, the discovery of the first antidepressants in the 1950s, psychedelic therapy in the 1960s, the uptake of cognitive behavioral therapies in the 1970s, and SSRIs in the 1980s, the dominant forms of depression treatment were, by the mid-1990s, firmly consolidated. As Thomas R. Insel, director of the NIMH from 2002 to 2015, commented in reflecting on the success of “the Decade of the Brain,” “there was neither a marked increase in the rate of recovery from mental illness, nor a detectable decrease in suicide or homelessness—each of which is associated with a failure to recover from mental illness.” This was, he argued, because more research dollars had gone to elaborating new SSRIs than to a scientific understanding of depression itself. 

The success of prescribing these drugs—particularly from a monetary point of view—then had a secondary effect: it largely quelled scientific advances on the front of new treatments as well. Steven Hyman, another former director of the NIMH (1996–2001), has written that “drug discovery is at a near standstill.” What’s more, most of the major pharmaceutical companies shuttered their research programs in the area of antidepressants after the third generation of SSRIs debuted.

Ironically, it was only after our treatment methods for depression more or less stalled that clinical research rapidly began to outpace mainstream comprehension, which also seemed to stand still after the advent of second- and third-generation SSRIs. Nonetheless, a complex body of research on the etiology of depression was being carried out in a variety of fields, including a well-funded push to find a genomic basis for a variety of mental illnesses. As Jonathan Sadowsky, author of The Empire of Depression: A New History, says in reviewing the literature of the last three decades, only a very few “of the articles that focused on biological mechanisms denied a role for environmental factors; many of them took it as a given.” This meant that, although the multiply determined nature of depression was a settled science, the public only heard tell of what was up for new inquiry: its biological determinants. 


A cluster of forces transformed American understandings of mental illness in this decade—not only within scientific research, but also in how that research was then represented to and received by the public. The central and intricate debate on the possible role of neurotransmitters (and serotonin) was reductively fed to the public as a matter of a “chemical imbalance,” merely a closed loop: depressed patients should take SSRIs because they had low serotonin. This is nowhere better typified than in the largely black-and-white, lo-fi Zoloft ad that debuted at the 2003 Super Bowl, which represented a sad person as something between an amorphous blob and a pill. Color existed only at the edges of the blob’s world: in birds, in flowers. Inside, however, all was grey. The ad, which named depression as a serious medical condition diagnosable after just two weeks, then declared, “while the cause is unknown, depression may be related to an imbalance of natural chemicals between nerve cells in the brain.” Small particles floated on screen between one receptor, labeled nerve A, and another, nerve B. Zoloft was then shown to “correct” this imbalance. The ad doubled as a quick explainer for the serotonin hypothesis without serotonin—or the fact that this was still a debated hypothesis—ever being named. 

The public wasn’t exactly misled. It was talked down to, given brief sound bites, euphemisms, cartoons, and pat explanations for a motley of competing theories in a rapidly evolving but longstanding area of research. Discussions at psychiatric conferences and debates in clinical research were occurring at a remove from their transmission to the public. The result was something like a particularly bad children’s game of telephone. The means of this communication were not exactly malicious, but market-driven and reductive, designed to meet would-be patients where they were; the language of the serotonin hypothesis was made invitingly concrete and presented as scientific common sense. 

Before the Zoloft ads, the US public was already accustomed to seeing direct marketing for pharmaceuticals. In the late 1990s, FDA rules on direct-to-consumer marketing were still up in the air, and when Prozac was advertised, the brand name of the drug was never even mentioned—it didn’t need to be. Prozac was already the most popular SSRI. By not mentioning the name (although the closing shots always featured the logo of the parent company—Eli Lilly), the company could avoid charges of direct marketing to consumers. The loophole respected the letter of the law, and it worked: those taking charge of their own care began to ask for a prescription, by name, when visiting their internists or family care providers. Campaigns destigmatizing mental illness circulated on public transport and television, mixing with advertisements for Prozac and other SSRIs. These normalizing appeals to the American public framed depression as mechanical, as in the campaign led by Tipper Gore, who called it “running out of gas,” with SSRIs refilling the tank. Pamphlets explaining depression in these vague terms dotted waiting rooms and were then carried by patients into doctors’ appointments, where physicians turned the serotonin hypothesis into a natural fact. Awais Aftab, MD, a clinical assistant professor of psychiatry at Case Western Reserve University, summarized: “This narrative was most pronounced in the case of depression, but it wasn’t specific to it; it was a more generalized narrative about mental illness.”

Here correlation does indeed imply causation: the popular uptake of the serotonin hypothesis occurred only in conjunction with the arrival of this new lucrative drug, Prozac. This is what Andrew Scull, the author of Desperate Remedies: Psychiatry’s Turbulent Quest to Cure Mental Illness, calls “evidence-biased” (rather than evidence-based) medicine. This symbiotic relationship between science communication and pharmaceutical companies, despite seeming like a neat and obvious quid pro quo, was widespread and remains generally uncontested by leading historians of psychiatry. Since the 1990s, form has followed content. Pharmaceutical companies made a tool, and psychiatrists in their private practices and researchers in their institutes and universities narrativized why it worked, even as some of those same researchers were still complicating and isolating aspects of what the wider world now thought of as a “chemical imbalance.” 

According to polls, some 67 percent of Americans believe that depression is a biological disorder and may well phrase that understanding through a watered-down version of the serotonin hypothesis, or what Aftab calls “a caricatured meme.” This may come as no surprise, as it is what was communicated to the public: when patients seek out treatment for depression, they often go to their primary care physicians, who, in high numbers, also both believe and offer this as the explanation for that which ails us. We were told that depression came down to brain chemistry so simple it could be depicted in a one-minute advertisement. There has been no similar agitation to argue that depression might be something caused by material circumstances, external factors, or childhood environment—even as science has concluded that the basis for depression may also lie in, and be responsive to, these experiences. 

As American psychiatry interfaces with patients, clinicians persist in making this into an either-or question: depression is psychosocial or genetic-biological. Most typically, patients encounter the latter model. We have exchanged one story for another, wholesale. Writing just as Prozac was being released, Leon Eisenberg, a psychopharmacologist who worked at Harvard, observed that “We may trade the one-sidedness of the ‘brainless’ psychiatry of the past for that of a ‘mindless’ psychiatry of the future,” one that became our present tense by the final decade of the last century, when the dictate George H. W. Bush issued named the shift concretely in favor of neurology. Sadowsky argues we don’t have to choose between these two positions: “Data show that antidepressants and psychotherapy work best in combination.” This was, after all, how the earliest antidepressant drugs were discovered and then disseminated: after they leapt from the total institution of the asylum to the general population, they were initially offered in combination with talk therapy, as Jonathan Metzl shows, and prescribed by psychoanalytically oriented psychiatrists. 

*

How, where, and why we diagnose and treat depression has been part of something like a psychiatric culture war for the past thirty years. Yet only recently has this war escaped its professional niche and become a matter of public understanding. In summer 2022, the British psychiatrist Joanna Moncrieff was the lead author of “The Serotonin Theory of Depression: A Systematic Umbrella Review of the Evidence,” a meta-study of the widely disseminated serotonin hypothesis of depression—namely, that mood dysregulation occurs because of a chemical imbalance in the brain. 

Moncrieff is a long-standing member of the “critical psychiatry” movement and had, more recently, come out as an anti-vaxxer, opposing COVID-19 vaccine mandates. Her own crisis of expertise—the leap from a left-coded anti-pharmaceutical stance to one synonymous with being anti-science—was largely ignored in favor of the message of her study. 


Synthesizing all of the literature from the chemical imbalance wars—waged inside scientific societies with little to no impact on the patient-facing dissemination of the hypothesis—Moncrieff hoped to kill off the hypothesis once and for all. To do so, her study centered on that crucial disconnect: although research has long outpaced this basic misunderstanding of depression—that it is caused by a chemical imbalance or abnormality in the brain—psychiatrists still use this story to communicate the necessity of medication to patients. She and her co-authors were crying foul. Met with only a little resistance inside the psychiatric research community, the study was seen in-field as merely a definitive aggregation of extant evidence against a simplistic version of the theory. Instead, psychiatrists’ worries turned to their patients. And indeed, for many users of antidepressants, the meta-study was an unwelcome revelation. 

For historians of psychiatry and psychology as well as many practicing doctors, the Moncrieff study contributed nothing new scientifically, and that was its point. In short, she and her colleagues articulated what people in the profession have long known: the serotonin hypothesis is a weak one, yet still offered as simple truth and gospel by the professionals in charge of our mental healthcare and chemistry. Despite not being a good scientific explanation, it still made for a good story, one offered to patients regularly. 

The study leapt from a niche world of researchers, historians, activists, and care providers into mainstream discourse, first in the UK and then in the United States, exposing a gulf between patient and scientific understandings of depression. Though the report restricts itself to the serotonin hypothesis—the why of it, rather than the how (Moncrieff attacks the efficacy of antidepressants elsewhere)—public-facing coverage framed it otherwise. As it reached tabloids like the Daily Mail, it was cast as an attack on the biological view of depression, in which brain chemistry is the central culprit in mood disorders, and therefore on the use of antidepressants. It was this secondary charge—disproving the efficacy of antidepressant drugs—that got the most airtime. 

Some rushed to demand accountability on behalf of the at least fifteen million Americans who have been taking antidepressants for five years or more. Some patients argued that, regardless of evidence for why they were depressed, antidepressants remain crucial to their treatment plans; how something works and that it works are two different questions in medicine and psychiatry. In the wake of the study, the poet Morgan Parker tweeted, “The chemical imbalance myth saved my life, just saying.” Robert Whitaker, author of Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America and founder of the news organization Mad in America, has gone so far as to argue that the psychiatric profession has committed medical fraud and has called for a class-action lawsuit against pharmaceutical companies and the American Psychiatric Association (which could not be reached for comment). 

A second study co-authored by Moncrieff, published in December 2022, investigated the notion that psychiatrists had always been aware the hypothesis was “an urban legend.” The findings show that this is only partly the case: yes, psychiatry textbooks have continued to teach the theory as a theory and not settled science, and elsewhere in the field it has been scrupulously treated as just one element of depression. Yet the serotonin hypothesis persisted where it matters most: at the crucial interface between doctor and patient. 

For psychiatrists and primary care physicians, who have made the serotonin imbalance the cornerstone of their scientific communication and the guiding rationale behind their treatment plans, and the patients who trusted them, the study may have produced a rupture of trust in both medical professionals and antidepressant medications. “The article was like pouring gasoline on fire,” says Aftab, who continues, “when we consider how the findings were framed and introduced to the public, and how it takes advantage of public misunderstanding of this issue, we can see it almost as a calculated attack on antidepressants and a neurobiological understanding of depression.” 

This misunderstanding, now being contested in the public sphere, where the serotonin hypothesis was watered down and widely disseminated a generation ago, is dearly held, and for good reason: roughly 13 percent of American adults take some form of antidepressant. Literature reviews of the drugs themselves further demonstrate that studies into the efficacy of antidepressants are difficult to manage, with the placebo effect gaining on antidepressants (closing the gap between true and experienced efficacy). Meanwhile, patients have been left to contend with drugs they might feel saved their lives, or that they no longer fully trust, or both—and which, in any case, can be extremely difficult to go off. Moreover, many patients are faced with the stark reality that there may be no alternative forms of care. 

With the serotonin hypothesis of depression sensationally debunked (again), old questions have reopened as to why people suffer, how they understand their suffering, and how they should be helped with that pain. It is as if we have moved forward only to go back in time to that heady era of the discovery of antidepressants, with a significant difference: the public is now aware that we know much less about depression than those who interface with patients have claimed. As Sadowsky told me, “the question of whether environmental factors can cause clinical depression is not, in my view, an open question—I think it is settled science.” Even in terms of biological factors, as Scull suggested, “if we know five percent of what we need to know, it’s four percent more than we knew 50 years ago.” Much as drug research stalled in the 1990s, Moncrieff’s study unfortunately presented to the public a picture of depression research as similarly suspended, stuck in its neurochemical, biological silo. More complex stories, drawing upon environmental, psychical, and political factors—less easy to represent in black-and-white cartoons—must now come to the fore.

In 1977, Charles Rosenberg, a historian of medicine at Harvard, wrote, “Not only did a drug’s activity indicate to both physician and patient the nature of its efficacy (and the physician’s competence) but it provided a prognostic tool, as well; for the patient’s response to a drug could indicate much about his condition.” Though here he refers to nineteenth-century medicine—emetics and bloodletting and the like—the Moncrieff study again raises this simple question: Why and to what extent do people trust the science practiced on them? For Rosenberg, we must understand therapeutics “as part of a system of belief and behavior participated in by physician and patient alike.” The serotonin hypothesis is no exception. “I suspect the temptation for psychiatrists is,” Scull told me, “you want to offer patients a story.” 

Psychology has long been defined by the kinds of stories it tells, and, as Rachel Aviv has written in her recent book Strangers to Ourselves: Unsettled Minds and the Stories That Make Us, the kinds of stories that, in turn, patients tell about themselves. Not just about who we are and how we hurt, what we want, and how we behave, but how we know what we know about the mind. Freud was a genius of the psychic story, the narrative etiology of a present state, but those stories turned out to require many hours in session from patients in the consulting room and were less and less profitable for psychiatrists. The twinned advents of the serotonin hypothesis and Prozac presented a story that purported to work at every level, at every scale, and with greater speed, while research ground largely to a halt. 

For patients, it was a narrative that untethered depression from broken psyches and troubled pasts and attached it to a complex set of neurocognitive and biological brain factors that the average person experiences as a black box; they trust experts to tell them how their brain and mind function. Psychiatrists, in turn, could make their practices more lucrative in the shift from talk therapy to prescription by reducing face-to-face time with patients, seeing more of them, more briefly, with less frequency. 

None of this means that the drugs aren’t effective, at least some of the time, for some people. Some attribute this to a placebo effect, or to trials that have been manipulated by pharmaceutical companies to suit their needs, but Sadowsky says, “I find the idea that it’s only attributable to placebo implausible. You can’t tailor the placebo effect for different people.” Current debates too frequently bracket the lived experience of successful treatment of depression via SSRIs, or the value of marked improvement when the symptoms of depression can be ruthless. With the loss of the easily graspable serotonin hypothesis, we now need a better story to escort us on our way to better knowledge about how depression functions, how SSRIs can best be made part of treatment plans, and how to secure hybrid care in the face of an ongoing mental health crisis. 

How true the story is, and what it reflects about (mis)trust in experts, models of the mind, and the efficacy of drugs, are now up for grabs. It is hard to predict whether a class-action lawsuit will be brought, or to know how individual doctors have responded to the study in their practices over the past eighteen months, given the vacuum of nonresponse from leading professional organizations. Nonetheless, there are some aspects of the science of depression that have not been reopened by Moncrieff, even as she ignores the changing ground of research in favor of criticizing its dissemination. One signal shift in recent work has been toward the now widely held understanding that mental health is multiply determined—that indeed the social, political, and environmental factors of a lived life contribute to how we experience that life. For Sadowsky, it is this “decline of reductionisms” within clinical research that should give us the greatest hope. 

But despite advances in these sciences, many of the hopes of these fields have resulted in dead ends. Or as Scull has it, “negative knowledge is nonetheless an advance: it spares us chasing phantoms. But it once again leaves us with not much of clinical utility.” Despite the falling off of significant new drug research, Sadowsky has hope that the return of older techniques—among them psychodynamic therapy, psychedelics, and brain stimulation—may each prove more successful than their early counterparts. For what it’s worth, Sadowsky pointed out, psychedelics—many of them—also work on serotonin receptors in the brain. If we look at all the investigations into depression across the twentieth century—not just SSRIs—and harness them in the twenty-first, we have a bigger range of tools and techniques to treat depression. And that means, with the right attention and care, patients have a greater chance of finding regimens that work for them. 

The problem is that the attention and care required to do more than diagnose with the blanket term “major depression” remain exceedingly difficult to find, purchase, and obtain. Patients may also wish for a more nuanced story and its associated multimodal treatments—one that matches the intricate factors that lead to depression. We do, it turns out, know better. We may not be able to isolate what makes us depressed and treat only that, precisely because depression is multiply determined. To make tailored protocols fully available to patients—especially psychotherapy—in the same way an SSRI can be picked up at the pharmacy requires a massive revision at every level of our healthcare system, from mental healthcare policy and insurance standards to medical and clinical education and student debt.

Yet when a patient asks about the cause of their depression, the most honest answer a physician might give is “We don’t completely know” or “It’s complicated.” That’s not a story, or at least not one that sells. 


 
Hannah Zeavin

Hannah Zeavin is a historian at UC Berkeley. She is the author of The Distance Cure: A History of Teletherapy (MIT Press, 2021). She is working on her third book, All Freud’s Children: A Story of Inheritance.
