The psychiatric literature is so confusing that even the dissidents disagree.
You arrive for work and someone informs you that you have until five o’clock to clean out your office. You have been laid off. At first, your family is brave and supportive, and although you’re in shock, you convince yourself that you were ready for something new. Then you start waking up at 3 A.M., apparently in order to stare at the ceiling. You can’t stop picturing the face of the employee who was deputized to give you the bad news. He does not look like George Clooney. You have fantasies of terrible things happening to him, to your boss, to George Clooney. You find—a novel recognition—not only that you have no sex drive but that you don’t care. You react irritably when friends advise you to let go and move on. After a week, you have a hard time getting out of bed in the morning. After two weeks, you have a hard time getting out of the house. You go see a doctor. The doctor hears your story and prescribes an antidepressant. Do you take it?
However you go about making this decision, do not read the psychiatric literature. Everything in it, from the science (do the meds really work?) to the metaphysics (is depression really a disease?), will confuse you. There is little agreement about what causes depression and no consensus about what cures it. Virtually no scientist subscribes to the man-in-the-waiting-room theory, which is that depression is caused by a lack of serotonin, but many people report that they feel better when they take drugs that affect serotonin and other brain chemicals.
There is suspicion that the pharmaceutical industry is cooking the studies that prove that antidepressant drugs are safe and effective, and that the industry’s direct-to-consumer advertising is encouraging people to demand pills to cure conditions that are not diseases (like shyness) or to get through ordinary life problems (like being laid off). The Food and Drug Administration has been accused of setting the bar too low for the approval of brand-name drugs. Critics claim that health-care organizations are corrupted by industry largesse, and that conflict-of-interest rules are lax or nonexistent. Within the profession, the manual that prescribes the criteria for official diagnoses, the Diagnostic and Statistical Manual of Mental Disorders, known as the D.S.M., has been under criticism for decades. And doctors prescribe antidepressants for patients who are not suffering from depression. People take antidepressants for eating disorders, panic attacks, premature ejaculation, and alcoholism.
These complaints are not coming just from sociologists, English professors, and other troublemakers; they are being made by people within the field of psychiatry itself. As a branch of medicine, the treatment of depression seems to be a mess. Business, however, is extremely good. Between 1988, the year after Prozac was approved by the F.D.A., and 2000, adult use of antidepressants almost tripled. By 2005, one out of every ten Americans had a prescription for an antidepressant. IMS Health, a company that gathers data on health care, reports that in the United States in 2008 a hundred and sixty-four million prescriptions were written for antidepressants, and sales totalled $9.6 billion. As a depressed person might ask, What does it all mean?
Two new books, Gary Greenberg’s “Manufacturing Depression” (Simon & Schuster; $27) and Irving Kirsch’s “The Emperor’s New Drugs” (Basic; $23.95), suggest that dissensus prevails even among the dissidents. Both authors are hostile to the current psychotherapeutic regime, but for reasons that are incompatible. Greenberg is a psychologist who has a practice in Connecticut. He is an unusually eloquent writer, and his book offers a grand tour of the history of modern medicine, as well as an up-close look at contemporary practices, including clinical drug trials, cognitive-behavioral therapy, and brain imaging. The National Institute of Mental Health estimates that more than fourteen million Americans suffer from major depression every year, and more than three million suffer from minor depression (whose symptoms are milder but persist for two years or more). Greenberg thinks that numbers like these are ridiculous—not because people aren’t depressed but because, in most cases, their depression is not a mental illness. It’s a sane response to a crazy world.
Greenberg basically regards the pathologizing of melancholy and despair, and the invention of pills designed to relieve people of those feelings, as a vast capitalist conspiracy to paste a big smiley face over a world that we have good reason to feel sick about. The aim of the conspiracy is to convince us that it’s all in our heads, or, specifically, in our brains—that our unhappiness is a chemical problem, not an existential one. Greenberg is critical of psychopharmacology, but he is even more critical of cognitive-behavioral therapy, or C.B.T., a form of talk therapy that helps patients build coping strategies, and does not rely on medication. He calls C.B.T. “a method of indoctrination into the pieties of American optimism, an ideology as much as a medical treatment.”
In fact, Greenberg seems to believe that contemporary psychiatry, in most of its forms except existential-humanistic talk therapy (an actual school of psychotherapy, and apparently the one he practices), is mainly about getting people to accept current arrangements. And it’s not even that drug companies and the psychiatric establishment have some kind of moral or political stake in these arrangements—that they’re in the game in order to protect the status quo. They just see, in the world’s unhappiness, a chance to make money. They invented a disease so that they could sell the cure.
Greenberg is repeating a common criticism of contemporary psychiatry, which is that the profession is creating ever more expansive criteria for mental illness that end up labelling as sick people who are just different—a phenomenon that has consequences for the insurance system, the justice system, the administration of social welfare, and the cost of health care.
Jerome Wakefield, a professor of social work at New York University, has been calling out the D.S.M. on this issue for a number of years. In “The Loss of Sadness” (2007), Wakefield and Allan Horwitz, a sociologist at Rutgers, argue that the increase in the number of people who are given a diagnosis of depression suggests that what has changed is not the number of people who are clinically depressed but the definition of depression, which has expanded to include normal sadness. In the case of a patient who exhibits the required number of symptoms, the D.S.M. specifies only one exception to a diagnosis of depression: bereavement. But, Wakefield and Horwitz point out, there are many other life problems for which intense sadness is a natural response—being laid off, for example. There is nothing in the D.S.M. to prevent a physician from labelling someone who is living through one of these problems mentally disordered.
The conversion of stuff that people used to live with into disorders that physicians can treat is not limited to psychiatry, of course. Once, people had heartburn (“I can’t believe I ate the whole thing”) and bought Alka-Seltzer over the counter; now they are given a diagnosis of gastroesophageal reflux disease (“Ask your doctor whether you might be suffering from GERD”) and are written a prescription for Zantac. But people tend to find the medicalization of mood and personality more distressing. It has been claimed, for example, that up to 18.7 per cent of Americans suffer from social-anxiety disorder. In “Shyness” (2007), Christopher Lane, a professor of English at Northwestern, argues that this is a blatant pathologization of a common personality trait for the financial benefit of the psychiatric profession and the pharmaceutical industry. It’s a case of what David Healy, in his invaluable history “The Antidepressant Era” (1997), calls “the pharmacological scalpel”: if a drug (in this case, Paxil) proves to change something in patients (shyness), then that something becomes a disorder to be treated (social anxiety). The discovery of the remedy creates the disease.
Turning shyness into a mental disorder has many downstream consequences. As Steven Hyman, a former director of the National Institute of Mental Health, argues in a recent article, once a diagnosis is ensconced in the manual, it is legitimatized as a subject of scientific research. Centers are established (there is now a Shyness Research Institute, at Indiana University Southeast) and scientists get funding to, for example, find “the gene for shyness”—even though there was never any evidence that the condition has an organic basis. A juggernaut effect is built into the system.
Irving Kirsch is an American psychologist who now works in the United Kingdom. Fifteen years ago, he began conducting meta-analyses of antidepressant drug trials. A meta-analysis is a statistical abstract of many individual drug trials, and the method is controversial. Drug trials are designed for different reasons—some are done to secure government approval for a new drug, and some are done to compare treatments—and they have different processes for everything from selecting participants to measuring outcomes. Adjusting for these differences is complicated, and Kirsch’s early work was roundly criticized on methodological grounds by Donald Klein, of Columbia University, who was one of the key figures in the transformation of psychiatry to a biologically based practice. But, as Kirsch points out, meta-analyses have since become more commonly used and accepted.
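The pooling step that a meta-analysis performs can be sketched in a few lines: each trial contributes a drug-versus-placebo effect size, and the trials are combined with inverse-variance weights, so that larger, more precise trials dominate the pooled estimate. The sketch below assumes a simple fixed-effect model; the trial numbers are invented for illustration and are not Kirsch’s data.

```python
# Toy fixed-effect meta-analysis: pool several trials' drug-vs-placebo
# effect sizes, weighting each trial by the inverse of its variance.
# All numbers are invented for illustration; they are not Kirsch's data.

def pool_fixed_effect(trials):
    """trials: list of (effect_size, variance) pairs.
    Returns the inverse-variance-weighted pooled effect and its variance."""
    weights = [1.0 / var for _, var in trials]
    pooled = sum(w * es for (es, _), w in zip(trials, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical trials: (standardized mean difference, variance).
# The second trial is the most precise, so it pulls the pooled
# estimate down toward its small effect.
trials = [(0.40, 0.04), (0.10, 0.01), (0.25, 0.02)]
pooled, var = pool_fixed_effect(trials)
print(round(pooled, 3))  # → 0.186
```

The controversy Kirsch faced lives in the step this sketch hides: before pooling, the analyst must decide which trials are comparable enough to combine at all, and how to adjust for their differing designs and outcome measures.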
Kirsch’s conclusion is that antidepressants are just fancy placebos. Obviously, this is not what the individual tests showed. If they had, then none of the drugs tested would have received approval. Drug trials normally test medications against placebos—sugar pills—which are given to a control group. What a successful test typically shows is a small but statistically significant superiority (that is, greater than could be due to chance) of the drug to the placebo. So how can Kirsch claim that the drugs have zero medicinal value?
His answer is that the statistical edge, when it turns up, is a placebo effect. Drug trials are double-blind: neither the patients (paid volunteers) nor the doctors (also paid) are told which group is getting the drug and which is getting the placebo. But antidepressants have side effects, and sugar pills don’t. Commonly, side effects of antidepressants are tolerable things like nausea, restlessness, dry mouth, and so on. (Uncommonly, there is, for example, hepatitis; but patients who develop hepatitis don’t complete the trial.) This means that a patient who experiences minor side effects can conclude that he is taking the drug, and start to feel better, and a patient who doesn’t experience side effects can conclude that she’s taking the placebo, and feel worse. On Kirsch’s calculation, the placebo effect—you believe that you are taking a pill that will make you feel better; therefore, you feel better—wipes out the statistical difference.
One objection to Kirsch’s argument is that response to antidepressants is extremely variable. It can take several different prescriptions to find a medication that works. Measuring a single antidepressant against a placebo is not a test of the effectiveness of antidepressants as a category. And there is a well-known study, called the Sequenced Treatment Alternatives to Relieve Depression, or STAR*D trial, in which patients were given a series of different antidepressants. Though only thirty-seven per cent recovered on the first drug, another nineteen per cent recovered on the second drug, six per cent on the third, and five per cent after the fourth—a sixty-seven-per-cent effectiveness rate for antidepressant medication, far better than the rate achieved by a placebo.
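The sixty-seven-per-cent figure is simple arithmetic: each step’s remission rate in the trial is quoted as a share of the original cohort, so the cumulative rate is just their sum. A minimal check:

```python
# STAR*D remission rates quoted above, each as a percentage of the
# original cohort: first drug 37%, second 19%, third 6%, fourth 5%.
step_rates = [37, 19, 6, 5]
cumulative = sum(step_rates)
print(cumulative)  # → 67
```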
Kirsch suggests that the result in STAR*D may be one big placebo effect. He cites a 1957 study at the University of Oklahoma in which subjects were given a drug that induced nausea and vomiting, and then another drug, which they were told would prevent nausea and vomiting. After the first anti-nausea drug, the subjects were switched to a different anti-nausea drug, then a third, and so on. By the sixth switch, a hundred per cent of the subjects reported that they no longer felt nauseated—even though every one of the anti-nausea drugs was a placebo.
Kirsch concludes that since antidepressants have no more effectiveness than sugar pills, the brain-chemistry theory of depression is “a myth.” But, if this is so, how should we treat depression? Kirsch has an answer: C.B.T. He says it really works.
Kirsch’s claims appeared to receive a big boost from a meta-analysis published in January in the Journal of the American Medical Association and widely reported. The study concludes that “there is little evidence” that antidepressants are more effective than a placebo for minor to moderate depression. But, as a Cornell psychiatrist, Richard Friedman, noted in a column in the Times, the meta-analysis was based on just six trials, with a total of seven hundred and eighteen subjects; three of those trials tested Paxil, and three tested imipramine, one of the earliest antidepressants, first used in 1956. Since there have been hundreds of antidepressant drug trials and there are around twenty-five antidepressants on the market, this is not a large sample. The authors of the meta-analysis also assert that “for patients with very severe depression, the benefit of medications over placebo is substantial”—which suggests that antidepressants do affect mood through brain chemistry. The mystery remains unsolved.
It’s hard to see how a pill that does nothing, apart from separating us unnecessarily from our money, can also be bad for us. If Kirsch is right and antidepressant drugs aren’t doing anything consequential to our brains, then it can’t also be the case that they are turning us into Stepford wives or Nietzsche’s “last men,” the sort of thing that worries Greenberg. By Kirsch’s account, we are in danger of bankrupting our health-care system by spending nearly ten billion dollars a year on worthless pills. But if Greenberg is right, we’re in danger of losing our ability to care. Is psychopharmacology evil, or is it useless?
The question has been around since the time of Freud. The profession has been the perennial target of critics who, like Greenberg, accuse it of turning deviance into a disorder, and of confusing health with conformity. And it has also been rocked many times by studies that, like Kirsch’s, cast doubt on the scientific validity of the entire enterprise.
One of the oldest complaints is that the diagnostic categories psychiatrists use don’t match up with the conditions patients have. In 1949, Philip Ash, an American psychologist, published a study in which he had fifty-two mental patients examined by three psychiatrists, two of them, according to Ash, nationally known. All the psychiatrists reached the same diagnosis only twenty per cent of the time, and two were in agreement less than half the time. Ash concluded that there was a severe lack of fit between diagnostic labels and, as he put it, “the complexities of the biodynamics of mental structure”—that is, what actually goes on in people’s minds.
In 1952, a British psychologist, Hans Eysenck, published a summary of several studies assessing the effectiveness of psychotherapy. “There . . . appears to be an inverse correlation between recovery and psychotherapy,” Eysenck dryly noted. “The more psychotherapy, the smaller the recovery rate.”
Later studies have shown that patients suffering from depression and anxiety do equally well when treated by psychoanalysts and by behavioral therapists; that there is no difference in effectiveness between C.B.T., which focusses on the way patients reason, and interpersonal therapy, which focusses on their relations with other people; and that patients who are treated by psychotherapists do no better than patients who meet with sympathetic professors with no psychiatric training. Depressed patients in psychotherapy do no better or worse than depressed patients on medication. There is little evidence to support the assumption that supplementing antidepressant medication with talk therapy improves outcomes. What a great deal of evidence does seem to suggest is that care works for some of the people some of the time, and it doesn’t much matter what sort of care it is. Patients believe that they are being cared for by someone who will make them feel better; therefore, they feel better. It makes no difference whether they’re lying on a couch interpreting dreams or sitting in a Starbucks discussing the concept of “flow.”
Psychiatry has also been damaged by some embarrassing exposés, such as David Rosenhan’s famous article “On Being Sane in Insane Places” (1973), which described the inability of hospital psychiatrists to distinguish mentally ill patients from impostors. The procedure used to determine the inclusion or exclusion of diagnoses in the D.S.M. has looked somewhat unseemly from a scientific point of view. Homosexuality, originally labelled a sociopathic personality disorder, was eliminated from the D.S.M. in 1973, partly in response to lobbying by gay-rights groups. The manual then inserted the category “ego-dystonic homosexuality”—distress because of the presence of homosexual arousal or the absence of heterosexual arousal. Further lobbying eliminated this category as well. Post-traumatic stress disorder was lobbied for by veterans’ organizations and resisted by the Veterans Administration, and got in, while self-defeating personality disorder was lobbied against by women’s groups, and was deleted.
And there was the rapid collapse of Freudianism. The first two editions of the D.S.M. (the first was published in 1952, the second in 1968) reflected the psychoanalytic theories of Freud and of the Swiss émigré Adolf Meyer, who emphasized the importance of patients’ life histories and everyday problems. But the third edition, published in 1980, began a process of scrubbing Freudianism out of the manual, and giving mental health a new language. As Healy puts it, “Where once lay people had gone to psychiatrists expecting to hear about sexual repression, they now came knowing that something might be wrong with their amines or with some brain chemical.” A vocabulary that had sunk deep into the popular culture—neurotic, anal, Oedipal—was wiped out of the discipline.
Finally, there has been a barrage of criticism surrounding the role of the pharmaceutical industry in research and testing. The industry funds much of the testing done for the F.D.A. Drug companies donate money to hospitals, sponsor posh conferences in exotic locations, provide inducements to physicians to prescribe their drugs, lobby the F.D.A. and Congress—for example, successfully to prevent Medicare from using its bargaining leverage to reduce the price of medications—and generally use their profits to keep a seat at every table.
So the antidepressant business looks like a demolition derby—a collision of negative research results, questionable research and regulatory practices, and popular disenchantment with the whole pharmacological regime. And it may soon turn into something bigger, something more like a train wreck. If it does, it’s worth remembering that we have seen this movie before.
The early history of psychopharmacology is characterized by serendipitous discoveries, and mephenesin was one of them. A Czech émigré named Frank Berger, working in England in the nineteen-forties, was looking for a preservative for penicillin, a drug much in demand by the military. He found that mephenesin had a tranquillizing effect on mice, and published a paper announcing this result in 1946. After the war, Berger moved to the United States and eventually took a job with the drug company that became Carter-Wallace, where he synthesized a compound related to mephenesin called meprobamate. In 1955, Carter-Wallace launched meprobamate as a drug to relieve anxiety. The brand name it invented was Miltown.
Miltown, Andrea Tone says in her cultural history of tranquillizers, “The Age of Anxiety” (2009), was “the first psychotropic blockbuster and the fastest-selling drug in U.S. history.” Within a year, one out of every twenty Americans had taken Miltown; within two years, a billion tablets had been manufactured. By the end of the decade, Miltown and Equanil (the same chemical, licensed from Carter-Wallace by a bigger drug company, Wyeth) accounted for a third of all prescriptions written by American physicians. These drugs were eclipsed in the nineteen-sixties by two other wildly popular anxiolytics (anti-anxiety drugs): Librium and Valium, introduced in 1960 and 1963. Between 1968 and 1981, Valium was the most frequently prescribed medication in the Western world. In 1972, stock in its manufacturer, Hoffmann-La Roche, traded at seventy-three thousand dollars a share.
As Tone and David Herzberg, in his cultural history of psychiatric drugs, “Happy Pills in America” (2008)—the books actually complement each other nicely—both point out, the anxiolytics were enmeshed in exactly the same scientific, financial, and ethical confusion as antidepressants today. The F.D.A. did not permit direct-to-consumer—“Ask your doctor”—advertising until 1985, but the tranquillizer manufacturers invested heavily in promotion. They sent “detail men”—that is, salesmen—to teach physicians about the wonders of their medications. Carter-Wallace was an exception to this, because Berger disliked the idea of salesmanship, but the company took out the front-cover advertisement in the American Journal of Psychiatry every month for ten years.
Tranquillizers later became associated with the subjugation of women—“mother’s little helpers”—but Miltown was marketed to men, and male celebrities were enlisted to promote the drug. It was particularly popular in Hollywood. Anxiety was pitched as the disorder of high-functioning people, the cost of success in a competitive world. Advertisements for Equanil explained that “anxiety and tension are the commonplace of the age.” People on anxiolytics reported that they had never felt this well before—much like the patients Peter Kramer describes in “Listening to Prozac” (1993) who told him that they were “better than well.”
Miltown seemed to fit perfectly with the state of psychiatry in the nineteen-fifties. Freud himself had called anxiety “a riddle whose solution would be bound to throw a flood of light on our whole mental existence,” and the first edition of the D.S.M. identified anxiety as the “chief characteristic” of all neuroses. The D.S.M. was not widely used in the nineteen-fifties (the situation changed dramatically after 1980), but the idea that anxiety is central to the modern psyche was the subject of two popular books by mental-health professionals, Rollo May’s “The Meaning of Anxiety” (1950) and Hans Selye’s “The Stress of Life” (1956). (Selye was the person who coined the term “stressor.”)
There was a cultural backlash as well. People worried that tranquillizers would blunt America’s competitive edge. Business Week wrote about the possibility of “tranquil extinction.” The Nation suggested that tranquillizers might be more destructive than the bomb: “As we watch over the decline of the West, we see the beams—the bombs and the missiles; but perhaps we miss the motes—the pretty little pills.”
The weird part of it all was that, for a long time, no one was listening to Miltown. Meprobamate carved out an area of mental functioning and fired a chemical at it, a magic bullet, and the bullet made the condition disappear. What Miltown was saying, therefore, was that the Freudian theory that neuroses are caused by conflicts between psychic drives was no longer relevant. If you can cure your anxiety with a pill, there is no point spending six years on the couch. And yet, in the nineteen-fifties, references to Freud appeared alongside references to tranquillizers with no suggestion of a contradiction. It took landmark articles by Joseph Schildkraut, in 1965, proposing the amine theory of depression (the theory that Kirsch thinks is a myth), and by Klein (Kirsch’s early critic), called “Anxiety Reconceptualized,” in 1980, to expose the disjunction within the profession.
The train wreck for tranquillizers arrived in two installments. The first was the discovery that thalidomide, which was prescribed as a sedative, caused birth defects. This led to legislation giving the F.D.A. power to monitor the accuracy of drug-company promotional claims, which slowed down the marketing juggernaut. The second event was the revelation that Valium and Librium can be addictive. In 1980, the F.D.A. required that anxiety medications carry a warning stating that “anxiety or tension associated with the stress of everyday life usually does not require treatment with an anxiolytic.” The anxiety era was over. This is one of the reasons that when the SSRIs, such as Prozac, came on the market they were promoted as antidepressants—even though they are commonly prescribed for anxiety. Anxiety drugs had acquired a bad name.
The position behind much of the skepticism about the state of psychiatry is that it’s not really science. “Cultural, political, and economic factors, not scientific progress, underlie the triumph of diagnostic psychiatry and the current ‘scientific’ classification of mental illness entities,” Horwitz complained in an earlier book, “Creating Mental Illness” (2002), and many people echo his charge. But is this in fact the problem? The critics who say that psychiatry is not really science are not anti-science themselves. On the contrary: they hold an exaggerated view of what science, certainly medical science, and especially the science of mental health, can be.
Progress in medical science is made by lurching around. The best that can be hoped is that we are lurching in an over-all good direction. One common criticism of contemporary psychiatry has to do with the multiplication of mental disorders. D.S.M.-II listed a hundred and eighty-two diagnoses; the current edition, D.S.M.-IV-T.R., lists three hundred and sixty-five. There is a reason for this. The goal of biological psychiatry is to identify the organic conditions underlying the symptoms of mental distress that patients complain of. (This was Freud’s goal, too, though he had a completely different idea of what the mental events were to which the organic conditions corresponded.) The hope is to establish psychiatry firmly on the disease model of medical practice. In most cases, though, the underlying conditions are either imperfectly known or not known at all. So the D.S.M. lists only disorders—clusters of symptoms, drawn from clinical experience—not diseases. Since people manifest symptoms in an enormous variety of combinations, we get a large number of disorders for what may be a single disease.
Depression is a good example of the problem this creates. A fever is not a disease; it’s a symptom of disease, and the disease, not the symptom, is what medicine seeks to cure. Is depression—insomnia, irritability, lack of energy, loss of libido, and so on—like a fever or like a disease? Do patients complain of these symptoms because they have contracted the neurological equivalent of an infection? Or do the accompanying mental states (thoughts that my existence is pointless, nobody loves me, etc.) have real meaning? If people feel depressed because they have a disease in their brains, then there is no reason to pay much attention to their tales of woe, and medication is the most sensible way to cure them. Peter Kramer, in “Against Depression” (2005), describes a patient who, after she recovered from depression, accused him of taking what she had said in therapy too seriously. It was the depression talking, she told him, not her.
Depression often remits spontaneously, perhaps in as many as fifty per cent of cases; but that doesn’t mean that there isn’t something wrong in the brain of depressed people. Kramer claims that there is a demonstrated link between depression and ill health. Even minor depression raises the risk of death from cardiac disease by half, he says, and contracting depression once increases a patient’s susceptibility later in life. Kramer thinks that the notion that depression affords us, as Greenberg puts it, “some glimpse of the way things are” is a myth standing in the way of treating a potentially dangerous disease of the brain. He compares it to the association of tuberculosis with refinement in the nineteenth century, an association that today seems the opposite of enlightened. “Against Depression” is a plea to attack a biochemical illness with chemicals.
Is depression overdiagnosed? The disease model is no help here. If you have a fever, the doctor runs some tests in order to find out what your problem is. The tests, not the fever, identify the disease. The tests determine, in fact, that there is a disease. In the case of mood disorders, it is difficult to find a test to distinguish mental illness from normal mood changes. The brains of people who are suffering from mild depression look the same on a scan as the brains of people whose football team has just lost the Super Bowl. They even look the same as the brains of people who have been asked to think sad thoughts. As Freud pointed out, you can’t distinguish mourning from melancholy just by looking. So a psychiatrist who diagnoses simply by checking off the symptoms listed in the D.S.M. will, as Wakefield and others complain, end up with a lot of false positives. The anti-Freudian bias against the relevance of life histories leaves a lot of holes. But bringing life histories back into the picture isn’t going to make diagnoses any more scientific.
Science, particularly medical science, is not a skyscraper made of Lucite. It is a field strewn with black boxes. There have been many medical treatments that worked even though, for a long time, we didn’t know why they worked—aspirin, for example. And drugs have often been used to carve out diseases. Malaria was “discovered” when it was learned that it responded to quinine. Someone was listening to quinine. As Nicholas Christakis, a medical sociologist, has pointed out, many commonly used remedies, such as Viagra, work less than half the time, and there are conditions, such as cardiovascular disease, that respond to placebos for which we would never contemplate not using medication, even though it proves only marginally more effective in trials. Some patients with Parkinson’s respond to sham surgery. The ostensibly shaky track record of antidepressants does not place them outside the pharmacological pale.
The assumption of many critics of contemporary psychiatry seems to be that if the D.S.M. “carved nature at the joints,” if its diagnoses corresponded to discrete diseases, then all those categories would be acceptable. But, as Elliot Valenstein (no friend of biochemical psychiatry) points out in “Blaming the Brain” (1998), “at some period in history the cause of every ‘legitimate’ disease was unknown, and they all were at one time ‘syndromes’ or ‘disorders’ characterized by common signs and symptoms.”
D.S.M.-III was created to address a problem. The problem was reliability, and the manual was an attempt to get the profession on the same page so that every psychiatrist would make the same diagnosis for a given set of symptoms. The manual did not address a different problem, which is validity—the correspondence of symptoms to organic conditions. But if we couldn’t treat psychiatric patients until we were certain what the underlying pathology was, we would not be treating most patients. For some disorders, such as depression, we may never know, in any useful way, what the underlying pathology is, since we can’t distinguish biologically patients who are suffering from depression from patients who are enduring a depressing life problem.
For many people, this is the most troubling aspect of contemporary psychiatry. These people worry that an easy way is now available to jump the emotional queue, that people who do not “deserve” medical enhancements can now receive them. For example, would you take an antidepressant to get over the pain of being laid off? You might, if you reasoned that since your goal is to get over it and move on, there is no point in prolonging the agony. But you might also reason that learning how to cope with difficulty without a therapeutic crutch is something that it would be good to take away from this disaster. This is not a problem we should expect science to solve for us someday. It’s not even a problem that we should want science to solve for us.
Mental disorders sit at the intersection of three distinct fields. They are biological conditions, since they correspond to changes in the body. They are also psychological conditions, since they are experienced cognitively and emotionally—they are part of our conscious life. And they have moral significance, since they involve us in matters such as personal agency and responsibility, social norms and values, and character, and these all vary as cultures vary.
Many people today are infatuated with the biological determinants of things. They find compelling the idea that moods, tastes, preferences, and behaviors can be explained by genes, or by natural selection, or by brain amines (even though these explanations are almost always circular: if we do x, it must be because we have been selected to do x). People like to be able to say, I’m just an organism, and my depression is just a chemical thing, so, of the three ways of considering my condition, I choose the biological. People do say this. The question to ask them is, Who is the “I” that is making this choice? Is that your biology talking, too?
The decision to handle mental conditions biologically is as moral a decision as any other. It is a time-honored one, too. Human beings have always tried to cure psychological disorders through the body. In the Hippocratic tradition, melancholics were advised to drink white wine, in order to counteract the black bile. (This remains an option.) Some people feel an instinctive aversion to treating psychological states with pills, but no one would think it inappropriate to advise a depressed or anxious person to try exercise or meditation.
The recommendation from people who have written about their own depression is, overwhelmingly, Take the meds! It’s the position of Andrew Solomon, in “The Noonday Demon” (2001), a wise and humane book. It’s the position of many of the contributors to “Unholy Ghost” (2001) and “Poets on Prozac” (2008), anthologies of essays by writers about depression. The ones who took medication say that they write much better than they did when they were depressed. William Styron, in his widely read memoir “Darkness Visible” (1990), says that his experience in talk therapy was a damaging waste of time, and that he wishes he had gone straight to the hospital when his depression became severe.
What if your sadness were grief, though? And what if there were a pill that relieved you of the physical pain of bereavement—sleeplessness, weeping, loss of appetite—without diluting your love for or memory of the dead? Assuming that bereavement “naturally” remits after six months, would you take a pill today that would allow you to feel the way you will be feeling six months from now anyway? Probably most people would say no.
Is this because of what the psychiatrist Gerald Klerman once called “pharmacological Calvinism”? Klerman was describing the view, which he thought many Americans hold, that shortcuts to happiness are sinful, that happiness is not worth anything unless you have worked for it. (Klerman misunderstood Calvinist theology, but never mind.) We are proud of our children when they learn to manage their fears and perform in public, and we feel that we would not be so proud of them if they took a pill instead, even though the desired outcome is the same. We think that sucking it up, mastering our fears, is a sign of character. But do we think that people who are naturally fearless lack character? We usually think the opposite. Yet those people are just born lucky. Why should the rest of us have to pay a price in dread, shame, and stomach aches to achieve a state of being that they enjoy for nothing?
Or do we resist the grief pill because we believe that bereavement is doing some work for us? Maybe we think that since we appear to have been naturally selected as creatures that mourn, we shouldn’t short-circuit the process. Or is it that we don’t want to be the kind of person who does not experience profound sorrow when someone we love dies? Questions like these are the reason we have literature and philosophy. No science will ever answer them. ♦