RISING RATES
Between 2008 and 2018, prescriptions for antidepressant medications in the UK increased from 36 million to 70.9 million.1 A study of twenty-nine countries—including Australia, Canada, and many in Europe—found that all of them had seen an increase in antidepressant prescriptions between 2000 and 2015, with prescriptions doubling on average.2 In the US, between 2009–2010 and 2017–2018, the proportion of adults taking antidepressants increased from 10.6 percent to 13.8 percent.3 Antidepressants are used to treat many disorders, including depression, OCD, and anxiety disorders, so if more people are taking them now than in the past, this could be an indication that mental illness is on the up. By this metric, the number of people with mental illness has exploded.
But of course, there are other possible explanations for the rise in prescriptions. It could be that thanks to increased public awareness and the drive to destigmatize mental illness, people are more willing to seek help when their distress becomes unmanageable. This would be a wholly good thing. It could also be that GPs and other prescribers are getting better at identifying and diagnosing genuine cases of mental illness—again, a good thing. Alternatively, prescriptions might be increasing because some doctors are overprescribing: handing out prescriptions too readily for milder problems, or prescribing because of a lack of available alternatives like talk therapy. All of these could be true at the same time.
If we want to know whether rates of mental illness are genuinely increasing, then ideally we don’t want to rely only on data from people who are already seeking help. A far better option is to measure the prevalence of symptoms in the general population and see if those have increased. In contrast to “clinical” samples, in which the participants being measured are already seeking or getting help, “community” samples are composed of a representative group of people from across society: a range of ages, genders, and ethnicities, for example. Unless people with mental illness diagnoses are actively screened and excluded from the study, and assuming the community sample is large enough, it should provide an illustrative snapshot of the population as a whole, containing people who fall across the full spectrum of mental health and illness.
Studies that take this approach use two main methods to assess symptoms: self-report questionnaires and interviews. Self-report questionnaires are quick and easy to use, so can be distributed to large groups of participants. They present people with a list of symptoms, like “I felt down about myself” or “Someone was controlling my thoughts,” and the respondent then decides how much they have experienced that symptom over a set period of time—the last two weeks, say, or the last six months—usually by selecting from a limited range of possible answers, such as a five-point scale from “Strongly agree” to “Strongly disagree.”
There are many well-established, robust questionnaires that measure symptoms of mental illness, and they cover all different disorders. For example, a thirteen-item questionnaire called the Short Mood and Feelings Questionnaire (SMFQ) was developed in 1995 as a brief measure of depressive symptoms in children and adolescents.4 Respondents read statements such as “I didn’t enjoy anything at all,” “I was a bad person,” and “I felt lonely,” and answer whether each item applied to them over the preceding two weeks using a three-point scale: Not True (0), Sometimes (1), or True (2).
The important thing to remember about self-report questionnaires is that they are not designed to be definitive measures of a disorder. As we will see, context is important when it comes to understanding mental health symptoms, and there’s no context at all when it comes to answers on a questionnaire. For example, someone could score relatively highly on a depression measure if they’ve recently received some bad news or gone through a big life change, but it doesn’t mean they’re clinically depressed. Instead, researchers try to choose a cut-off score that indicates a person might be depressed—those who use the SMFQ, for example, often use a threshold of twelve or above to indicate possible mental illness5—but they warn that this isn’t clear cut. The creators of the SMFQ specifically state that such measures “might better be regarded as ‘nets’ than as screens … [they] will miss a number of cases of interest and pick up a good deal of other material.” In other words, a high score on a measure like the SMFQ is a useful indicator that you might want to assess someone further for potential depression, but in and of itself, it’s not a measure of the disorder.
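To make the scoring concrete, here is a minimal sketch in Python of how a researcher might total SMFQ responses and apply the kind of cut-off described above. The example responses, variable names, and printed messages are invented for illustration; this is not the official scoring tool, and a score at or above the cut-off is only a flag for further assessment, not a diagnosis.

```python
# Minimal sketch (assumed, not the official SMFQ tool) of totaling the
# 13 item responses and applying the commonly used cut-off of 12.

# Hypothetical responses covering the past two weeks,
# coded: Not True = 0, Sometimes = 1, True = 2.
responses = [1, 0, 2, 1, 1, 0, 2, 1, 1, 2, 0, 1, 1]

assert len(responses) == 13 and all(r in (0, 1, 2) for r in responses)

total = sum(responses)  # possible range: 0 to 26
CUT_OFF = 12            # threshold often used to flag possible depression

print(f"SMFQ total: {total}/26")
if total >= CUT_OFF:
    print("At or above cut-off: flag for further assessment (not a diagnosis).")
else:
    print("Below cut-off: not flagged, though symptoms may still matter.")
```

The study described next is, in effect, reporting the proportion of respondents for whom that final threshold comparison came out positive.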
This detail often gets lost though. Consider a study reported in a 2017 Guardian article under the headline “One in Four Girls Have Depression by the Time They Hit 14.”6 This was a population-based study in which more than 11,000 young people completed the SMFQ. The researchers found that 24 percent of fourteen-year-old girls scored at or above the cut-off score of twelve—the figure that made the headline—as did 9 percent of boys. But from a self-report questionnaire alone, we can’t know for sure how many of these young people really had depression. What the study actually showed was that one in four girls reported a reasonably high number of negative feelings in the previous two weeks, like feeling tired or lonely or down about themselves. That’s bad, for sure—but we have to be very careful about equating this with a depressive disorder.
One way around this issue is to use clinical interviews. These are similar to questionnaires in that they ask the person about a series of symptoms. However, they are often considered superior to and more reliable than questionnaires, because they are conducted by a trained interviewer who can ask follow-up questions about the severity and duration of symptoms. But bear in mind they’re not perfect either: they’re more costly and time-consuming to administer, which generally means fewer participants can be included. Also, the interviewers aren’t necessarily clinically trained. US psychiatrist Allen Frances—who has helped devise guidelines for what counts as a mental disorder—has expressed concern about the impact of this. In fact, he says this issue means the oft-quoted “one in four” stat is probably an overestimate:
Never believe the extremely high rates of mental disorders routinely reported by epidemiological studies in psychiatry—usually labeling about 25 percent of the general population as mentally ill in the past year.… Phone surveys are done by non-clinicians following a highly structured format that allows no clinical judgment whether the symptoms reported cause sufficient clinically significant distress and impairment to qualify as a mental disorder.… The rates reported in studies are really only upper limits, not accurate approximations of true rates. They should be, but never are, reported as such.7
Both questionnaires and interviews therefore likely overestimate rates, and only ever give an indication of a disorder. But assuming the same specific questionnaire or interview is used throughout the time period being examined, they are the best tools we have, and better than relying on the number of people seeking help from their doctor. So, bearing in mind all of these qualifications, does the data suggest that rates of mental illness are increasing?
THE EVIDENCE
In 2012, the psychologist and sociologist Joan Busfield published a paper examining exactly this question.8 In it, she reviewed lots of studies that used self-report questionnaires and clinical interviews to measure mental illness at different points in time, the earliest dating from 1952 and the most recent from 2007. There was a mix of results, with some finding an increase in rates, and some finding rates were stable. However, based on the best-quality studies, the evidence was in favor of the latter: no significant change in rates of mental illness in the time period up to 2007. This is how she summarized her findings:
The claims [of increased mental illness and decreased mental well-being] are based on weak foundations. While there is some evidence to support such claims, equally there is plenty of evidence that does not. Indeed, when the same or similar structured diagnostic instruments and similar diagnostic criteria are used to measure the same disorders … the comparison indicates that mental illness has not been increasing over the longer term.
Interesting stuff—but this only takes us up to 2007. It’s perfectly possible that rates didn’t increase back then, but that there’s been a shift more recently.
In 2019, a meta-analysis led by sociologist Dirk Richter looked at data from forty-two studies from all over the world—mainly Western Europe and North America, but also Asia, South America, and Australia.9 There was a combined sample size of 1,035,697 adult participants, and the studies used either self-report questionnaires, clinical interviews, or both. Some measured specific disorders: anxiety disorders, substance abuse, or depression; others measured what the meta-analysis authors called “psychological distress,” a broader concept essentially covering both anxiety and depressive symptoms. In all forty-two studies, data was collected at two different time points, between two and twenty-nine years apart. The earliest was 1978, and the latest was 2015. They found that, averaged across all forty-two studies, there was a statistically significant increase in these types of mental illness and mental distress across time, but it was “small.” They note that this finding is “obviously at odds with the evidence of a tremendous increase in mental healthcare utilization” and also with the “public impression of a mental health epidemic during the same period.”
There is evidence of a similar rise in mental disorders in adolescents. One UK study, led by the National Health Service (NHS), used clinical interviews with approximately 9,000 five-to-fifteen-year-olds, carried out in 1999, 2004, and 2017.10 The interviews covered lots of different potential diagnoses: depression and anxiety disorders, but also others like conduct disorder (characterized by antisocial behavior), hyperactivity disorders like ADHD, and eating disorders. The authors found a “slight increase” in overall rates of depression and anxiety disorders from 1999 to 2017, while the prevalence of other disorders remained constant. In 1999, 9.7 percent of five-to-fifteen-year-olds had a diagnosable disorder; in 2004, this figure increased to 10.1 percent; and in 2017 it was 11.2 percent. So again, we see a similar story: there is evidence of an increased prevalence of mental illness, but the increase is relatively small. Tamsin Ford, a child psychiatrist who developed the survey, said, “It was smaller than we thought.… It’s not huge, not the epidemic you see reported.”11
SELF-HARM
One of our most fundamental biological drives, one we share with all other animals, is the desire to avoid pain. We’re wary of trying risky activities in case we injure ourselves, and we learn very quickly to avoid things that hurt us. Our aversion to pain is ancient, and makes perfect evolutionary sense: we avoid pain to keep ourselves safe. And yet, a significant minority of people behave in ways that fly directly in the face of this. Some people deliberately injure themselves, most usually by cutting the skin with a blade, but sometimes by other means like burning or hitting themselves. This is self-harm: the deliberate, intentional act of damaging your own body tissue. There is considerable current concern about an increase in these behaviors in particular, since they may be an indication of increased rates of mental distress and illness. So what exactly is the relationship here, and what does the data show about changing rates?
Most commonly, people self-harm because they feel overwhelming negative emotions, like sadness or guilt or shame. Pain is highly distracting, so the sensation helps to shift focus away from these intensely distressing thoughts and feelings. You might suppose that perhaps people who self-harm don’t feel physical pain as much as others do, which is why they are able to harm themselves, but the evidence to date suggests they are perfectly able to feel pain.12 In fact, many people self-harm because it’s painful, because of its power to distract. In the words of psychiatrist Bessel van der Kolk, people cut themselves “to replace overwhelming emotions with definable sensations.”13 The desire to escape their negative emotions is so great that the physical pain seems a price worth paying. In short, people self-harm as a form of emotion regulation. As we shall see later in the book, this is a process that develops in adolescence, and one that is disrupted pretty much across the board in many different mental disorders.
Some people self-harm for a slightly different reason, one which might run in parallel with the above: they feel they deserve to be in pain. As we’ll see, your feelings toward yourself are an important part of many mental illnesses. A hallmark of depression, for example, is a feeling of worthlessness. People with depression don’t tend to like themselves. Other disorders, particularly eating disorders, often involve low levels of self-worth and high levels of self-criticism. Once a person dislikes or hates themselves, the idea of hurting themselves no longer seems so foreign. One study, for example, showed that people with high levels of self-criticism are able to endure pain for longer than those with low levels.14 Self-injury becomes a way of meting out a punishment they feel they deserve—for errors they feel they’ve made, or feelings or thoughts they think they shouldn’t experience. As Sian Bradley, a mental health campaigner, explained on an MQ Open Mind podcast:
It’s quite dangerous to just always assume that it’s because somebody might want to end their life, because I do think quite often [self-harm and suicide] are very separate. It’s always just been a way to … deal with these overwhelming emotions, to punish myself for imaginary crimes that I’ve done, for failures, for things like that. It’s not been anything related to wanting to end my life at all, it’s just been something to deal with the pain right now.15
It’s important to note that some people who self-harm once will rarely or never do so again, and some cause themselves only superficial injury (such as repeatedly scratching their skin). But for others, self-harm becomes chronic and severe—sometimes requiring hospital treatment—and is the only way they can find to cope with their feelings. This is why the oft-thrown criticism of people self-harming “to get attention” seems odd to me. People who self-harm are intensely distressed, and if that happens to be noticed—as a side effect—well, that’s good. They need to be noticed; they need support and help.
There is some ongoing debate about whether frequent and severe self-harm should be a mental illness in and of itself. But what we do know is that in people who have a mental illness, self-harm often appears. The exact number varies a lot between studies, but somewhere in the region of 30–82 percent of individuals with a mental disorder have self-harmed.16 It was once considered to be largely tied to borderline personality disorder, a disorder characterized by poor emotion regulation and problems with social relationships. But it’s actually a transdiagnostic phenomenon, appearing in depression, anxiety disorders, OCD, post-traumatic stress disorder, eating disorders, and substance use disorders, among others. In addition, even though the goal of self-harm is specifically not suicide (as Bradley says), people who self-harm are at increased risk of later suicide attempts. This all suggests that there is some relationship between self-harm and mental distress and disorder, which makes intuitive sense, particularly when this behavior is chronic and severe.
To return to our original question: the best available research suggests rates of self-harm are increasing in our society. One study compared rates of self-harm among fourteen-year-olds in 2005 and 2015.17 In 2005, the teenagers were asked whether they had ever tried to harm or kill themselves: 11.8 percent said yes. The 2015 group were asked a slightly different question (because the groups came from two different cohort studies): whether they had hurt themselves on purpose in any way in the past year. Because of the narrower time frame, you might expect fewer teenagers to have answered yes in 2015, all other things being equal. In fact, 14.5 percent said yes.
Another study conducted clinical interviews with a large group of sixteen-to-seventy-four-year-olds in England in 2000, 2007, and 2014.18 At each time point, between 6,000 and 7,000 people were interviewed. Across the whole data set, the researchers found that the number of people who reported ever engaging in self-harm (across their lifetime) increased from 2.4 percent in 2000 to 6.4 percent in 2014. The biggest increase was among females aged sixteen to twenty-four, whose levels of self-harm increased from 6.5 percent in 2000 to 19.7 percent in 2014. This is a really big jump, and one we need to consider carefully later in this book, when we think about why these rates might be increasing.
INCREASE IN SUICIDES
“Killing oneself is, anyway, a misnomer. We don’t kill ourselves. We are simply defeated by the long, hard struggle to stay alive.” So wrote Sally Brampton in her memoir about depression, Shoot the Damn Dog, published in 2008.19 I read it that year, in the months before I became unwell. When my depression hit, I wished I hadn’t. The book captures astutely the aching awfulness of depression, and her description of just how bad things could become haunted me. (This is why I generally don’t recommend memoirs of mental illness for people who are in the depths of it themselves—they are more useful for friends and families, to understand, or for when those who suffer are feeling better.) It’s ultimately a hopeful book—it’s about Brampton making sense of her depression and suicidal thoughts, and her gradual recovery. So my heart sank when, by chance, I read an article in 2016 saying she had died. She had taken her own life.
Around 800,000 people across the world take their own life each year.20 Brampton’s book stuck with me because it captures so well what suicide is about: it’s not about wanting to die, she wrote, so much as “a fervent wish not to go on living.” People die by suicide when they feel they can no longer stay alive. Sometimes people’s lives become sufficiently painful and difficult that they take their own lives in the absence of any diagnosable mental disorder, but most of the time they do have one. (The exact percentage is hard to ascertain. It is often quoted that around 90 percent of people who kill themselves had a mental disorder,21 but this figure is derived from “psychological autopsies,” which involve interviewing family members about the deceased person to try to understand the contributing factors. It’s possible that, in the wake of a suicide, family members are particularly primed to look for signs of preceding mental illness.)
Globally, suicide rates are decreasing. Data from the Global Burden of Disease Study in 2016 showed that, when you combine information from sixty-three countries, rates dropped by a third between 1990 and 2016.22 This seems mostly to be driven by considerable declines in suicide in China and India—and these drops could be because of a host of reasons: economic growth, urbanization, better standards of living, improved access to medical care, and restricted access to certain methods of suicide. Around 80 percent of suicides occur in low- and middle-income countries, so these changes in quality of life make an important difference to worldwide rates.23 However, the concern is that, in the West, rates are actually increasing.
In 2019, the Office for National Statistics (ONS) released data about suicides in the UK in 2018. When everyone is grouped together (across age and sex), the data showed a small but significant increase from the previous year: there were 10.1 suicides per 100,000 population in 2017, and 11.2 suicides per 100,000 population in 2018. This increase followed several years of decline and returned rates to a level last seen in 2013. A single-year increase like this shouldn’t immediately trigger alarm bells—the official report from the ONS said: “Suicide rates tend to fluctuate on a year-to-year basis. It is therefore too early to say whether the latest increase represents a change in the recent trend.”24
When we break things down by age and sex, however, something more concerning emerges. In females aged ten to twenty-four, the suicide rate has steadily increased for several years, from 1.8 deaths per 100,000 females in 2012 to 3.3 deaths per 100,000 in 2018. Public attention to suicide is usually focused on males, as they make up 75 percent of suicide deaths (possibly because males experiencing psychological distress are less likely to seek help, so things escalate for them). But this data, combined with the self-harm data, suggests that something might be shifting in adolescent females. In the United States, meanwhile, data released in 2018 from the National Center for Health Statistics showed that, between 1999 and 2017, the rate of suicide increased among both males and females, across all age groups.25 Among ten-to-twenty-four-year-olds specifically, the rate of suicide per 100,000 increased from 3.5 to 7.5 in females and from 18.7 to 26 in males. The absolute numbers are small, of course—it is exceptionally rare for a young person to take their own life—but each one of these individuals represents a permanent and aching absence in a family, a friendship group, a community—a real person in distress who didn’t get the help they badly needed. Understanding what might account for these rises, as we do later in this book, is important work.
IMPACT OF A PANDEMIC
Everything in this chapter so far has taken us up to around 2018. But of course, since then, there has been a global, monumental change. Almost as soon as COVID-19 took hold, there was a wave of concern about how it might increase rates of mental illness and suicide. The full effects of the pandemic will not be known for some time, and the slow cycle of research will also delay our understanding, but the urgency of the situation has expedited things a bit: there’s been an enormous effort to collect and share data already. Perhaps unsurprisingly, this data indicates that COVID-19 is having a notable impact on our psychological well-being.
In October 2020, a group of UK researchers published a study about the mental health effects of the first month of lockdown. They were able to make use of an existing, ongoing study: the UK Household Longitudinal Study, which started in 2009 and regularly collected interview data from participants.26 In April 2020, over 17,000 participants aged sixteen to eighty completed an extra survey, relating to COVID-19. They answered questions about their financial situation over the last two months and also filled in the General Health Questionnaire-12 (GHQ-12). This is a frequently used measure of “mental health, distress, and well-being” that determines how often the respondent has experienced twelve particular symptoms over the past few weeks. The questions include “Have you recently lost much sleep over worry?,” “Have you recently been able to enjoy your normal day-to-day activities?,” and “Have you recently been feeling happy, all things considered?” For each question, there are four options (slightly tweaked depending on the question): Not at all (0 points), No more than usual (1), Rather more than usual (2), and Much more than usual (3). This is clearly not an in-depth measure of any specific disorder, but it is a quick broad-brush measure of distress that can be a screening tool for diagnosable disorders.
In the previous year, the group’s mean score on the GHQ-12 was 11.5 (out of a maximum score of 36); in April 2020 it was 12.6, a statistically significant increase. The researchers also calculated how many people were considered to have “clinically significant levels of mental distress.” To do this, they gave participants zero points for either of the first two answers (Not at all, No more than usual) and one point for either of the more affirmative answers (Rather more than usual, Much more than usual). Scored this way, the twelve-item questionnaire has a maximum score of 12, indicating maximum distress. The researchers used a threshold of 4 or above, as others have done, to indicate a clinically significant level of distress. Across the previous year, 18.9 percent of people met this threshold; in April 2020, it was 27.3 percent.
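As a rough illustration, here is a short Python sketch of the two scoring approaches described above: the standard 0 to 36 Likert total, and the dichotomized "caseness" score with its threshold of 4. The example responses are hypothetical, and the study's own analysis of course involved far more than this.

```python
# Illustrative sketch of the two GHQ-12 scoring methods described above.
# Each of the 12 items is answered on a four-point scale:
# 0 = Not at all, 1 = No more than usual,
# 2 = Rather more than usual, 3 = Much more than usual.

responses = [1, 2, 0, 1, 3, 1, 2, 1, 0, 2, 1, 1]  # one hypothetical respondent
assert len(responses) == 12 and all(r in (0, 1, 2, 3) for r in responses)

# Likert scoring: sum the raw answers (range 0-36),
# as in the mean scores of 11.5 and 12.6 reported above.
likert_total = sum(responses)

# "Caseness" scoring: 0 points for either of the first two answers,
# 1 point for either of the last two (range 0-12).
caseness_total = sum(1 for r in responses if r >= 2)

THRESHOLD = 4  # cut-off used in the study for clinically significant distress
print(f"Likert total: {likert_total}/36")
if caseness_total >= THRESHOLD:
    print(f"Caseness score {caseness_total}/12: meets the distress threshold.")
else:
    print(f"Caseness score {caseness_total}/12: below the distress threshold.")
```

The headline figures of 18.9 percent and 27.3 percent are simply the proportion of respondents whose caseness score cleared that threshold at each time point.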
The director of the Institute for Fiscal Studies, Paul Johnson, tweeted about the findings from this study in June 2020, when they were shared as part of a working paper, writing: “The mental health consequences of lockdown have been severe.… This is a big, widespread and swift deterioration.” And yet, it would have been more surprising if we hadn’t seen increases in people’s distress and unhappiness around April 2020. The GHQ-12 questions include items about feeling under strain, feeling like you’re not playing a useful role in society, and feeling unable to concentrate. It’s understandable that more people said yes to those things right at the start of a global pandemic. When that data was collected, everything had suddenly changed and losses were recent and acute. These initial numbers need to be interpreted with caution: psychological distress in response to temporary stress and disruption is normal, and not necessarily an indicator of mental illness.
The trouble is that the lockdown of spring 2020 was not a one-off but the start of a prolonged period of disruption. Dirk Richter, who has been studying changing rates of mental illness for twenty years, says the long-term nature of the pandemic is critical. In October 2020, he said: “The data from earlier recessions, for example, very clearly say that the long-term fallout will be much more important, in terms of suicide rates, than what happened during the height of the distress. As long as there was no second wave, I was optimistic. But now I think things are really going down.”27
It was also evident almost immediately that some people were being hit far harder by the pandemic than others. In the above study, the increases in GHQ-12 scores were highest for eighteen-to-thirty-four-year-olds, women, and people living with young children. They were also higher for people who were employed before the pandemic, compared to those who were already unemployed—presumably because these people experienced a bigger shift in their daily life, including potential job losses.
For another study, over 12,000 adults were interviewed weekly for the first three weeks of lockdown (25 March to 14 April 2020).28 The participants were asked whether they had experienced any of ten “adversities.” Some were financial (e.g., whether the participant or their partner had lost their job), some were related to basic needs (e.g., whether they had lost their accommodation or were unable to access sufficient food), and some were directly related to the virus (e.g., whether the participant had COVID-19 or somebody close to them was hospitalized or had died). In week three, for example, 16.7 percent reported a major loss in household income and 3.5 percent reported the loss of someone close to them. Most pertinently, the relative risk of experiencing these adversities correlated with the participant’s socioeconomic position (SEP)—a combined measure of education level, household income, employment, and housing status. Those with the lowest SEP experienced the highest number of adverse events, and were especially likely to experience financial hardship and difficulty accessing basic needs (food, medication, and accommodation).