INTRODUCTION
How a Lifelong Obsession with Aging and Health Became My Career
· JOSH MITTELDORF ·
Fear of Death, and Fear of Fear of Death
I was three years old when my father told me that someday I was going to die. I was terrified. The thought that a few decades of life would be followed by an eternity of nothingness obsessed me, drove me frequently into a panic, and sent me crawling into my parents’ bed in the middle of the night. I know now that this kind of fear is not uncommon in young children, but attaching it to something so abstract and distant seems unusual.
I had immediate, firsthand experience with the crippling incoherence that naked fear could evoke. No one had to tell me that fear itself—not death but the fear of death—was a horrible, unbearable plague. But as a small child, I was silenced by shame and embarrassment. I presumed that susceptibility to fear was my peculiar weakness and that I was all alone in having to come to terms with it. I learned to distract myself and put thoughts of death out of my head. I told myself that someday I would face the specter of death, but for now, it was just too uncomfortable. The bargain that I made was that I would allow myself the luxury of distraction with the promise that I would return to this issue of mortality and sort it out when I was thirty-five. Yes, even as a preschooler, I had a certain affinity for numbers, and I thought thirty-five was safely tucked away in the remote future but still only halfway to the age my father told me I might expect to live.
As it turned out, I was off by about a decade. At age thirty-five, I was delightfully occupied with the adoption of my first daughter, but when I was forty-six, a confluence of inner readiness and outer events drew me into the contemplation of death. It began with scientific study, which led not only to other studies and this book but also to a more youthful body, renewed energy, better health, and a feeling of relaxation and empowerment in an area that had once paralyzed me with fear.
Cancer Concerns and Pollution Paranoia
I came of age in the 1960s, just as the term “natural” arrived on the scene. I thought then, as most people still believe, that staying healthy and improving my odds for a long life were the same thing. My ideas about how to stay healthy were to give my body all the things it needed: vitamins, minerals, and complete protein, plenty of rest, moderate exercise, and a low-stress lifestyle. I aimed to sleep nine hours a night for the same reason that I aimed for 120 grams of protein—that’s a pound of lean meat every day—because more is better.
I was afraid. Most of all, I feared cancer. Any tiny dose of radiation, any food additive or pesticide or pollutant in the air might trigger a carcinogenic mutation. I now think of cancer as a systemic disease, but at the time, my belief was that a single rogue cell, just one unlucky break, could spread to kill me at any time. That belief in itself was a recipe for paranoia. The idea that I was being poisoned by modern life supplied a target for my obsession. Air pollution made me nervous, and cigarette smoke drove me to distraction. This was the 1970s, and cigarettes were ubiquitous, even in California.
I was an astrophysics student at UC–Berkeley, using computer models to study the cosmos. Though I was a scientist by temperament as well as profession, it would be years before it occurred to me to look into the science of aging or even to learn what medical science had to say about the lifestyle correlates of longevity.
I ate crunchy granola and whole-grain bread. I tried nutritional yeast and lecithin and spirulina and became enthusiastic with each new health and longevity miracle I read about. Vegetarianism was still confined to a fringe of health nuts and Seventh-Day Adventists. When I began yoga in 1972, I think Berkeley was one of the few places in the country where you could find a weekly yoga class. Over the years, yoga would train me in a sensitivity to my own body, providing an experiential stream of knowledge that I now think of as complementary to clinical data.
One evening, about six months into my discovery of yoga, I was lying on the floor in savasana (deep relaxation—literally “corpse pose”) when the voice of my revered and beloved teacher suggested to the class that perhaps we might find our practice leading to eating less meat. I was startled awake and sat bolt upright. In previous weeks, she had suggested cutting back coffee and alcohol and TV and marijuana (this was Berkeley) and cigarettes—it all went down smoothly because I had never been attracted to any of those things. But what could she be thinking, lumping meat with intoxicants and mind-altering drugs? I had never questioned that a diet ultrahigh in protein was keeping me strong and healthy. The phrase “new age hokum” hadn’t been invented yet, but those are just the words for which my mind was fumbling.
Six weeks later, I was a vegetarian, and I have never looked back. My teacher’s hypnotic suggestion awakened my latent discomfort with the killing of animals. It had nothing to do with science. Now there is evidence linking low meat consumption with longevity, but I certainly didn’t know of any at that time.
It was 1982 when I made friends with Howie Frumkin (now dean of the University of Washington School of Public Health). Even at that time, fresh out of med school, his easy warmth and twinkling eyes coexisted naturally with a commanding intellect. I saw him in his office at the Hospital of the University of Pennsylvania and confessed that I had been losing sleep over worries about cancer for as long as I could remember. “Cancer is a disease of old age,” he told me. He sat me down and showed me the charts. With the exception of childhood leukemia, most forms of cancer were a very low risk for young people, rising steeply and peaking between the ages of seventy and ninety. This was completely new to me and very welcome to hear. I was relieved of an obsession.
Alfalfa and Aflatoxin
In the mid-1980s, Bruce Ames launched another seismic shift in my longevity program with a series of articles in Science magazine about natural pesticides. Ames had been studying carcinogens in the diet and had come to prominence with the invention of the Ames Test, a quick way to screen food additives for carcinogenic potential that has saved billions of dollars for the industry and obviated the slaughter of thousands of innocent rabbits.
I was the stereotypical Mr. Natural, based on my belief that pesticides and preservatives in my food were the biggest threats to my life and health. Along came Ames with a new story. It seems that humans didn’t invent insecticides. For as long as there have been beetles and grasshoppers on the planet, plants have been manufacturing chemical weapons to protect themselves. Some of these pesticides have been found to be carcinogenic in tests on mice and rats. But according to principles of the Food and Drug Administration (FDA), they cannot be banned or regulated or even labeled. They come under the category GRAS—“Generally Recognized as Safe.”
For many years after the Ames Bomb, I annoyed and inconvenienced my family (my wife was most patient) by refusing to eat black pepper, beets, alfalfa sprouts, peanut butter (aflatoxin), parsnips, potatoes (solanine), basil, celery, mustard, and spinach (oxalic acid). These items topped Ames’s ranking of hazards in the American diet, based on a combination of lab tests for carcinogenicity and prevalence in our foods.
Broccoli was on that list too … but how could I give up broccoli?
In the spring of 2014, a distant relative appeared from nowhere and e-mailed me with a family tree on the side of my paternal grandmother. She told me that Bruce Ames is my second cousin once removed. I was charmed. At eighty-five, Bruce still runs an active lab at UC–Berkeley, his eyes twinkle more than ever, and he continues to publish innovative research.
I will always have the deepest respect for Ames and his work, but I no longer place so much importance on his approach to toxins in the diet. A modest load of toxins in the diet is actually good for us, and we’re likely to live longer with the toxins than without. We’ll return to this counterintuitive idea—one of the main messages of this book—in chapter 6.
Epigenetics and an Epiphany
In January 1996, I read a Scientific American article about caloric restriction and life extension. Professor Richard Weindruch, a biologist from the University of Wisconsin, told of his research with animals that lived longer the less they were fed. It wasn’t just a quirk of a lab rat’s metabolism. The experiments included dogs and spiders, yeast cells and lizards, and now Weindruch was working with rhesus monkeys. They all lived longer on a starvation diet.
It was this revelation that began the shift in thinking that has carried me to my current understanding of aging, its evolutionary origin, and its deep relationship to health. Within a few days of reading this article, going for long walks in the park, scratching my head, I knew I had been battling the wrong enemy. Aging is an inside job, a process of self-destruction. I drew this message from the fact that the body is able to forestall aging when it is in extreme deprivation, desperately slashing its energy budget to conserve every calorie. This means that when food is plentiful, aging is avoidable, but the body is not trying to avoid it. Aging, it seemed, must be programmed into our genes.
You can call it a lucky guess, or perhaps it was a big-picture perspective that only an outsider in the field would have. The insight that aging is programmed into our genes has been at the center of my research ever since, and it is also the principal theme of this book. Unknown to me, there was other evidence for this insight even in 1996. There is much more such evidence today. Some of the genes that regulate aging have been discovered, and some of the epigenetic mechanisms that make aging happen have begun to come to light. (Epigenetics is the science of how genes are turned on and off.)
This theoretical insight came with a bonus in the form of practical implications for self-care. All that good whole-wheat bread and organic tofu had begun to make its mark, and my midriff had the beginning of a spare tire for the first time in my life. I had been fortunate to have one of those metabolisms that let me pack away food without consequences, but now my weight was about ten pounds over what it had been in my twenties and thirties. I immediately began to cut back, to lose weight by brute willpower. It was harder than I’d thought it would be to lose the weight, but it felt great. I had so much energy that I took up running, and I did my first half marathon that fall. I also experienced, at last, a loosening of the grip in which fear of death had held me since childhood.
I was learning that I had been looking for longevity in all the wrong places. My emphasis on maximizing nutrition and minimizing toxins had been misguided. I had missed a great truth about keeping healthy, but more than this, I had misunderstood the nature of the enemy. All my thinking had been rooted in vague, unformed ideas about what aging was and how it worked. For me, science, health, and aging were beginning to come together for the first time.
The health message was surprising and disorienting, but there was another thread to this story that tickled my intellectual interest. I wondered about the caloric restriction (CR) effect and how it might have evolved. Every function of every cell and organ in our bodies has been shaped by a process of evolution and can only be understood in that context. How could life extension be an adaptive response to starvation?
Many different animal species respond to CR. This can only mean that there is some very general value in living longer when food is scarce. If evolution has produced this same expedient for so many different species, then it must have a purpose, and that purpose must be so general that it applies to yeast cells and to dogs.
But what could that purpose be? I guessed the reason why starving animals should have access to extra strength: it must be to help them survive through a famine. It was still unclear to me why aging would be programmed into our genes, but for some reason, nature prefers a fixed, predictable life span to a life span that is subject to the vicissitudes of chance. If natural selection was favoring a length of life that is not too long or too short, then whenever there’s a famine, it makes sense that aging would get out of the way, since so many lives are already being cut short by starvation. Conversely, aging has to take a big bite out of the life span in times of plenty; otherwise, there would be no room to expand the life span when conditions changed.
From an Age Before Spam
These ideas about caloric restriction and the evolutionary origin of aging were fascinating to me—indeed, the most intellectual excitement I had experienced since encountering the big ideas of cosmology a decade earlier. I put together an essay—concise, naïve, a bit self-important—and sent it to an e-mail list of about a thousand evolutionary biologists that I had found online. Now, this was a time when the World Wide Web was young and text-based. E-mail had begun to expand from government and universities to more general use, but there was as yet no spam. Can you remember a time when the Internet was pristine? There was a gentlemen’s agreement that, even though bulk e-mailing was essentially free, we would not permit the Internet to be polluted with unsolicited commercial messages. So my message wasn’t discarded or ignored.
I received about thirty replies, some of them very generous and solicitous. All of them told me that my thinking was in error, because evolution doesn’t work for the community but only for the individual. People took the time to explain to me that I was making a common error, one that other scientists had made in the past, but evolutionists had corrected their thinking in the 1970s. There is no such thing as “group selection.” Natural selection works solely for the benefit of the individual. They told me to read Adaptation and Natural Selection by George C. Williams.
So here was a genuine scientific mystery dropped into my lap. I rapidly began to take ownership of it and more gradually gave myself over to its exploration.
I had imagined that evolution might arrange for animals to hold some strength in reserve for hard times. How did this require “group selection”? And if it did, what was so unscientific about “group selection”? I had a lot to learn. I was not so arrogant as to doubt that these experts who had graciously answered my question might know something I didn’t know. But I was curious that none of them offered an alternative explanation for the evident paradox: that starving animals are able to live longer, healthier lives than those who have all the nutrition they need. I resolved to keep an open mind about whether my thinking had been flawed in the way that these experts suggested, or whether perhaps I was seeing something the experts had missed.
The Experts Were Right
As a scientist, I’m more of a thinker than a reader. Faced with a new problem, I’m inclined to go for a long walk and allow my thoughts to sift or to scratch equations in a notebook or even to try a stripped-down example represented by numbers in a spreadsheet. Compared with Googling the answer, this process is terribly inefficient. It also leads me astray, and I get things wrong at least as often as I get them right. I continue in this way first because there is no satisfaction so sweet for me as engaging with a scientific puzzle. I rationalize the inefficient use of time with the hope that trying lots of wrong ideas and following them to the end gives conviction to my knowledge and a depth to my understanding of how the world works.
But after many walks in the park, I realized the experts were right to say that if evolution has a preference for aging, this would have to be via group selection. If Darwinian competition were between individuals that live a longer time and individuals that live a shorter time, the ones that live a long time would leave more offspring, and their genes would come to crowd out the short-lived genes. This is not to say that aging cannot evolve—there is still the possibility that a community of individuals with a fixed life span is better adapted in some ways than a similar community where life spans are allowed to vary all over the map. For this advantage to become a factor in evolution would require competition of one community against another, and that is what the experts meant by “group selection.” (You don’t have to understand this now—I certainly took long enough to appreciate it. It will become clear bit by bit.)
But all the walks in the woods that I had enjoyed that summer could not help me understand what the objection to group selection was. Why were the experts so convinced that group selection was not part of evolution’s toolbox? After all that thinking through things on my own, I finally read the book by Williams that so many evolutionists had recommended. I found it stimulating and thought provoking. It opened my eyes to a more concrete and disciplined way of thinking about evolution. But I still didn’t understand what was wrong with group selection. Could this be just a scientific prejudice among all these experts?
Old Darwin and New Darwin
I affiliated myself with the Biology Department at the University of Pennsylvania because it was convenient and close to home. I talked to professors, took courses in evolution, and read books to learn how evolutionary biologists think. I learned that for the last seventy years, the field has been dominated by a methodology known as “population genetics” or the “new synthesis,” or a third name—the one I’ll use throughout this book—“neo-Darwinism.”

Neo-Darwinism is not the same thing as Darwinian evolution. Darwin was a naturalist, a student of the natural world who described what he saw and tried to integrate his observations with understanding. His thinking was (appropriately, I think) vague and even modestly self-contradictory at times—his enthusiastic contemporary Samuel Butler even compared it to the “twitching of a dog’s nose”—as he sniffed out the various ways that natural selection can work. Neo-Darwinism arose in the 1930s as an attempt to make Darwinian theory more quantitative and rigorous. In fact, the field was founded by mathematicians who knew little about actual biology.

As a physicist, I found myself immediately comfortable with the style and the methods of neo-Darwinism. It is straightforward and logically compelling. But the more I became immersed in the theory, the more I found that neo-Darwinism doesn’t work very well as a description of real life. Several big things about life in general just don’t add up within its framework. There is aging and death—in the coming chapters, I’ll try to show you why I don’t think the basic facts about aging can be accounted for within neo-Darwinism. But in addition, neo-Darwinism can’t account for sexual reproduction or for the structure of the genome, which seems actually “designed” to make evolution possible; nor does it have a place for the recently established phenomena of epigenetic inheritance and horizontal gene transfer.
* * *
One day in the Biomedical Library at Penn, I looked up a paper by one of the most respected theorists in evolutionary science, John Maynard Smith. The title of the paper was “Group Selection.” Perhaps Maynard Smith’s writing was more lucid than I had encountered before, or perhaps I was finally paying attention and giving the authorities their due. In one of those moments when the vase becomes two faces, I understood why the best theorists in the field had rejected the idea that natural selection could act on groups.
Evolutionary novelty depends on mutations that arise by chance. The mutation appears first in a single individual, and from there, it either spreads through the population or dies like the proverbial flash in the pan. Suppose a mutation arises that is bad for the success of the individual but would ultimately be good for the community if everyone should adopt the trait. A tendency toward cooperation is a good example. It does nothing for the individual and in fact will probably put the individual at a disadvantage if she is the only one cooperating while everyone else is taking advantage of her help but offering nothing in return. Of course, a cooperating community can be vastly more effective at group tasks than a community in which every individual is for himself alone. But how do we get there? The gene for cooperation started out in just one individual, and there is no reason to suppose it can spread to dominate the group. In fact, natural selection is working against it from the start. If such a gene can’t spread through the group—an uphill battle—then we’ll never find out whether it benefits the group or not.
I understood for the first time why the experts were dubious about group selection. I remember feeling queasy in my stomach as I bicycled home from the library.
A Worthy Scientific Puzzle
So I was not to have an easy victory or even the smug satisfaction of knowing that I had seen in an instant what evolutionary experts failed to see. But over the ensuing days, I came to realize that what I had was a genuine conundrum, a worthy scientific puzzle. It was just as clear to me as ever that evolution had chosen to cut life spans short, had installed aging in the genomes of the great majority of animal species. But I now appreciated the paradox: aging is bad for the individual, good for the community. How did the genes that control aging manage to spread and take over any community, a prerequisite for their communal advantage to become effective? How has aging managed to persist in the genome when any rogue individual might mutate away his aging genes and become new ruler of the roost? This was a puzzle with which I might engage deeply and find challenge and satisfaction. With some luck, I would be able to transfer the skills in mathematical modeling that had been so useful in my physics career and apply them in a new area.
That same year, I came across a feature in the Tuesday “Science” section of The New York Times about a professor at Binghamton University who had devoted his career to the study of group selection. It began,
DAVID SLOAN WILSON was a newly minted Ph.D. in his early 20s when he went to visit one of evolutionary biology’s leading theorists and tried the intellectual equivalent of selling atheism to the Pope.
“I just walked into his office and said, ‘I’m going to convince you about group selection,’” Dr. Wilson recalls. He failed. His target, George C. Williams of the State University of New York at Stony Brook, had made his reputation by effectively wiping that very idea off the intellectual map only a few years before, in a 1966 book, Adaptation and Natural Selection.
I wrote to David, and he was gracious enough to invite me to drive up and spend an afternoon with him at Binghamton. In the ensuing months, we collaborated closely, and it felt like a homecoming for me. Over time, he introduced me to the friendly, cooperative left wing of the community of evolutionary biologists—those who study and advocate the process of group selection. I had a fascinating problem to investigate. I had a mentor. And I had a foot in the door.
Copyright © 2016 by Josh Mitteldorf and Dorion Sagan