1
SHAREHOLDER AND STAKEHOLDER CAPITALISM
Gabriella Corley was seven years old when the family pediatrician diagnosed her with Type 1 diabetes. Like 1.6 million other Americans, she did not produce enough insulin, the hormone responsible for maintaining proper glucose levels in the blood. For most of human history, the condition was a death sentence—sooner or later an unregulated spike in blood sugar would have sent her into ketoacidosis, which would have led to a coma and eventually death. But luckily, Gabriella’s diagnosis came in 2014.
Nearly a century earlier, a trio of scientists at the University of Toronto discovered a method for extracting insulin from the pancreases of cows. In 1922, a fourteen-year-old boy named Leonard Thompson became their first patient. As Leonard lay dying in a bed at Toronto General Hospital, the scientists injected him with their insulin solution. Within hours, his blood sugar levels had returned to normal. Soon after, the trio visited one of the large wards where the hospital kept children dying from ketoacidosis. They went from bed to bed injecting patients with insulin. By the time they reached the final patients, the first few had already started to wake from their comas, their families rejoicing around them. As a parent, I imagine it felt like witnessing a miracle.
Back then, most people with Type 1 diabetes died within two years of their diagnosis. Insulin gave them a new lease on life. Realizing the implications of their discovery, the three scientists—Frederick Banting, Charles Best, and James Collip—sold the patent for insulin to the University of Toronto. The price of their miracle drug: three Canadian dollars (roughly thirty-two 2020 US dollars), split three ways. As a reward for discovering insulin, Banting, Best, and Collip got to treat themselves to lunch.
Even their small sale was controversial—at the time, many considered it inappropriate for scientists and universities to patent medical innovations at all. The University of Toronto ultimately let pharmaceutical companies manufacture insulin royalty-free. In 1950, George W. Merck, then president of Merck, delivered a speech in which he famously said, “We try never to forget that medicine is for the people. It is not for the profits.” A century after insulin’s discovery, that mindset is largely nonexistent.
Today, three pharmaceutical companies dominate the sale of insulin, which comes in fast-acting and slow-acting formulas and is delivered through pumps or pens. Instead of using their market power to make the drug more affordable for diabetics, the companies have leveraged their clout to increase their profit margins.
Andrea Corley, Gabriella’s mom, works as an administrative assistant and her husband as a janitor, both for the public school district in Elkins, West Virginia, not far from where I grew up. Together, they make approximately $60,000 a year. Their health insurance is provided by the West Virginia Public Employees Insurance Agency. When Gabriella first got her diagnosis, the Corleys’ health plan covered all her supplies, Andrea told me. But after the first year, co-payments for Gabriella’s insulin prescription increased to around twenty-five dollars a month. The family joined a program that covered the payments as long as Gabriella regularly met with a pharmacist, but their insurance company capped the coverage at two years. As Andrea noted, Type 1 diabetes does not go away after two years. “It’s not something that she can fix. It’s not something she can reverse. She’s stuck with it the rest of her life.”
Then the Corleys discovered that Gabriella was allergic to an ingredient in her medication. She switched to a different brand of insulin, but the insurance provider informed the family that it would cover only 20 percent of the cost. That left the Corleys paying $300 per month out of pocket, not including the cost of pumps and other supplies. Their doctor also recommended they keep an EpiPen at the house in case Gabriella developed another serious allergic reaction. That added another $200 to the cost of keeping their daughter alive.
This scenario would be unimaginable in many parts of the world. Dozens of countries provide universal health care to their citizens. The quality of care varies widely, but everyone can get it. Among the world’s most developed countries, the vast majority of governments provide universal or near-universal health coverage to their citizens. Two countries—Switzerland and the Netherlands—offer universal coverage through a heavily regulated and subsidized market of nonprofit providers. In others, like Germany and Chile, a small portion of the population pays for private insurance while the rest are covered by government plans.
The US government provides health insurance for the elderly and the poor through the Medicare and Medicaid programs. The 2010 Affordable Care Act expanded Medicaid, but enrollees still needed to pay hundreds of dollars per year. While the developed world embraces health care as a human right, the United States instead relies on the market to care for many of its citizens. Approximately three in five Americans receive their health care through private insurance companies. As we will see, when decisions of life and death are left to the market, people do not always get the best results.
Eventually, the Corley family secured affordable insulin and EpiPens, but only through a special program at their pharmacy, one that kicked in after the family’s insurance dropped its coverage of the drugs. The program gave them a narrow window of affordability within a convoluted system, and that window could still close at any time. Even with the savings program, even with insurance, the Corleys still spend between $14,000 and $18,000 on health care every year. Andrea Corley said that even as they struggle to keep up, she fears for the future. Insulin prices are rising rapidly, and Gabriella, now twelve years old, may not be able to afford the medication when she grows up.
“I’m afraid that by the time she gets old enough to get her own, that she won’t even be able to,” Andrea said. Her fear is not unfounded. When adults lose coverage and cannot afford their own diabetes medication, they discover just how brutal the American health care system can get.
In 2017, Alec Smith decided to move out of his childhood home in Minneapolis and get his own apartment. He was almost twenty-six years old, the age when young adults in the US can no longer be covered by their parents’ health insurance. This transition would be complicated, considering he had been diagnosed with Type 1 diabetes two years earlier.
Alec had planned to become a paramedic, but with his disease, that was not an option. He took a job managing a restaurant—his new plan was to open his own sports bar. In the meantime, however, his job did not provide insurance. When Alec’s mother, Nicole Smith-Holt, started to research the health care plans available to her son, she was stunned. He would need to pay more than $400 per month in premiums (nearly $5,000 a year), and insurance would kick in only after he had paid another $8,000 out of his own pocket. Alec made less than $40,000 per year, meaning roughly $13,000, more than a third of his income, could go to health care. Ultimately, Alec decided to forgo insurance and pay for his insulin out of pocket. Neither he nor his mother realized the cost of that choice.
More than 90 percent of the insulin market is controlled by three companies: Denmark’s Novo Nordisk, France’s Sanofi-Aventis, and Eli Lilly and Company in the United States. Despite the appearance of competition, numerous lawsuits have alleged that the trio operate as a cartel to keep the price of insulin artificially high. On more than a dozen occasions, the companies raised the price of their drugs in near lockstep. In 2001, a vial of insulin cost an average of $14. In 2019, that same vial cost $275. Across the United States, the scarcity of affordable insulin has had dramatic effects. Insulin thefts—from pharmacies or even people’s doorsteps—have been on the rise in recent years. And nearly a century after insulin’s inventors brought children back from the brink of death, data suggests that people are once again dying because they cannot access the drug. A Yale University study found that a quarter of diabetics in the United States used less insulin than they were prescribed due to its cost. Between 2017 and 2019, researchers found, thirteen people died after rationing their insulin.
One of them was Alec Smith. After talking with Alec’s girlfriend, the coroner, and the detective, his mother, Nicole, realized he had been holding off on buying his medication until his next paycheck. At first, Nicole said, she was angry at Alec for not asking his parents for help. She was also angry at herself for failing to recognize the warning signs. It was not until she went public with her story that she realized Alec’s experience was not unique. Other people began to write to her, describing loved ones in their midtwenties managing diabetes diagnoses, trying to support themselves for the first time as young adults and paying for their newfound independence with their lives.
Like Alec, Jesy Boyd was living in an apartment on his own for the first time while managing his Type 1 diabetes. Since Jesy was only twenty years old when he moved out, he was able to stay on his family’s insurance. Still, he covered the cost of the medication himself with his earnings as a restaurant manager. “He was trying to manage everything on his own,” said Jesy’s mother, Cindy Scherer Boyd.
Cindy realized Jesy was struggling to pay for his insulin in the spring of 2019. He had asked his parents to pick up his insulin prescription and bring it to his apartment. When they got there, they found Jesy incoherent. They took him to the hospital, where he was treated for a blood sugar spike and released. Jesy assured his parents that he would never let his supply get so low again.
But the following month, Cindy heard that Jesy had called in sick to work. She tried calling him, but he did not answer. Cindy hurried to his apartment with a friend, and there they found Jesy dead. In his backpack was an application for an electrician job. In the obituary, the family wrote that Jesy had died from complications of Type 1 diabetes. Not long after, Cindy received a message from Nicole Smith-Holt, asking if Jesy had been rationing his insulin.
Nicole, whose previous political activism had been limited to voting, became an outspoken advocate following her son’s death, and she recruited other parents to join the fight. In 2019, she helped organize a demonstration outside Eli Lilly’s Indianapolis headquarters, protesting the high cost of insulin. Nicole stood in the middle of the street, blocking traffic and reading off the names of people who had died after rationing their insulin. Among them were her son and Jesy. She was eventually arrested.
Nicole and her husband, James, worked to get the Minnesota legislature to pass a law that limited insulin co-pays to thirty-five dollars when an uninsured patient’s supply was running low. The Alec Smith Emergency Insulin Act passed in April 2020. It was a victory, but Nicole knows that state-level reforms will not fix the underlying system that lets medicines like insulin and EpiPens become so expensive.
“Pharmaceutical companies can get away with it because we don’t have any laws in place that restrict them or prohibit them from raising the price to whatever they feel the market will bear,” Nicole said. “When you’re presented with a pay-or-die situation, typically somebody is going to go with, ‘I’m going to pay it, and I’m going to do whatever I have to do to pay for it.’ Just because the companies can, they do.”
This point is worth stressing. The state of the insulin industry is a far cry from the three-dollar patent sale a century ago, or even the postwar era when George Merck could say “medicine is for the people … not for the profits.” The drastic contrast reflects a broader change that swept through the entire business world over the last half century. The power and size of companies have skyrocketed since the mid-20th century, as has their influence on our everyday lives. This is true both within the United States and across the world as a whole. The world’s major companies have grown larger, more concentrated, and more profit-motivated, while governments and individuals have seen their power fade. The balance has been shaken so thoroughly that in recent years even business leaders, who benefit massively from this state of affairs, have begun to lament the power they have so rapidly gained.
The last fifty years have witnessed a quiet reimagining of what companies are for and how they operate. But what brought us to this point?
* * *
THE HISTORY OF the social contract is a story of power and how it redistributes over time. Throughout history, the rights and responsibilities of capital, labor, and the state have mostly been determined by whichever group possessed the most power and could set the terms without overplaying its hand to the point of provoking unrest or revolution. In the agricultural societies of the past, sovereign rulers exercised near-absolute authority over the lives and livelihoods of their lords and the peasantry. During the Industrial Revolution, the scales tipped toward the wealthy and politically connected owners of capital. In the early 20th century, American and European workers reined in the power of corporations through labor unions and the ballot box. Today, power has concentrated in the private sector yet again.
What caused this shift? How have corporations amassed so much size and power over the last half century? If you want to understand the trigger point, you can look to a single idea, a single sentence even.
After the calamities of the Great Depression and World War II, the economy began to boom throughout the United States and Europe. But strong checks were placed on it by both organized labor and government regulators, who were all too familiar with the cost of monopolies and stock-market crises. In this context, the vast majority of businesses saw themselves as fitting within a clear niche in society. Companies were expected to turn a profit while also working to improve the well-being of their employees, support the communities where they did business, and generally serve the public good.
Yet not everyone thought this model was sensible. In 1962, in his book Capitalism and Freedom, economist Milton Friedman wrote, “There is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.”
This idea marked a dramatic departure from the existing social contract—a world where a lifesaving patent was sold for a few dollars and where George Merck could speak of profit as a secondary motive for doing business. But in Friedman’s eyes, such decisions were inefficiencies, flaws in the market. And according to the theories of Friedman and his colleagues at the University of Chicago, the world would benefit several times over if every individual pursued maximum profit and then reinvested the gains. In Friedman’s view, a company’s only loyalty was to its shareholders, and any leadership decision that kept a dollar out of shareholders’ pockets was mismanagement. In time, this profit-optimizing philosophy would come to be known as shareholder primacy.
The idea did not catch on right away. But it began resonating with a core group of supporters in the 1970s, when the booming postwar economy began to stagnate. Economists pointed to government regulation and inefficient management as the culprits, and the discontent opened the door for Friedman’s ideas to circulate. Then, in the 1980s, his philosophy hit the mainstream. Shareholder primacy melded perfectly with the Reagan and Thatcher eras, providing an intellectual cornerstone for deregulation and trickle-down economics. Clear opposition to the New Deal–era checks on corporate power soon emerged. These critics argued that government had kept a lid on business for too long: managers of big companies had grown complacent and had stopped driving profits, and the whole economy was stagnating as a result. If companies were turbocharged to maximize profits, it would jolt the whole country, and the whole world, into growth. To get there, all you had to do was prioritize profit. A pithier version of Friedman’s big idea eventually swept through the culture, expressed by Gordon Gekko in Oliver Stone’s Wall Street: “Greed is good.”
The effect of shareholder primacy was to draw a stark line between a company’s shareholders and its stakeholders, defined as every other party affected by its business: its employees, its community, its country, its customers, and the environment. Under the new model, shareholder profits came first and foremost, and any significant investment in other stakeholders became a liability.
The mid-1980s saw the rise of hostile takeovers and corporate raiders, who were in many respects the vanguard of shareholder primacy. The raiders would identify distressed or stagnant companies and buy up equity until they gained control. Then they would reorganize every division toward maximizing returns to shareholders, rooting out any inefficiencies they could find. This often meant cutting jobs, relocating headquarters, selling off real estate, and taking on loads of debt—using any tool in the arsenal to channel assets toward a short-term return on investment. The prospect of a hostile takeover struck fear into the hearts of corporate leaders. A series of legal decisions made takeovers increasingly difficult to resist, especially when shareholders stood to turn a profit, and many companies ultimately shifted toward shareholder primacy simply to avoid becoming a target.
Throughout the 1980s, mergers and acquisitions swept through the world’s largest economies, aided by increasingly laissez-faire attitudes toward regulation and antitrust. The United States had been the world leader in antitrust, standing firmly against monopolies since the early 20th century, when it launched hundreds of lawsuits against large corporations to break up Gilded Age monopolies like Standard Oil and U.S. Steel. In the aftermath of World War II, policy makers established a new wave of antitrust after the world saw how monopolies had contributed to the rise of authoritarian regimes in Japan, Italy, and Nazi Germany. During the drafting of new constitutions and laws around the world, the United States often encouraged (and, in the case of Japan, imposed) tough new antitrust laws. But in the 1970s, a new school of thought came to dominate the debate around monopoly and competition.
The newer theory, popularized by the judge and legal scholar Robert Bork, held that economic concentration was a bad thing only if there was demonstrable harm in the form of higher prices for consumers. As long as monopolies charged fair prices, they were perfectly acceptable. Bork’s narrow interpretation of antitrust law aligned perfectly with the Friedman doctrine and the political atmosphere of the 1980s. Government watchdogs began to bring fewer suits against large companies, and Bork’s theory became the prevailing basis for the government’s approach to competition. All the while, the private sector grew steadily more consolidated through mergers and acquisitions. Like many people, I watched the local bank where I had an account get purchased by a larger regional bank, which was then purchased by a national bank, which in turn merged with another national bank. The same thing happened across sectors of the economy.
In this same period, as we will explore further in the next chapter, companies also began to recognize the benefits of expanding their influence in Washington. Under Friedman’s philosophy, businesses could maximize their profits as long as they followed the letter of the law. But through lobbying and unlimited political donations, companies could remake the boundaries by shaping the laws themselves, gaining outsized returns for relatively modest allocations of capital.
Each of these trends amplified the others, and the result has been a rapid expansion of corporate size and power since the 1970s. Shareholder primacy unleashed the ugliest face of capitalism. In theory, profit in the hands of shareholders would benefit everyone by increasing the overall efficiency of the economy and freeing up excess capital to be reinvested in communities.
But in practice, it squeezed out other stakeholders like employees, local communities, and the environment. When the economy was booming in the decades after World War II, just about every medium-sized or larger city had a major corporate headquarters. Company executives sat on the local boards. They supported everything from after-school programs to local arts and sports. The children of the CEO went to the same schools as the children of middle managers. If a downturn hit, the company did not lay off employees as soon as a consultant or MBA determined it was balance-sheet optimal; it waited until the last possible moment, because company and community were inextricably tied, and each felt a long-term responsibility to the other. This is how the social contract worked in a hyper-local context.
But once shareholder primacy emerged, the thinking changed. The 1980s wave of mergers and acquisitions led to widespread layoffs in smaller cities, the uprooting of corporate headquarters to tax-optimized locations, and whole local economies spiraling into freefall. I saw this where I grew up in Charleston, West Virginia, as the city’s banks, mining companies, and chemical companies were swallowed up by companies on the coasts. Under shareholder capitalism, if moving headquarters promised an economic benefit, a company that had spent decades growing alongside its community would up and leave, or at least change where it was nominally headquartered and paying taxes. In the United States, the result was that two-thirds of all job growth took place in just twenty-five cities and counties; the same pattern played out in Europe.
* * *
FAST FORWARD TO the present, and we have seen the first part of Friedman’s vision pan out: the world’s largest companies have posted remarkable profits, and shareholders have seen enormous returns. But the second part—the promise that these profits would come back around and benefit everyone—never arrived. We have seen all the growth in the last few decades go to senior executives and shareholders, not to workers. We have seen money drain out of individual communities where robust local economies had existed, routing instead to financial hubs and shareholders. We have seen major centralization and—instead of the healthy competition promised—a new age of monopoly.