Introduction
This is a book about the history of ideas in a place that likes to pretend its ideas don’t have any history. The tech industry is largely uninterested in the kinds of questions this book raises; tech companies simply create a product and then look to market it. Mark Zuckerberg put it as follows: “I hadn’t been very good about communicating that we were trying to go for this mission. We just showed up every day and kind of did what we thought was the right next thing to do.” The mission, the big question, became important only later. Only in hindsight did he have to ask himself: How do I explain this to journalists? The U.S. House of Representatives? Myself? At the same time, Zuckerberg’s quote is meant to imply that there had been such a mission all along, that showing up every day and working on a good, monetizable product was never all Facebook was about. What Tech Calls Thinking concerns where tech entrepreneurs and the press outlets that adore them look once they reach the point at which they need to contextualize what they’re doing—when their narrative has to fit into a broader story about the world in which we all live and work.
As Silicon Valley reshapes the world, journalists, academics, and activists are spending more time scrutinizing the high-minded ideals by which companies like Google and Facebook claim to be guided. As the journalist Franklin Foer put it, Silicon Valley companies “have a set of ideals, but they also have a business model. They end up reconfiguring your ideals in order to justify their business model.” This book asks where companies’ ideals come from. The question is far from a sideshow: It concerns how the changes Silicon Valley brings about are made plausible and made to seem inevitable. It concerns the way those involved in the tech industry understand their projects and the industry’s relationship to the wider world. It isn’t so much about the words that people in Silicon Valley use to describe their day-to-day business—interesting books could be and have been written about the thinking contained in terms like “user,” “platform,” or “design.” Rather, it is about what the tech world thinks it’s doing when it looks beyond its day-to-day business—the part about changing the world, about disrupting X or liberating Y. The stuff about Tahrir Square protests and $27 donations. What ideas come into play then? And what is their provenance?
Indeed, the very fact that these ideas have histories matters. Silicon Valley is good at “reframing” questions, problems, and solutions, as the jargon of “design thinking” puts it. And it is often deeply unclear what the relationship is between the “reframed” versions and the original ones. It’s easy to come away with the sense that the original way of stating the problem is made irrelevant by the reframing. That perhaps even the original problem is made irrelevant. Some of this is probably inherent in technological change: it’s hard to remember the history of something that changes how memory works, after all. In the 1960s, the communication theorist Marshall McLuhan (1911–1980) proposed that “the effects of technology do not occur at the level of opinions or concepts, but alter sense ratios or patterns of perception steadily and without any resistance.”
But clearly that’s only part of the story. To some extent, the amnesia around the concepts that tech companies draw on to make public policy (without admitting that they are doing so) is by design. Fetishizing the novelty of the problem (or at least its “framing”) deprives the public of the analytic tools it has previously brought to bear on similar problems. Granted, quite frequently these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine. But this practice tends to disenfranchise all of the people with a long tradition of analyzing these problems—whether they’re experts, activists, academics, union organizers, journalists, or politicians.
Consider how much mileage the tech industry has gotten out of its technological determinism. The industry likes to imbue the changes it yields with the character of natural law: If I or my team don’t do this, someone else will. Such determinism influences how students pick what companies to work for; it influences what work they’re willing to do there. Or consider how important words like “disruption” and “innovation” are to the sway the tech industry holds over our collective imagination. How they sweep aside certain parts of the status quo but leave other parts mysteriously untouched. How they implicitly cast you as a stick-in-the-mud if you ask how much revolution someone is capable of when that person represents billions in venture capital investment.
This is where the limits of our thinking very quickly become the limits of our politics. What if what goes by the name of innovation is ultimately just an opportunistic exploitation of regulatory gaps? And before we blame those gaps, keep in mind that regulation is supposed to be slow-moving, deliberate, a little bit after-the-fact. A lot of tech companies make their home between the moment some new way to make money is discovered and the moment some government entity gets around to deciding if it’s actually legal. In fact, they frequently plonk down their headquarters there.
Take Uber and Lyft, for example. The two ride-share giants are in many ways more agile than the taxi services they’re slowly destroying, and they are cheaper for the consumer, and popular with large investment funds, for one primary reason: their drivers are independent contractors who have no bargaining power, no benefits, and very few legal protections. Everything these companies do—from the rewards programs they set up for their contractors to the way the algorithms that assign rides to drivers seem to punish casual driving—is actually designed to nudge their drivers inch by inch toward a full-time employment that they aren’t allowed to call full-time employment. The moment this state of affairs is recognized, all kinds of rules will apply to these companies, making them even more unprofitable and likely putting them out of business. But until such a moment, the companies will explain to you ad nauseam how they’re different and new and how you are missing the point when you apply established categories to them.
This book is about concepts and ideas that pretend to be novel but that are actually old motifs playing dress-up in a hoodie. The rhetoric of Silicon Valley may seem unprecedented, but in truth it is steeped in some pretty long-standing American traditions—from the tent revival to the infomercial, from predestination to self-help. The point of concepts in general is to help us make distinctions that matter, but the concepts I discuss in the chapters that follow frequently serve to obscure such distinctions. The inverse can also be true: some of the concepts in this book aim to create distinctions where there are none. Again and again we’ll come across two phenomena that to the untrained eye look identical, but a whole propaganda industry exists to tell us they are not. Taxi company loses money; Uber loses money—apparently not the same. The tech industry ideas portrayed in this book are not wrong, but they allow the rich and powerful to make distinctions without difference, and elide differences that are politically important to recognize. They aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.
* * *
In the following chapters, I will try to show not only how certain ideas permeate the world of the tech industry, but also how that industry represents itself to a press hungry for tech heroes and villains, for spectacular stories in what is ultimately a pretty unspectacular industry. A study like this one almost by necessity has to foreground the highly visible founders, funders, and thought leaders. To find out how ordinary coders or designers think, to say nothing of all the folks making up the tech industry who aren’t customarily thought of as belonging to that industry, is a very interesting project in its own right, but it isn’t the project of this book. For better and for worse, the media has a fixation on tech thought leaders. It seems to need certain figures to be able to spin its narrative. Peter Thiel, Elon Musk, Steve Jobs, and others like them knew how to manipulate that—something they learned from another California global export: 1960s counterculture.
Unfortunately, my own spotlighting of these leaders means this book risks recapitulating one of the central misperceptions of the tech industry; it’s anything but clear whether figures like Mark Zuckerberg, Elon Musk, or even Steve Jobs really embody the way the tech sector understands itself. But what is clear is that they represent the way the tech sector has communicated with the outside world. They are easy identificatory figures when one is dealing with an industry that can be disturbingly amorphous and decentralized. (This is, after all, how the pars pro toto “Silicon Valley” has functioned in general.) They are creatures of the media, inviting us to project our fears, giving shape to our hopes. Most important, they encourage us to think that someone, whether charismatic or nefarious, knows where the journey is going. Visibility in the press is not, of course, the same as representativeness. Making a Theranos movie is not cool. You know what’s cool? Making an Elizabeth Holmes movie.
Giving these ideas their history back is central to any attempt to interrogate the claims the tech industry makes about itself. But there’s another question that we can ask once we’ve figured out where these ideas come from: Why were these ideas convenient to adapt, and why was it convenient to forget their history? The story of these ideas intersects with the great transformations that information technology has undergone in the last seventy years. Coding went from being clerical busywork done by women to a well-paid profession dominated by men. In recent years, competencies around technology went from highly specialized to broadly distributed, to the point where “learn to code” has become a panacea for any and all of the ravages of capitalism.
And the environment around tech has changed: the government went from basically owning the tech industry to struggling to regulate it; computer science went from an exotic field seeking to establish itself to one of the most popular university majors. The cultural visibility of the sector and its practices has transformed even since the film The Social Network came out in 2010. Perhaps Foer had it only half right: When the companies of Silicon Valley reconfigure your ideals, it’s not just in order to sustain their business model. It’s also to avoid cognitive dissonance in their thinking about gender, race, class, history, and capitalism.
Many of the ideas traced in this book had analogous trajectories. For one thing, they emerged from a similar era. They were given their definitive shape in the sixties, frequently by the counterculture. They took that shape outside of the university, though they were always on its periphery. As the management-science scholar Stephen Adams has pointed out, a lot of the institutions of learning and research featured in this book grew out of a desire to stanch a persistent brain drain of bright young people moving from the West Coast eastward. Around these institutions sprang up a network of highly educated but also highly idiosyncratic thinkers bent on shaking up the system. They were the ones who injected these ideas into the emerging discourses around a burgeoning industry.
The early fate of these ideas was bound up with institutions that had little to do with commerce: from research centers to hippie retreats, from universities to communes. The fact that the people interested in these ideas made a lot of money was almost beside the point: they founded companies because they thought of them as spontaneous, communal correctives to the overly stolid institutions of government and the university. But before long, shibboleths like “communication” and “big data” circulated less and less because of their cultural cachet and more and more because of the vagaries of the business cycle. What hasn’t changed is that formal education seems secondary to these ideas—but where that once meant dropping out to pursue niche projects, it soon came to mean dropping out to make lots of money. The ideas that tech calls thinking were developed and refined in the making of money.
And what tech calls thinking may be undergoing a further shift. Fred Turner, a professor of communication at Stanford, traced the intellectual origins of Silicon Valley in his book From Counterculture to Cyberculture (2006). The generation Turner covered in that book came of age in the sixties, and if they made money in the Valley, they’re playing tennis in Woodside now; if they taught, they are mostly retiring. The ethos is changing. “As little as ten years ago,” Turner told me, “the look for a programmer was still long hair, potbelly, Gryffindor T-shirt. I don’t see that as much anymore.”
The generation of thinkers and innovators Turner wrote about still read entire books of philosophy; they had Ph.D.s; they had gotten interested in computers because computers allowed them to ask big questions that previously had been impossible to ask, let alone answer. Eric Roberts is of that generation. He got his Ph.D. in 1980 and taught at Wellesley before coming to Stanford. He shaped two of the courses that together are the gateway to Stanford’s computer science major into the form they take today. CS 106A, Programming Methodology, and 106B, Programming Abstractions, are a rite of passage for Stanford students; almost all students, whether they are computer science majors or not, enroll in one or the other during their time at the university. Roberts’s other course was CS 181, Computers, Ethics, and Public Policy. Back in the day, CS 181 was a small writing class that prepared computer scientists for the ethical ramifications of their inventions. Today it is a massive class, capped at a hundred students, that has become one more thing hundreds of majors check off their lists before they graduate. Eric Roberts left Stanford in 2015, and today teaches much smaller classes at Reed College in Portland.
As Roberts tells it, the real change happened in 2008, though “it almost happened in the eighties, it almost happened in the nineties.” During those tech booms, the number of computer science majors exploded, to the point where the faculty had trouble teaching enough classes for them. “But then,” Roberts says, “the dot-com bust probably saved us.” The number of majors declined precipitously when, after the bubble burst, media reports were full of laid-off dot-com employees. Most of those employees were back to making good money again by 2002, but the myth of precariousness persisted—until the Great Recession, that is, which was when what Roberts calls the “get-rich-quick crowd” was forced out of investment banking and started looking back at the ship they had prematurely jumped from in 2001. When venture capital got burned in the real estate market and in finance after 2008, for instance, it came west, ready to latch on to something new. The tech industry we know today is what happens when certain received notions meet with a massive amount of cash with nowhere else to go.
* * *
David M. Kelley, a Stanford professor and the founder of the design company IDEO, is one of the apostles of design thinking. He has shaped the way Silicon Valley has presented and marketed itself since at least the 1980s. He is a founder of the Hasso Plattner Institute of Design at Stanford, also known as the “d.school,” and has been a fixture at TED Talks and developers’ conferences. In one TED Talk back in 2002, Kelley gave a series of examples of how design thinking was changing the tech industry—and an inadvertent example of what tech calls thinking. For a long time, Kelley told his audience, tech companies were “focused on products or objects.” But in recent years, “we’ve kind of climbed Maslow’s hierarchy a little bit,” focusing more on “human-centeredness” in design.
But why mention Maslow’s hierarchy? Maslow’s famous model tried to explain how certain human needs can emerge and be satisfied only after other, more fundamental needs are met. The idea Kelley is describing, by contrast, is indeed one that many philosophers—the entire school of phenomenology, for one—have wrestled with. But Maslow, specifically, did not. In context, all Kelley seems to be saying is that designers used to think about objects in one way, and now they have begun thinking about them in another, more complex way, because they now design “behaviors and personality into products.” They have recognized that how people relate to objects is more complicated than they once supposed. So far, so good. But why invoke the psychologist Abraham Maslow (1908–1970) to make that point?
This is where we start getting a sense for what tech calls thinking. Kelley doesn’t say, “The philosopher Martin Heidegger proposed that human subjectivity can be understood only as a mode of being-in-the-world,” or anything like that. He does not go for a piece of philosophy that is apropos but that might alienate the audience at a TED Talk. He adduces a bit of pop psychology that has become a kind of byword since Maslow came up with it in 1943. And the way he brings Maslow up seems to matter too: Kelley doesn’t stop to cite or to explain in detail; a quick, ornamental wave of the hand is enough. Many of the ideas in this book function like this—they are held in common, broadly shared and easily pointed to, even if no one takes the time to figure out where they come from or whether they are correctly applied. Many ideas like this are held by people who don’t actually subscribe to the philosophy from which they come—or do subscribe to it and don’t realize it.
Another thing made Maslow’s hierarchy a convenient shorthand in a TED Talk: it’s an idea with strong regional ties. Maslow spent some of the last years of his life in California. He became important at Esalen, the New Age retreat along the Pacific Coast Highway; he worked for a private foundation in Menlo Park, just up El Camino Real from Stanford. One thing that surprised me in writing this book is just how local these kinds of ideas are. There are thinkers in this book who, had they not relocated to the Bay Area, or, in the case of Maslow, literally pulled into the driveway of the Esalen Institute, surely wouldn’t be looming nearly as large in the reservoir of tech’s received ideas. There may be some local pride at work in Kelley’s mentioning Maslow. There may be a sense of genealogy, a line of tradition being drawn from New Age psychotherapy and leftist intentional communities to the TED Talk.
Still, the localism is pretty remarkable, given that one of the great achievements of this industry has been to open up the world in hitherto-unimaginable ways. But it is a local story. The tech industry recruits from specific milieus, nations, schools, social classes, and so forth. The age spread, especially at the smaller and fast-growing companies, can be extremely limited, and many of the older figures these companies interact with (the venture capitalists and lawyers, for example) are basically the same people, five years older. Silicon Valley loves the words “everyone,” “universal,” and “people,” but what it usually means is “people I went to school with,” “my housemates in East Palo Alto,” or “my four immediate subordinates.” The universality that their business model pushes them toward exists in tension with the fact that they actually know very few kinds of people.
It’s also characteristic that, even though he teaches at Stanford, Kelley didn’t invoke a university professor. Maslow was an academic, but he worked at a private research institution in the Valley. What tech calls thinking is done largely outside, but within shouting distance of, the university. One of the more famous protagonists of tech’s love-hate relationship to academia is Peter Thiel, who made a fortune by cofounding PayPal and investing in companies like Facebook, and who is famously wary of higher education. The Thiel Fellowships pay young people not to go to college, and Thiel publicly asserts that he thinks the university is a bubble—but he nevertheless spent almost a decade at Stanford, where he received both a bachelor’s degree and a law degree, and, when he visits, is a welcome presence at the Faculty Club. Elon Musk likes to portray himself as having an autodidact’s mind, and, indeed, he dropped out of a Ph.D. program at Stanford—but he too spent a lot of time at universities, in both Canada and the United States. The ideas in this book are university-adjacent, academish. They cannot free themselves of the institution any more than they can be made fully at home there. And the mode by which they are best acquired is the subject of the first chapter: dropping out.