1
Relative to the Observer Who Is Also a Liar
In the mid-1990s, Anna, age sixteen, had brain surgery to alleviate her epilepsy. Because the brain does not have pain receptors,1 she was allowed to remain awake the whole time, with only a tiny amount of local anesthetic to numb her scalp. During the procedure, the doctors and attendants asked Anna a series of questions meant to keep her talking and, as they probed her brain with electricity and micron-thin blades, hoped that she would not stop.
Though the language parts of her brain were roughly in the same places as they are in all other human brains, the brain moves with each pulse of blood, and every brain, like every coastline, has its own slightly different contours. If Anna stopped speaking at a certain spot of stimulation—because the electrical current could activate cells relevant to thought and speech—the surgical team knew they were in an area important to language and thus one to be avoided by the surgeon’s scalpel.
Oddly, they did not ask Anna while her brain was being probed with electricity to write poetry and stop when the poetry became bad. They did not ask her to intuit a response to a fictional domestic dispute and stop when her response became improbable or immoral. They did not ask her how far in front of her eyes her visual imagination extended and stop when the distance became uncomfortably far or the description too Daliesque or she suddenly lost perspective. Instead, they asked her to do a variety of seemingly mundane tasks, including naming objects, reading, counting, and flexing her hands and toes.
Many imagine the human brain as a series of lit-up wires connected together like telephone poles, strung inside a snow globe made of bone, each neuron brightening like a candle when it has something big to say or wants its owner to notice something. It is not. There is no light, for one. The brain is messy and venous and dense and soaking wet, all the time, and is about as heavy as a hardback copy of Infinite Jest.2 It is not designed, perfected, or neat. It is a thrift-store bin of evolutionary hacks Russian-dolled into a watery, salty piñata we call a head.
If the surgeons had poked Anna’s brain with the tips of their fingers, which they would never do but which surely has been done, her brain would have had the give of a very soft Brie cheese. The surgeons could thread a small, loose wire straight through it, but never would, because along those tracks could be memory, identity, and bits and pieces of the girl’s sense of her teenage self, which is to say the accumulation of her preferences. Hanging like a furled sail off Theseus’s ship,3 a surprisingly tensile outer shell called the dura mater (Latin, and Freudian, for “tough mother”) would be visible near the girl’s head, as seen by those in the room, though not by Anna, who just wanted her seizures to stop.
While operating, the surgeons eventually found a spot on her brain that, when stimulated, caused her to laugh, a discomfiting sound in any operating theater. More technically, the surgeons used their fancy electric wand to produce an electrical current that, because the brain also uses electricity to communicate many of its own messages, caused certain neurons in her cortex to send a signal, mostly indistinguishable from a natural one, to her muscles, coordinating their action, and it was these contractions, bouncing air between them, that produced a sound perceived by the others in the room as “laughter.”
Strangely, when asked the source of her laughter, Anna gave a different answer each time. The answer depended on her immediate surroundings and often involved an aspect of a picture she was viewing or a person near her (“the horse is funny,” “you guys are just so funny…standing around”), even though the correct answer, involving the surgeon’s electrode, eluded her. Instead, she confabulated the reasons behind the laughter and mirth, because the brain abhors a story vacuum and because the mammalian brain is a pattern-recognizing monster, a briny sac full of trillions of coincidence detectors that are only useful if there are connections to be found between things. Even a wrong pattern, a guess, is at least a pattern to learn against.
Though she did not receive general anesthesia, those before and after her in the same operating room did, and a full description of her awake brain must also explain one of the most remarkable things about consciousness: it can be silenced by anesthesia, in part or in whole, only to recover fully again in a few seconds, minutes, hours, or days. That there is no one kind of anesthesia to turn the consciousness dial to zero means that, although all conscious brains may be alike, the conscious brain has many, and sensitive, failure modes. Consciousness, like Tolstoy’s happy families, has only one way of adding up to a whole but, like his unhappy ones, many ways of falling apart: xenon gas, propofol, isoflurane, cocaine, nitrous oxide, barbiturates, benzodiazepines, and ketamine, each with a different chemical profile and mechanism, can all silence consciousness.
Some anesthetics, like propofol, which is sometimes called the “milk of amnesia” because it is white and oily and repels water, can cause bizarre effects. People who were crying before anesthesia came out of it, hours later, again (still?) crying. These effects are less like a pause button for consciousness and more like a needle lifted off a spinning record.4 When waking up from anesthesia, or from a coma, during a state sometimes called post-traumatic amnesia, people will often have strange, lewd, or primal behaviors, speech, or urges. Legally and socially, people are not often held responsible for what they do or say during this time, which makes one wonder why we ever are. Consciousness is each and every one of these states equally. Any good theory must fully account for each of them.
That consciousness disappears nightly is another of its quirks.5 Though nobody quite understands what sleep is, we know what it looks like and that anesthesia does not induce it. Some animals can sleep with only half their brains at a time, allowing basic functions of consciousness to persist so that they don’t fall out of the sky or get eaten. Those in the water who don’t fear being eaten, like humpback whales, often sleep vertically and in groups, like the large towers of an aquatic city, for less than ten percent of their day. Sleep concerns are highly specific: birds dream of bird problems, whales of whale problems, dogs of dog problems.
Those who can lucid dream, which is a kind of awakeness within dreaming, or an awareness that one is dreaming, can be trained to move their eyes while in the lucid state, during rapid eye movement sleep, and these movements under the eyelids can be detected by an infrared camera.6 Interestingly, this means one can make a code, like those in a video game or medieval monastery, that lets one break the subjective fourth wall and communicate with the great sleep researchers in the sky. For example, a person can learn to, if lucid, move their eyes in a certain pattern and then count to ten, after which they move their eyes in that same pattern again, to mark the end of their test. Remarkably, some people take around ten “objective” seconds to do so, which implies that their subjective, incepted time—the waking dream within the dream—not only has a timekeeping device but that the device may be the same one we always use.
Where did this conscious, clock-making observer, lucid or otherwise, who can wake up and take in their immediate surroundings—to whom, in physical terms, the speed of light sticks, and who only by observing can collapse a wave of light into a position and determine whether Schrödinger’s cat is dead or alive—come from?
From the oceans, of course. The most useful thing that land offers that the oceans do not is the large distance at which things can be detected. Visually, swimming in water is like driving in a milky fog, which reduces the range of even the best mammalian eyes. A small bacterium can sense, crudely, within a small radius around itself—we can call this “sight” if it can respond to a light source, or “smell” if it can detect an unwanted chemical toxin nearby—and the total of its sensory range, the sum of all its inputs, stretching through and combined across all its senses, is called its sensorium. The experience of any underwater creature paying attention is an experience of underwater objects or other creatures popping into frame at such high speed that an underwater sensorium needs timing closer to reflexes than contemplation. Even in the clearest water, light scatters and degrades over only a few meters, which means there isn’t much need for a brain to come up with long-term planning, because what would be the use?
Thus there isn’t much need for clock making beyond intervals of a few seconds, which means that there is no need for the brain to whir up an emergency motor-response plan for the shark cresting over the horizon of the Adriatic shelf, because there is no way to sense the shark cresting over the horizon of the Adriatic shelf. This usefully constrains the metabolism needed to keep track of the far outer radii of the outside environment and means any need to plan movements is limited to the timing of events with a small, near-reflexive range.
Hiding in the center of these plans is the observer, the conscious creature, who is just an accumulation of movement preferences and plans trapped inside a sensorium, keeping track of what it thinks the objects around it are and what it might otherwise do with itself.
The great move of life onto land from the milky oceans changed the range of timing that the newly landed needed to care about. Being able to see farther through the crisp air, and with new eyes to boot, meant that prey and their predators alike had to plan to move themselves across alternately sparse and cluttered landscapes in order to find, and not be, food. To plan, one needed a sense of time, so that there was something onto which the plans could unfurl; for there to be a sense of time, there had to be a timekeeper. Thus the timing that mattered most for sensing, predicting, and planning depended on a creature’s sensorium, but the exigencies of land, like gravity and tripping, made it suddenly necessary to plan seconds and minutes ahead.
In a kind of spherical symmetry, minutes of forward planning (imagination) required minutes of backward recall (memory), and, like a balloon inflating evenly, and temporally, on all sides, a landed creature’s attention needed to expand into the future and, by the same amount, into the past. To know what a lion cresting the African horizon will do, one must be able to keep track of what similar-looking creatures once did after cresting similar-looking hills.
Mammals like dolphins or whales, which crawled back into water after a brief stint as hippos, saddened by how murky it was, used all the vocal tricks learned on land to re-create through echolocation and sonar, as much as possible, the range of visual distance the eyes granted.7 Because some sound waves travel through parts of the ocean almost as far as light waves travel through air, and because a brain can be thought of as a tool, like any other, to make sense of and expand an animal’s sensorium and to efficiently use the information, it seems clear that aquatic mammals have successfully re-created the benefits of moving onto land.8
On land, mammals see to the horizon three miles away, but in the sea, they hear it. On land, cave-dwelling bats and two also-cave-dwelling bird groups evolved echolocation, a kind of sonar: the ability to create sound and understand, from the way it bounces back, what lies ahead. These echolocating land species, most or all of whom dwell in lightless caves, faced a visual difficulty in the cave similar to that of life underwater, which shows that the brain, as always, does the best it can with whatever information it is given—sound, light, or touch are just fine, if that is all a brain can get.
At birth, the empty brain knows no stories. Seeing is an experience-based inference performed effortlessly and expertly by the adult human brain. The first time your brain lied to you was the second time you opened your eyes. Teenagers with cataracts, blind since birth, upon opening their eyes after cataract surgery and seeing for the first time, experience featureless, depthless, shadowless blobs.9 They can “see” the same number of photons as an expertly seeing adult, but their brains see nothing in the raw stream of light. Their brains have never seen a coincidence before. They have never walked past a brick wall’s corner and noticed the lines of light bend around its edge or watched shadows elongate at dusk or compared, from all angles, sunlight shining through a tree versus the light given off by a tree full of tiny white candles.
All these kinds of stories—the girl’s of her laughter, the blind of their first visions, and the newly awakened of their sleep—are how the brain hides its strange workings from its owner. Its apparent effortlessness comes at some cost. After the Italian explorer Marco Polo spotted a rhinoceros in Southeast Asia while searching for what he believed to be a very real and very profitable unicorn, he wrote that unicorns are “not at all such as we describe them.” His prior knowledge and hearsay about the legendary, valuable unicorn had changed what he saw in those brief moments, because even though most of the physical and behavioral features of the rhinoceros, like its weight, coloration, skin, location, and habits, did not match what Polo knew about the story of unicorns, it did have a single horn. The simple story won. His brain’s personal history had changed what he saw. Likewise, the teenagers who saw for the first time experienced their own personal Promethean myth by stealing light and turning it into knowledge—their brains, like all of our brains once did, got better over time at telling convincing visual stories as they noticed more coincidences among their movements, their brains’ guesses, and the outside world. These stories, however, though they became more convincing and useful, did not necessarily become more true.
2
Like the Rise and Fall of Pinball
There are somewhere between one and 8.5 billion ways a brain can work, which means there are between one and 8.5 billion factorial ways of looking at consciousness.
One way to think about the evolution of a simple brain into a primate brain is to imagine a power station that had to transition, all without ever once shutting down, from coal stoves to steam turbines to electric wires to nuclear to solar to an AI-powered, fuel-agnostic grid. One could easily imagine how dreadful this would be to maintain, with pneumatic tubes sticking out in the wrong places, control panels leading to nowhere, extant-parts lists, corrosive materials, and software incompatibilities.
A colleague once told me he preferred the analogy of a car that, unable to turn off its engine, had to upgrade all the while from a Roman chariot to a Tesla.1 I prefer a third version of the story: pinball, because pinball machines were forced to evolve into both story and storyteller, just as the brain once did en route to consciousness.2 Both are the result of a series of add-ons and user constraints impossible to plan for at the beginning. As such, modern versions of both have legacy strengths and legacy faults.
Like life, the game of pinball is never won but, instead, can be lost less badly at some times than at others. The threshold for what counts for any person as a sufficiently good score—or, as it was once described, the moment during play “when the sun goes down and the stars come out”—is as subjective as the threshold for any one person to live a good life.3 Metaphors tend to stick to pinball machines like the gum on their undersides because every game, also like life, has so much that feels like chance but isn’t and so much that feels like the opposite of chance but also isn’t.
The last universal common ancestor, or LUCA, of all modern pinball machines can be traced to 1871, to a British inventor, Montague Redgrave, who was granted U.S. patent 115,357 for “Improvements in Bagatelle.”4 Bagatelle originated in France, in 1777, at a party thrown for Louis XVI at Château de Bagatelle; it can mean “a thing of little importance,” “a very easy task,” or “a game in which small balls are hit and then allowed to roll down a sloping board on which there are holes, each numbered with the score achieved if a ball goes into it, with pins acting as obstructions.” Redgrave, in his improvements to bagatelle, evolved the game by adding a spring plunger, reducing the size of the ball to a marble, and inclining the field into which the ball is thrust.
Early-twentieth-century pinball games that evolved from bagatelle had none of the trappings of today’s pinball machines, such as flippers, coins, or legs. They sat atop a desk, like large liquor barrels, and one changed the course of the ball as it rolled downward in three distinct ways, as one also does the course of history—slightly, by nudging it; heavily, with all one’s weight; or accidentally, while attempting to do other things. To hinder people from simply picking up the machine and modifying the ball’s trajectory, designers added ungainly legs in the early 1930s and made the machines heavier, which only disadvantaged the weaker or less leveraged players.
And so, in 1934, a tilt mechanism was introduced to these still proto-pinball machines that prevented the player from moving the machine more than a set amount. Machines could also, for the first time, plug into electrical outlets, allowing them to produce lights and sounds and compete with the sensorium dazzles of motion pictures and World’s Fairs, which were still a thing.
The flipper as we know it today was introduced, in 1947, with the game Humpty Dumpty, which had three flippers to a side and which, like the development of multicellularity, the human hand, and the atomic bomb, changed the world overnight. Everything preflipper looked instantly vintage. Flippers turned what had been a game mostly of chance into something that one could be good or bad at, compete at, wager on, fight about, cry over. Humpty Dumpty looked, to the American consumer, the way the bow and arrow must have looked to an Ice Age man after seven hundred thousand years of variations on the hand ax. The “flipper bumper” introduced a way to control Brownian chaos; it meant, contra Newton and the second law alike, that entropy could be slowed and maybe even reversed. The ball had been exclusively a downward-trending thing, but now it could rise, like America, from the ashes of the worst world war man had seen. Unlike death and taxes, the steel ball’s demise was no longer inevitable. Protestant America was introduced, again, to reincarnation.
By the 1950s, there were two major aesthetic and functional styles for the machines, which divided players into those who preferred the symmetric machines and those who preferred asymmetric ones. These early machines were much slower than modern pinball, less concerned with points, and mostly concerned with the completion of small, tactical errands that required step-by-step precision and nonrandom sequences. When solid-state, muted digital computer boards made their way into pinball cabinets, the designers added back the clinks, ratcheting gears, bells, and whistles to satisfy the nostalgia gene for the sounds of a writing-on-the-wall, bygone analog era. Slowly, the game became less about accomplishing a goal and more about the accumulation of arbitrary, outsize points. (Even today, some pinball machines reward points simply for playing, as if rewarding the act of inserting a quarter or pressing START alone, even if a flipper is never flipped.)
As the games became more and more about points and stringing those points into sequences of more points, there was a creative lull in the industry. Everything had been tried at least once and, even more damning, the world outside pinball was becoming a lot more interesting and interactive. Hollywood had just had its golden decade, with Star Wars, 2001, Jaws, Apocalypse Now, and The Godfather. Video games like Pac-Man, Missile Command, and Frogger were mainstream, cheap, social, and had mapped the goals of the species—avoiding predators, survival, crossing the road—onto buttons and joysticks.
And so, in 1986, pinball made one of its final, and riskiest, gambits: it became fiction. The game High Speed introduced a narrative arc and was an instant hit. As easily as most infants know how to grab and suckle, players immediately understood on an instinctive, motor level that the goals of High Speed were to change a stoplight from green to red, run the light, and flee from the police, who gave immediate chase. Suddenly, the ball was not just a ball but a sports car. The player no longer saw, in the oily reflection of the glass top, their own face but rather the face of Bonnie or Clyde. They were, for the first time, not playing with the small, metal ball but as it.
The entire narrative capacity of the human brain to find story in inanimate objects was suddenly brought to bear with every quarter. In the 1940s, psychologists Fritz Heider and Marianne Simmel made a short animation in which simple geometric shapes like triangles, lines, and circles of different sizes would bounce around the screen and occasionally clump, bounce off each other, or follow the other shapes around.5 When viewers were asked to describe what they saw, they described creatures in conflict and told the tale of the shapes with genders, villains, emotions, and moral feats of high heroism in classic story and character arcs. Of course, these conclusions are illusory, as illusory as the stories we tell ourselves about why we laugh after doing so, even if it’s because a surgeon has an electrode in our brains making it all happen. High Speed is to the Heider-Simmel illusion what fentanyl is to morphine and became, in its potency, more of a threat to the belief industry than to the gambling industry, because what was happening in its players’ brains resembled animism more than entertainment.