1. BYPASSING THE BODY
Like generations of neurosurgeons before him, Leuthardt had implanted the clinical grid of electrodes to measure the brain’s action potentials, tiny pulses of electricity neurons emit each time they exchange information with nearby cells. The technique, known as electrocorticography, or ECoG, doesn’t record individual neurons. Rather, the grid’s electrodes pick up the collective activity of the thousands of neurons that lie beneath them, registering their summed rhythms as brain waves.
By tracking Brookman’s brain waves, neurologists could observe when his normal brain activity was interrupted by the beginnings of a seizure. Instead of the normal up-and-down signal of, say, an alpha wave, Brookman’s brain would become erratic as a cluster of neurons began firing in synchronous bursts. The renegade cells would inevitably recruit more neurons to their ictal cause, causing Brookman’s brain waves to grow chaotic as the epileptic storm pulsed across the brain.
It had been crucial during the implantation surgery for Leuthardt to install sensors over the entire seizure focus area. Ample coverage would enable neurologists to divine an epileptic source by noting, essentially, that brain waves first became erratic below a specific electrode. “It’s kind of like a murder mystery,” Leuthardt said. “You’re trying to find the criminal. You can use external studies like MRI and PET scans. Those will tell you the general region—the criminal’s zip code. But now we need to find his address. Electrodes give us very specific localization.”
Using a similar technology, neuroscientists have long listened in on individual neurons with penetrating electrodes. Piercing their hair-thin wires into the brains of monkeys and rats, these scientists spent years searching for repeated firing patterns in individual cells. As the animals performed repetitive gestures like pressing a lever for a juice reward, the researchers found that specific neural patterns were associated with the physical action. The signals were noisy, leaving plenty of room for error, but the patterns were consistent and repeatable: a neuron would erupt in a similar firing pattern each time the animal pressed the lever to receive its juice reward.
Around 2000, however, a handful of researchers began transforming this information into a brain-computer interface: whenever the cell produced the desired firing pattern, the computer would execute a physical command. Of course, most animals were none the wiser and would continue to press the lever to receive their juice reward. But over time, they realized they didn’t need to physically press the lever to get their reward. They had only to think about it.
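The logic of those early closed loops can be sketched in a few lines of code. This is only an illustration, not any lab’s actual decoder: the firing-rate threshold, the time bins, and the “press” command are all invented stand-ins for the patterns the researchers identified.

```python
# Hypothetical sketch of the closed loop described above: a computer watches
# a neuron's firing rate and issues the "press" command whenever the rate
# matches the pattern linked to the lever press. All numbers are illustrative.

def decode(firing_rates, threshold=40.0):
    """Return one command per time bin of recorded spike rates (spikes/second)."""
    commands = []
    for rate in firing_rates:
        if rate >= threshold:          # the firing pattern tied to the lever press
            commands.append("press")   # the computer presses for the animal
        else:
            commands.append("rest")
    return commands

# The animal only thinks about pressing; when its neuron fires fast enough,
# the computer delivers the juice reward -- no physical movement required.
print(decode([12.0, 18.0, 55.0, 61.0, 20.0]))
```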
The clinical application for brain-computer interfaces seemed clear. Physical paralysis is essentially a communication failure between the central nervous system and the branching network of peripheral nerves that radiates from the spine. In healthy bodies, the brain sends signals to the spine (Walk! Sit down!), which in turn relays the details to the peripheral nervous system (Lift the right leg! Bend at the hip!). Paralysis occurs when some relay point along the route stops working—be it through spinal cord injury, amputation, stroke, or some other form of neural damage. The brain may continue to send signals, but the message never arrives.
By the time Leuthardt entered the field in the middle of the first decade of the twenty-first century, researchers had already shown that by sinking individual electrodes into the brains of monkeys, rats, and some humans, they could tap movement at its source. A single neuron provided enough information to create basic computer commands, bypassing an animal’s peripheral nervous system to give subjects modest neural control over machines. In some cases, this meant linking neural patterns associated with moving a joystick to give monkeys direct control over a cursor. In others, it meant controlling a robotic arm or mentally pressing a lever to deliver a juice reward. The control was basic, enabling the animals to move a cursor to the left or the right, or a feeding bar up or down.
But these early researchers were tapping only a handful of neurons. What sort of control could they produce if they harnessed, say, a hundred or even a thousand neurons in a person? Could they give people full control over a computer, enabling them to send e-mail or surf the Internet using only their thoughts? Could they re-create the elegant movements of the human arm? Harnessing thousands of neurons, could researchers craft a full-body exoskeleton for quadriplegics or soldiers? And how about abstract thoughts? Given ample neural access, could we bypass spoken language altogether, doing away with its ambiguities and miscommunications in favor of direct neural exchange? In the realm of memory, could brain-computer interfaces enable total recall? Could they deliver new sensory modes like infrared or X-ray vision? What was to stop these technologies from enhancing our own cognition? Could we selectively stimulate the brain to boost learning?
Those early brain-computer interfaces might have been confined to basic physical commands, but Leuthardt saw in them a union that could fundamentally change our understanding of the brain. “I saw neuroprosthetics in the very early, seminal stages,” he said, “and I thought, this is it. This is the future.”
Leuthardt was not alone. The field was already thick with speculation that scientists could craft a neural augment for people with paralysis. In 1998, an Irish researcher named Philip Kennedy demonstrated that he could endow a man paralyzed from the neck down with rudimentary control of a computer program. One year later, the German researcher Niels Birbaumer used EEG to enable similarly impaired patients to control basic word-processing software, and in 2001 one of the field’s titans, a neuroscientist named John Donoghue, cofounded Cyberkinetics, a neurotechnology company aimed at developing commercial brain-computer interfaces. Other researchers were using electrodes to unlock the brains of monkeys. In one headline-grabbing experiment, Duke University’s Miguel Nicolelis connected the motor cortex of a rhesus monkey to a robot arm in the next room. Using only its thoughts, the animal harnessed the arm to play a simple video game. “At that moment,” Nicolelis wrote, “the cumulative years of research and the hopes of thousands of severely paralyzed people who dreamed of one day regaining some degree of their former mobility became deeply intertwined.”
Still, there was a lot of work to do. These early efforts were a far cry from the sort of always-on commercial device Leuthardt envisioned. And that’s to say nothing of crafting a brain-computer interface, or BCI, to rival the elegance and diversity of biological movement.
What’s more, the interface itself was problematic. Penetrating electrodes might have enabled brain researchers to enter an intimate exchange with the brain’s most basic unit—the neuron—but they were also unreliable. Like the rest of the body, the brain abhors foreign objects, and while the platinum sensors created a close union between mind and machine, it was often short-lived. The brain eventually mounted an immune response, dispatching microglia, astrocytes, and other cellular defenses to cordon off the offending electrodes. Wrapped in successive layers of scar tissue, the electrodes inevitably lost their sensitivity. Signal quality degraded, sometimes in a matter of months, rendering the implant unusable. “There was no way that was going to work,” Leuthardt thought. “If these microelectrodes were not lasting longer than six or seven months, there was no way a neurosurgeon would ever want to put this into a patient commercially.”
Electroencephalography, or EEG, was an option, but surface electrodes had their own problems. It was a rare individual who would be willing to spend his life in what amounts to a sensor-studded swimming cap. More important, though, surface electrodes provided only a hazy portrait of the electrical storm raging inside the skull. Placed directly on the scalp, EEG electrodes can’t always differentiate between the electricity inside the brain and the electrical pulses that animate the scalp. The result is a muddy signal, adulterated with muscular electricity and even noise from surrounding electronics.
At the time, researchers confined themselves to either EEG or penetrating electrodes. Those interfaces were fine for the research lab, but Leuthardt was convinced that if he and his fellow scientists were ever to usher in the age of neuroprostheses, they would need to enter the commercial market, crafting a highly sensitive, accurate interface that wouldn’t degrade over time.
“That’s what got me down the road of ECoG,” he said. Unlike penetrating electrodes, the ECoG grids did not pierce the brain. Rather, they rested on its surface and would likely be more stable. Having direct contact with the brain also meant that, unlike EEG, ECoG signals weren’t as likely to be contaminated by muscular artifacts from the scalp or nearby electronics. It seemed like the Goldilocks zone: more stable than penetrating electrodes, more precise than EEG. “I’ve always seen us as being the bed’s just right in the sense that this one is too invasive, that one is too noisy, but this one is just right.”
If an EEG was like listening to the muffled strains of the neural symphony behind a band shell, penetrating electrodes were like training a microphone on a sole musician or an individual string. ECoG, by contrast, was like listening to a section of the orchestral brain from the first few rows—close enough to tease out the first violins from the second violins.
But here was the real beauty of using ECoG: as a neurosurgeon, Leuthardt already had a built-in population of human research subjects. During the week or so that patients like Brookman were implanted for epilepsy monitoring, they were effectively lying in a hospital bed just waiting to have seizures. The rest of the time? The electrodes simply sat atop the brain, passively recording its electric hum. All the elements were there. Why not use the clinical setup of the epilepsy-monitoring unit to create an entirely new brain-computer interface?
At the time, all but a few neural implants were used for limited periods of time and only in the laboratory. But a neural implant that could pull detailed information from the brain while also sidestepping the glaze of signal-degrading scar tissue? A device like that could form the basis of a commercial implant that would remain in the brain for years. “It became very clear to me that this was the future,” said Leuthardt. “It’s a whole new universe that opens up—one that can change the human experience.”
* * *
To that end, David Bundy arrived at the epilepsy-monitoring unit a few days after Brookman’s surgery with a cartload of electronics. As a graduate student in Leuthardt’s research lab, Bundy was hoping Brookman would don a sensor-studded glove. He wanted him to flex his fingers so he could calibrate the movement to Brookman’s brain activity, the first step in building a BCI.
But Brookman was still recovering from surgery. His eyes fluttered and his head nodded lazily as he slouched semiconscious in the hospital bed. Shirtless, he wore a pair of thin cotton shorts, and his head was wrapped in a turban of gauze dressing, a Gorgon-like mane of wires exiting the right side of his skull.
Normally, the tangle of wires that spilled from his head would transmit Brookman’s brain waves to a bank of computers down the hall. But Brookman had agreed to be one of Leuthardt’s research subjects, and for an hour each day grad students like Bundy connected his cables to their own cart of amplifiers, digitizers, and computers.
Leuthardt kept the amplifier in what’s known as a Faraday cage, a wood-framed box wrapped in copper mesh to isolate the device from surrounding electronics. From one side of the cage tumbled a rainbow-colored cascade of wires that linked the amplifier with the leads exiting Brookman’s brain. From the other, the amplifier connected to a computer whose screen showed a graph of brain waves from each electrode.
It was a surprisingly ad hoc affair, with single-serving cups of orange juice, travel-sized bottles of Listerine, and toothbrushes still in their wrapping strewn across the room. These signs of the family’s vigil were everywhere: a half-eaten cluster of grapes sat on a table next to individual-sized bottles of body wash and shampoo.
Meanwhile, Brookman’s mother and aunt watched warily from a pair of vinyl-covered lounge chairs. The room’s blinds had been drawn against the morning sun, and a nurse in maroon scrubs sat quietly in the corner, prepared to intervene should Brookman seize during the testing.
It was no empty measure. One day earlier, Brookman’s eyes had rolled back in his head and his body stiffened just as the grad students were setting up their equipment. They beat a quick path to the door as Brookman fell into convulsions, the day’s research session scuttled.
Now Brookman seemed only slightly awake. “We want to see what your brain signals are doing when you’re moving your hand in different ways,” said Bundy, explaining how they wanted to correlate the movements of the sensor-laden glove to specific brain waves. “The goal is to help people that maybe have spinal cord injury or amputation so they can have a prosthetic hand.”
Bundy told Brookman he’d be following a series of simple prompts to link, or calibrate, his brain waves to the movement of his hand. He’d need to flex his thumb, extend his index finger, and pinch with his thumb and index finger. Once they’d calibrated the glove, they would move on to the task itself: Brookman would think about making a specific hand gesture to mentally control the up-and-down movement of a column on the monitor.
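The calibration Bundy described boils down to pairing each cued movement with the brain activity recorded while the patient performs it, then matching new activity against those stored examples. The sketch below is a toy version, assuming a nearest-centroid match on invented two-number feature vectors; real ECoG decoders are far more elaborate, and the gesture labels and numbers here are illustrative only.

```python
# Toy calibration: average the feature vectors recorded for each cued gesture,
# then classify new activity by its closest stored average (nearest centroid).

def calibrate(trials):
    """trials: list of (label, feature_vector) pairs -> dict of label -> centroid."""
    sums, counts = {}, {}
    for label, vec in trials:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Return the calibrated gesture whose centroid is closest to vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Two cued gestures, two calibration trials each (features are made up):
trials = [("flex_thumb", [0.9, 0.1]), ("flex_thumb", [0.8, 0.2]),
          ("pinch", [0.2, 0.9]), ("pinch", [0.1, 0.8])]
model = calibrate(trials)
print(classify(model, [0.85, 0.15]))   # -> flex_thumb
```

Once calibrated this way, the same `classify` step can drive the on-screen task: each decoded gesture maps to a movement of the column on the monitor.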
“Does that sound all right?” Bundy asked after explaining the task.
“Yeah,” drawled Brookman, only half-awake.
“Does that make sense?”
“Yeeeeaaaah.”
But it didn’t make sense. Brookman lagged behind the simple prompts, incorrectly pinching or extending his forefinger five seconds after the computer prompt. By then, the computer had moved on to the next prompt, and Bundy had to start the program again after a few failed tries.
“Just flex your thumb and then extend it out,” Bundy said. “A pinch would be just bringing your thumb and your finger together—just like this,” he said, making an “okay” sign with his right hand.
“Just your thumb, baby,” Brookman’s aunt interjected. “Keep your hand open and just do your thumb. Are you awake, baby?”
Brookman was, but only barely. Though he normally took a host of antiseizure drugs, neurologists had taken him off his medication to better locate his seizure focus. “We want to make sure we get the seizures, because occasionally we’ll put all these electrodes on, and they won’t have any seizures,” said Brookman’s neurologist, Hogan. “We were pretty aggressive.”
The neurologist needn’t have worried: without medication, Brookman had suffered some twenty-five seizures in the first twenty-four hours following the surgery, more than Hogan had ever seen. Emerging disoriented from these rolling convulsions, Brookman didn’t know where he was or what had happened. He’d been wild in his panic, trying to rip the wires from his head and lashing out. It got so bad that at one point the hospital staff restrained him with leather straps.
But now Brookman was dazed and docile with pain relievers. He was meek, eager to work with the researchers, and fearful he would disappoint them.
“Can you understand what he’s saying?” his aunt asked.
“Yeeeeaaaah,” Brookman moaned as the computer prompted him to make a fist.
“Can you make a fist?” she coaxed as he brought his fingertips slowly to his palm. “Good job!”
“Can you flex your thumb?” Bundy jumped in, following the computer prompt.
“Just your thumb, baby,” said his aunt, a woman with spiky brown hair and a peach-colored blouse. “Do it with your thumb.”
But Brookman moved both his index finger and his thumb, moving them slowly in unison.
He clearly wasn’t up to the task. Brookman’s seizures, coupled with the pain medication, kept him semiconscious. He was easily confused, nodding off in the middle of tasks and unable to follow the simple instructions.
“I think we want to just let you rest,” Bundy finally said after several failed attempts. “We might try to come back in a little bit.”
“I’m ripped up,” Brookman apologized.
“We’ll let you rest.”
“Let me rest to where I can at least see straight,” Brookman said. “I’m so tore up right now.”
“That’s understandable,” Bundy responded.
“No matter what, I promise to God and cross my heart I’ll make sure I get the job done right,” Brookman said. “I’ll make sure they get the best possible stuff.”
“It’s okay,” his aunt said. “They know you will.”
But it was too late. Brookman was becoming upset, his eyes brimming with tears and his drug-lazy voice tensing with frustration.
“I just can’t see straight,” he said.
Bundy, a Texan with a full beard and large ears, shifted near the bed, made uneasy by Brookman’s frank emotion. Meanwhile, another grad student, Nick Szrama, ventured that Brookman was already “helping out quite a few people” as he and Bundy began packing up their research equipment.
“Okkkaaay,” Brookman murmured. “If I could see straight, I’d be able to do this.”
* * *
Challenging though they are, difficult research conditions are in some ways the least of Leuthardt’s concerns. The neural matrix is wildly complex. We understand very little of even the brain’s most basic functioning, and its three pounds of neural tissue do not readily yield their secrets to the system of 1s and 0s Leuthardt and his cohorts would use to reveal its mysteries. And that’s to say nothing of the more basic biological problem researchers encounter when they try to join the hard stuff of electrodes to the squishy tissue of the brain.
With so many unknowns, Leuthardt’s vision of creating a meaningful union between mind and machine could ultimately remain little more than a twenty-first-century parlor trick—clunky and limited, but catnip to futurist nerds whose imaginations catch fire each time a researcher with an electrode cap crops up on YouTube. His dream derailed, Leuthardt may one day be remembered only as a neurosurgeon who in the early twenty-first century began amassing a superhuman arsenal of intellectual property. At last count, he had more than 860 patents on file. (“Thomas Edison had 1,093,” he quipped. “So that’s my goal.”)
In this telling, Leuthardt’s Wikipedia page may someday mention that he was born to immigrant parents. That his father moved back to Germany. That he was raised lonesome in working-class Cincinnati by a single mother. That he once published a science fiction novel, dabbled as an abstract painter, and had a yen for objectivist philosophy, futurism, and handguns.
Still, these are but the ornaments of a life, personal statistics that are never all that illuminating. In the meantime, however, Leuthardt couldn’t help it: his mind seemed always to be reaching for some essential through line that would create a new opportunity from a current task. He had found just such a line when he forged his research lab from his surgical practice. Those twin enterprises spawned new inventions—brain retractors, electrode grids, novel brain catheters—that inevitably led to new inventions, new patents, and new start-up companies. It was all of a piece for Leuthardt—exponential results, he called it.
Leuthardt’s real genius, though, was his knack for temporarily lashing together the minds of academics and clinicians, pushing them to engage their brains in ways that don’t come naturally to academics or clinicians. He pressed them into service not to idly toss around a few abstractions while stroking the collective beard. His quarry was something more tangible. He was looking to extract results—technical fixes to problems the rest of us hardly knew existed. That, and to stockpile a defensible armory of intellectual property. “You’ve got to identify the problem, then you can find a solution, or you find a solution and pair it with a problem that matches,” he said. “You don’t have to know everything. You engage people who know more than you, and then you create an environment that can accomplish things that none of you could have done by yourself.”
Ideas for Leuthardt were not some delicate species that crept quietly in the night. Nor were they violent strokes of insight that flashed through the mind of the toiling genius. They certainly didn’t come out of thin air. For Leuthardt, ideas were like reptilian young. “You spawn a lot of them,” he said, “and see which ones survive.” The trick was to uncover and nurture them. Ideas percolated during conversations. They teemed forth in the operating room. They sprang from inefficiencies in patient care and emerged after months of painstaking research. But the best ideas occurred at the margins—that intersection, say, between neuroscience, biomechanical engineering, and cardiology—liminal spaces where intellectual outsiders could tackle long-standing problems. He pushed others as he pushed himself. Those he pushed came to believe in the process, their world revealing itself as a series of solvable engineering problems and legally defensible solutions.
“We get criticized for always looking at the possibilities and not being realistic. But nothing good happens if you just focus on what’s going to prevent you from getting to the next stage,” he said. Problem solving and positive thinking were skills. You had only to train your mind. “You pester your subconscious by constantly trying to think of a solution and not coming up with one. Then you let it go,” he said. “You let your internal—that area below conscious awareness—work on it, and invariably something pops up.”
This was the sunny, future-tense world Leuthardt was forever saying he liked to escape in the operating room. He insisted surgeries were the most relaxing part of his week, when the yeoman tasks of cutting and sawing and suctioning trumped his business plans and inventions, his collaborations and research. He swore he found it relaxing, meditative even, as the rest of the world receded behind the glare of surgical lights and he could steep himself in the minutiae of the moment, cutting through layers of flesh and bone, excavating the brain. There were no distractions in the OR. There was a purity of purpose where he would often work uninterrupted for eight- or even twelve-hour stretches, a much-needed reprieve from his chase for answers.
Or at least that was the idea.
But with his frontal lobe engaged by the delicate task of slicing through neurons and the not-too-delicate task of sawing through skull, his brain’s hindquarters would inevitably begin to sift a problem, subconsciously deconstructing it until, pop! A solution sprang forth.
Leuthardt needed that pop! of the new as much as a marathoner needs a runner’s high. A “professional anorexic” was what he called himself, and for all his avowals that surgery was a sanctum and that he longed for its singularity of focus, the future still beckoned.
And nowhere was that call louder than in epilepsy surgery, a two-step operation that not only allowed him to plug his electrodes into the human brain but enabled him to do so for weeks at a time.
* * *
Leuthardt’s idea, or at least his germ of an idea, was to build a company around the core technology of ECoG, making neuroprostheses for the consumer medical market. Once he’d established a beachhead, showing that neuroprostheses were both safe and effective, he believed the technology would spread to other medical uses. “You’ll start to see the collateralization of that technology—to spinal cord injury and hopefully traumatic amputation,” he said. He was convinced that once neuroprosthetics had successfully established a footing in the medical world, they would eventually achieve something even more momentous: brain-computer interfaces that augmented human ability. “The big leap happens once we become good enough that the implant gives you some type of social advantage. It’s all simple stuff right now. We’re all playing Pong. But Pong evolved. Pong evolved into Xbox 360,” he said. “It’s a natural extension of human behavior. If you can change yourself so you can facilitate things you want to do? People will do that. That’s the grand horizon. Essentially, you’ve unleashed the brain on the world.”
Today’s neuroprostheses may be in beta form, but that hasn’t stopped the army from funding Leuthardt and his colleagues’ research into language. Working with researchers in New York, the group is trying to decode the neural basis of language, raising the possibility that someday soldiers will be able to communicate using only their thoughts—a sort of digital telepathy. Meanwhile, the Defense Advanced Research Projects Agency, or DARPA, the blue-sky research arm of the Department of Defense, is funding scientists in Southern California who are trying to craft a neuroprosthetic for memory. Led by Theodore Berger, these researchers are working in mouse and monkey models to develop a BCI that would bypass the hippocampus, a seahorse-shaped brain structure essential to memory. By analyzing the change in neural firing rates as they enter and exit the hippocampus, Berger and his colleagues have developed what they believe is a sort of meta-algorithm of memory. Scientists first disable the animals’ hippocampi, ensuring they can no longer form new long-term memories. Then, using electrodes, the researchers record incoming sensory data to the disabled hippocampi in a bank of computers, which processes the action potentials to mimic the function of the hippocampus. Researchers then stimulate the subjects’ brains with the transformed firing patterns, creating basic memories with the prosthetic, such as where to find a food reward.
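Stripped of its neuroscience, the bypass described above works like a learned transform: record the firing rates entering the hippocampus, map them through a model, and stimulate downstream with the result. The simple per-channel weighting below is a stand-in for Berger’s actual multi-input, multi-output algorithm; the channel counts and numbers are invented for illustration.

```python
# Schematic of the hippocampal bypass: recorded input firing rates are mapped,
# channel by channel, to the stimulation rates delivered downstream. The real
# system learns a far richer input->output model; this weighting is a stand-in.

def hippocampus_bypass(input_rates, weights):
    """Map recorded input firing rates to output stimulation rates."""
    return [w * r for w, r in zip(weights, input_rates)]

recorded = [10.0, 0.0, 25.0]   # spikes/s arriving at the (disabled) hippocampus
weights  = [0.5, 1.2, 0.8]     # learned transform, one weight per channel
stim = hippocampus_bypass(recorded, weights)
print(stim)                     # patterns used to stimulate the downstream side
```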
In other research, human subjects can use their brains to control digital avatars, and quadriplegics are again feeding themselves using thought-directed prosthetic limbs.
As with the Cold War push to develop atomic weapons, or the midcentury race to discover the structure of DNA, the government is funding much of today’s BCI research. That investment grew in 2013, when President Obama announced the BRAIN Initiative, which added another $100 million for brain and BCI research. Like those earlier races, the emerging field of neuroprosthetics is filled with warring factions. The competitors are again ambitious and highly accomplished—colleagues turned rivals who compete for government grants, scientific dominance, and fame. “These guys know that there will be a Nobel Prize,” said one of the field’s giants, Miguel Nicolelis. “It’s become really, really competitive.”
But unlike those earlier contests, where scientists worked under the banner of a government or university, BCI is coming of age when universities are looking for any competitive edge their employees’ intellectual property may bring. Lured by the potentially mammoth payouts of the private sector, scientists like Leuthardt are bucking against the sober confines of traditional academic research. It’s an entrepreneurial world, where students are schooled in the art of presenting their findings to medical device makers and researchers are mingling with venture capitalists in the hope of monetizing their results.
Like Leuthardt, many researchers are working to develop BCI for clinical applications, but many are equally, if not more, excited about the technology’s potential to amplify human ability. “We don’t know how far we can go,” Kevin Warwick, a cybernetics researcher at the University of Reading in England, said. “What can we do if we link a human brain more closely to a computer network? What opportunities does that open up? You’re into the matrix, and to say, ‘Oh no, that’s just science fiction…’ Well, no.”
Warwick made history in 2002 when he had a grid of a hundred microelectrodes implanted in the median nerve of his left arm. It wasn’t a direct cortical implant, but the device picked up neural activity from his peripheral nervous system, enabling him not only to control a robotic hand linked to the nerves in his arm but also to perceive sensory stimulation from the electrodes. As the robot hand gripped an object more tightly, the electrical pulses to the stimulating electrodes increased in frequency. “The brain makes the best sense it can from the signals,” he said. “It wasn’t likening it to anything else. It didn’t think of it as being my hand gripping in terms of my biological hand. It took it on board and used the signals for what they were.”
In a final flourish, Warwick’s wife, Irena, received a similar implant, enabling the pair to “link” their nervous systems over the Internet. The technology was crude: Warwick’s electrodes delivered an electrical pulse each time Irena moved her hand. Rudimentary as it was, however, they had merged their nervous systems in some small way, projecting their movements far into the digital realm to endow the couple with a neural awareness of each other’s movements.
For Warwick, limiting the use of a BCI to an exoskeleton or a neurally controlled prosthetic is a “conservative” view of the body. “You’re just making the body a little bit more powerful, or giving somebody a slightly more powerful arm,” he said. “You can have a completely different concept of the body your brain is controlling. It doesn’t have to be arms and legs. It can be any type of technology you want. The whole concept of the body can and will be considerably different.” To researchers like Warwick and Leuthardt, neuroprostheses do not only challenge our traditional notions of the human body. Rather, they believe BCIs will fundamentally transform our understanding of the brain, consciousness, and what it means to be human. “If you link your brain to a computer brain with different sensory inputs and different mathematical abilities, you’re into this sort of thing where a computer can deal in multidimensional processing,” Warwick said. “Instead of thinking as your human brain does in three dimensions, you can start thinking, potentially, in twenty or thirty dimensions. What does that mean? No idea! You’re into a whole different world really.”
Just this sort of research is already taking place at Nicolelis’s lab at Duke University, where researchers have used infrared sensors and stimulating electrodes to enable research animals to perceive the infrared portion of the light spectrum—a “sixth sense,” as Nicolelis calls it. In a separate experiment, Nicolelis is using computers to merge the brains of lab animals. Using electrodes to record neural activity from one animal, the scientist uses those same firing patterns to stimulate the brain of a second animal—enabling the second animal to share the experiences of the first. “The brain is so plastic that it can incorporate another body as its source of information to probe the world,” Nicolelis said. “If we take this idea really seriously, we could assimilate anything that gets in contact with the brain—including another being, including the body of someone else. That touches on theories of self, theories of identity.”
Bolstered by futurist writers like Ramez Naam and his promise that future brain implants will seem “as natural as breathing,” these researchers point to Moore’s law, the Silicon Valley adage that computing power doubles every two years, allowing devices to become smaller and faster and cheaper. This principle of exponential growth, so named for the Intel cofounder Gordon Moore, has held true since the birth of computing more than half a century ago. It has certainly held true for today’s smartphones, which bear little resemblance to the room-sized computers of the 1960s.
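The arithmetic behind that adage is simple compounding: one doubling every two years adds up fast.

```python
# Moore's-law arithmetic as the passage uses it: with a doubling every two
# years, n two-year periods yield a 2**n growth factor in computing power.

def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years`, assuming one doubling per period."""
    return 2 ** (years // doubling_period)

# Over the ~50 years from the room-sized machines of the 1960s to today's
# smartphones, that is 25 doublings:
print(moores_law_factor(50))   # 2**25 = 33,554,432-fold
```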
Like those early machines, many of today’s most advanced brain-computer interfaces are wired and bulky. They often require a cart of computers the size of a dishwasher to function properly, and that’s to say nothing of the two or three technicians who must be on hand to keep the interface chugging along.
Nevertheless, the field is already shrinking and enhancing its technologies by creating lighter, more efficient power sources and better neural interfaces that can communicate wirelessly with networks. “When I started twenty years ago, you had to have a roomful of equipment to record twenty neurons,” Nicolelis said. “We are getting close to a thousand neurons now, and it’s about two inches by two inches, the little chip. It’s moving much faster than we expected.”
Neural stimulators, implants that deliver small pulses of electricity to specific brain regions or parts of the peripheral nervous system, have been on the market for years. More than 300,000 deaf patients have received cochlear implants, which stimulate the auditory nerve to approximate natural hearing, since the devices first gained FDA approval in 1984. Similarly, deep-brain stimulation, or DBS, which uses surgically implanted electrode leads to deliver pulses of electricity deep inside the brain, has been approved to mitigate the effects of several movement disorders, including essential tremor and Parkinson’s disease. Research studies have shown that DBS can also be effective in treating severe depression and chronic pain. Meanwhile, NeuroPace, a neurotech firm in Mountain View, California, recently won FDA approval for its brain implant that uses small doses of electricity to disrupt epileptic seizures as they emerge. Similarly, the FDA recently approved the Argus II, a visual prosthetic that uses a camera mounted on a pair of glasses and a retinal implant to endow otherwise blind users with a rough approximation of vision.
These early BCIs remain confined to the medical realm, and they are relatively crude. The Argus II, for instance, contains only sixty stimulating electrodes, something like the equivalent of downscaling a 1080p HD screen to 60 pixels. Although the system’s camera may record the entire scene, the image converter must drastically reduce the image to conform to the implant’s parameters. The result is a black-and-white image that features mainly objects with definitive lines in sharp relief, like street curbs or doorways.
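To get a feel for how severe that reduction is, here is a minimal sketch of the idea, not the Argus II’s actual (proprietary) processing: a grayscale camera frame is averaged down to a hypothetical 6-by-10 grid of cells, one per electrode, and each cell is thresholded to on or off.

```python
def downscale_to_electrodes(image, rows=6, cols=10, threshold=128):
    """Reduce a grayscale image (list of lists, values 0-255) to a
    rows x cols on/off grid, loosely mimicking a 60-electrode array."""
    h, w = len(image), len(image[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Average all the pixels that fall inside this grid cell.
            block = [image[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            row.append(1 if sum(block) / len(block) >= threshold else 0)
        grid.append(row)
    return grid
```

Run on a frame showing a bright doorway against a dark wall, a sketch like this keeps only the high-contrast edge, which is why sharply outlined objects like curbs and doorways survive the reduction while finer detail vanishes.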
Nevertheless, the Argus II has helped otherwise blind individuals navigate city streets, and the company is at work on future models with color vision and even zoom lenses—innovations that lead some futurists to foresee a day when neuroprosthetics will enhance human ability.
“As you lie there on the operating table, the doctor makes a tiny hole in your skull, through which she inserts an incredibly light, flexible mesh of electronic circuits,” Naam writes in his transhumanist manifesto, More Than Human, imagining a day when elective neural implants are as common as teeth whiteners are today. Over time, Naam imagines, “you routinely trade memories and experiences with other implanted humans. You learn to view the world through other people’s eyes. You let others see through yours. As the months and years pass, you increasingly view your implant as a vital and natural part of you. Using it becomes as natural as breathing. You can no longer imagine a disconnected life.”
Undoubtedly, years of research and a thicket of scientific and technical hurdles must be cleared before people will start lining up for elective wireless implants that can be synced to HVAC systems or the Internet. But while researchers like Leuthardt recognize these many obstacles, they remain convinced that this future is on the not-so-distant horizon. “Science progresses accidentally and sometimes exponentially. Very rarely do you get a linear progression,” said Leuthardt, who believes elective neural implants will be available inside of two decades. “So is it unreasonable to think about people using these things to enhance their abilities? If these things are minimally invasive or noninvasive, easy to apply, and easy to use? Probably not.”
* * *
In some essential sense, we’ve been enmeshing our lives with tools ever since Homo sapiens emerged from the hominid line some 200,000 years ago. Be it a spear, fire, eyeglasses, a computer, the printing press, or the wheel, one trait that separates humans from most other animals is our sophisticated ability to fashion diverse technologies to amplify our power, intelligence, and abilities. These tools quite literally become us. Many remain outside our bodies, but with others (like pacemakers) we enter a relationship so intimate that the nonbiological device disappears as it is integrated into our own self-perception. The tool becomes invisible, an augment that, while not inborn, we nevertheless adopt as our own. Medical devices like pacemakers and hip replacements are only the most obvious examples of assimilated technologies, but tools that are not physically merged with the body can also become so integral to our consciousness that they are all but invisible.
Take writing, a technology in that it requires tools and is not an innate trait. Writing is a skill. It must be acquired. Plato himself was skeptical of the transformational power of writing. In Phaedrus, the philosopher wrestles with the primacy of speech, which he considered natural, versus writing, which he deemed a shabby counterfeit. “There is something yet to be said of propriety and impropriety of writing,” Socrates tells Phaedrus before recounting the story of the Egyptian god Theuth, whom he credits with inventing the “use of letters.” Socrates recounts how Theuth presented his invention to Thamus, the god-king of Egypt. Theuth argued that writing would make Egyptians “wiser and give them better memories.” But Thamus was skeptical. He worried that writing would cause people to become lazy and stop using their memories. “They will be hearers of many things,” Thamus said, “and will have learned nothing.”
Of course, Socrates famously never wrote anything down. He left that to his student Plato, who extended the cultural memory of his Socratic dialogues. In essence, it is only through a medium Socrates loathed that we are able to even approximate what he said about writing.
But writing is more than merely a historical record or a tool to relieve the burden of memorization. Rather, writing is a technology we integrate into the brain that allows us to perform cognitive tasks we would otherwise be unable to achieve. Be it thinking through a complex ethical issue or working out an algebraic equation, writing acts as a sort of external memory device, a cognitive augment that allows us to organize our thoughts and break down complex problems into more manageable steps. As the philosopher Andy Clark observes in Natural-Born Cyborgs:
The brain learns to make the most of its capacity for simple pattern completion (4 × 4 = 16, 2 × 7 = 14, etc.) by acting in concert with pen and paper, storing the intermediate results outside the brain, then repeating the simple pattern completion process until the larger problem is solved. The brain thus dovetails its operation to the external symbolic resource. The reliable presence of such resources may become so deeply factored in that the biological brain alone is rendered unable to do the larger sums … Many of our tools are not just external props and aids, but they are deep and integral parts of the problem-solving systems we now identify as human intelligence. Such tools are best conceived as proper parts of the computational apparatus that constitutes our minds.
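Clark’s description of pen-and-paper arithmetic is, in effect, an algorithm, and it can be made concrete. The sketch below (an illustration of his point, not code from any source he cites) gives the “brain” only memorized single-digit facts; everything else, every partial product and carry, lives in an external list standing in for the paper.

```python
# The "brain" knows only simple pattern completions: single-digit products.
DIGIT_PRODUCTS = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_on_paper(x, y):
    """Long multiplication using only single-digit lookups, with every
    intermediate result stored 'on paper' (an external list)."""
    paper = []  # external memory: the shifted partial products
    for i, xd in enumerate(reversed(str(x))):
        row, carry = [], 0
        for yd in reversed(str(y)):
            p = DIGIT_PRODUCTS[(int(xd), int(yd))] + carry
            row.append(p % 10)   # write the digit down
            carry = p // 10      # carry the rest to the next column
        if carry:
            row.append(carry)
        # Shift this partial product by its digit position, as on paper.
        paper.append(int("".join(map(str, reversed(row)))) * 10 ** i)
    return sum(paper)  # the final column-wise addition
```

The biological step here is trivial; the problem-solving power comes from the loop between lookup and external storage, which is exactly Clark’s claim about where “human intelligence” resides.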
It’s not merely that writing or the use of symbolic figures to represent numbers enables our brains to process more abstract or complex problems. Rather, these tools become so deeply embedded in our intelligence that they essentially disappear. We identify the cognitive augment not as something outside us but rather as something that defines us.
Writing may be an extreme example of technical integration, but neuroscience is beginning to show that there’s some scientific truth to the old adage about tennis players’ becoming “one with the racket.” In fact, the neuroplastic brain actually undergoes physical changes after repeated use of a tool, expanding its map of the body to include tools like tennis rackets or eyeglasses.
Studying the brains of right-handed violinists, German researchers have found that the sensorimotor area of the brain that corresponds to the left hand (which right-handed violinists use for the instrument’s fingerboard) is larger than in nonmusicians. More pointedly, a group of Japanese researchers studying monkeys found that the animals’ brains actually regarded certain tools as part of their bodies. Using electrodes to record the animals’ neural activity, the researchers first touched the animals’ hands and arms to identify how the monkeys represented that area of the body in neural space. As they continued recording, the researchers gave the animals rakes, which the monkeys used to drag pieces of food from behind a screen. The rakes were the animals’ sole means of gathering the reward, and researchers allowed the monkeys to use the rakes for several weeks. Once the animals were accustomed to the activity, researchers began to touch the rakes. The results were astonishing: When researchers handled the rakes, the same portion of the monkey’s brain that had fired upon feeling its hand touched began to flare up. The monkeys had mentally embodied their tools, which their brains represented as an extension of the arms and hands.
From pacemakers and contact lenses, to cars, social media, and hip replacements, the distinction between what is us and what is our technology has never been murkier. But while we have always integrated cultural tools like writing, we are now absorbing technology directly into our bodies, be it through implants or wearable technologies that increasingly mediate our social, personal, and professional lives. “Machines are becoming more and more enmeshed in our personal sense of ourselves,” said Leuthardt. “This notion of advancing technology and how we are getting closer and closer to the tools we use, and the notion of body modification—they are all converging to where neuroprosthetics can go beyond being merely a tool for restoring function but actually augmenting function.”
Like the brains we house, we are wildly adaptable, and innovations that were once suspiciously regarded as levelers of culture—the outcry over, say, writing—are quickly absorbed into mainstream use. “Look at plastic surgery. Thirty years ago, all of those procedures were for people who had facial injuries or for mastectomies after breast cancer. They were restorative treatments after some distortion of that person’s anatomic form,” Leuthardt said. “Now there are girls who are eighteen who are getting breast augments before they go off to college. What was once intended to help people with deficits is now a graduation gift.”
To that end, Leuthardt and his colleagues founded Neurolutions, a venture-capital-backed company to transition neuroprosthetics from the laboratory to the free market. The company’s primary mandate, at least initially, is to restore function in stroke patients. “But then,” he said, “the platform becomes available for anything else you want to do with it. The world essentially becomes your iPad.”
Copyright © 2015 by Malcolm Gay