1
The Last Fortress
I hadn’t known I would see the future when I agreed to speak at the 2018 Summit of the Wharton Neuroscience Initiative at the University of Pennsylvania. But as soon as Josh Duyan, the chief strategy officer of a company called CTRL-labs, began his presentation, the magnitude of the change and the urgency of the choices we are facing became blindingly clear.
Holding up his hands, Duyan lamented the fact that the extraordinary input capabilities of our brains were tethered to such “limited and clumsy output devices.” He noted the step backward we’ve taken when it comes to typing on our phones, moving from ten fingers to two thumbs. Just imagine how much more efficient we’d be, he said, if we could type with our brains instead, or better yet, “operate octopus-like tentacles.”
Until that day, I had puzzled over how (and even whether) consumer neurotechnology could go mainstream. The then-existing applications of neurotech that enabled us to play video games, meditate, or improve our focus with our minds seemed like niche applications that were unlikely to motivate people to go about their everyday lives wearing a silly-looking headband.
But the wristband Duyan was describing seemed altogether different. Our brains, he told us, are constantly transmitting signals to our peripheral nervous system—the parts of the nervous system outside of the brain and the spinal cord. CTRL-labs’ wristband detects these signals using electromyography (EMG).1 When I move my hand, for example, the region in my brain called the motor cortex sends an electrical signal to my spinal cord, which distributes a set of signals to the relevant muscles to tell them to move. Where my lower motor neurons innervate my muscles, a cascade of activity creates a current (measured in milliamperes) and a potential difference, or voltage (measured in millivolts), that can be detected by the electrodes in the wristband.
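The detection step Duyan described can be sketched in code. Here is a minimal, illustrative Python example (not CTRL-labs’ actual pipeline; the sampling rate, threshold, and synthetic signal are all assumptions) showing a common first stage of EMG processing: rectifying the raw voltage trace and smoothing it into an amplitude envelope to flag when a muscle is firing.

```python
import numpy as np

def emg_activation(samples_mv, fs=1000, threshold_mv=0.05):
    """Detect muscle activation in a raw EMG trace (in millivolts).

    Rectify the signal, smooth it with a moving-average window to get
    an amplitude envelope, and flag samples where the envelope exceeds
    a threshold, a common first step before gesture classification.
    """
    rectified = np.abs(samples_mv)           # full-wave rectification
    window = int(0.1 * fs)                   # 100 ms smoothing window
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope > threshold_mv           # boolean activation mask

# Synthetic trace: a quiet baseline, then a burst of muscle activity.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, 1000)    # ~10 microvolt noise floor
burst = rng.normal(0, 0.2, 500)      # ~200 microvolt contraction
trace = np.concatenate([quiet, burst])
mask = emg_activation(trace)
print(mask[:1000].mean(), mask[1000:].mean())
```

A real device would follow this envelope stage with a trained classifier that maps patterns of activation across many electrodes onto intended gestures.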
With its compact and easy-to-wear form, easy integration into existing wearables—like the smart watches it resembles—and application as an interface to other technologies like virtual reality or swiping a smartphone, this device was different in kind from anything I’d seen. It could offer significantly better performance for tasks currently done by peripheral devices like keyboards and mice.
If people are willing to give up reams of personal data to keep in touch with their friends on Facebook, it seemed likely they would be willing to trade their brain privacy to swipe a screen or type with their minds.
The Last Fortress
The things we think, feel, and mull over in our minds help us define who we are to ourselves and to others. What we choose to share about those things—and perhaps more important, what we choose not to share—is fundamental to the intimacy we create with other people.
Until that day in 2018, I believed that our brains were the one place of solace to which we could safely and privately retreat. Your personal diary was always at risk of discovery. If you wrote it on paper, someone could find it and read it; if you typed it on your computer, someone or something could be tracking your keystrokes. But your brain was different. You could think that your friend’s new couch is hideous without hurting her feelings. You could think your boss was a clown while nodding affirmatively at his latest pronouncement. You could let your thoughts wander while listening to a boring speaker, fantasizing about your latest romantic interest. Or imagine new ideas or ways of doing things, without having to worry what others would think if your innovations turned out to be duds. You could work through your sexual orientation and decide when and if you would be ready to share that with others. Or you could dare to dream about overthrowing your tyrannical government.
We may soon lose that last realm of privacy. As noted, new technologies collect our brain data to help us become faster, more efficient, safer, healthier, less stressed, and even more spiritual. Just as we exchanged access to our web search history for free and powerful internet browsers, we will have reasons to want to share the brain data these devices collect. To be clear, the data itself is not the same as our thoughts and feelings. But powerful machine learning algorithms are getting better and better at translating brain activity into what we are feeling, seeing, imagining, or thinking.
Once we become aware that others can access what we are thinking, feeling, or imagining, we may attempt to censor even our thoughts, lest we be ridiculed or ostracized for having ideas that go against the grain. Worse still, if governments gain the power to track the contents of our brains, they can arrest us and punish us for thought crimes.
I am not alone in my concerns; other scholars are starting to sound the alarm too. The neuroscientist Rafael Yuste has advocated for what he calls neurorights, because, he says, our “brain data may be one of the few remaining bulwarks against fully compromising privacy in modern life.”2 “While the body can easily be subject to domination and control by others,” the Swiss bioethicist Marcello Ienca warned, “our minds, along with our thoughts, beliefs and convictions, are to a large extent beyond external constraint. Yet with advances in neural engineering, brain imaging, and pervasive neurotechnology, the mind might no longer be such an unassailable fortress.”3 Dr. Wrye Sententia and the legal theorist Richard Boire have long worried about these issues, having founded the nonprofit Center for Cognitive Liberty and Ethics more than a decade ago.
By now, most people realize that “free” digital services come at the expense of an individual’s personal data. While Google originally sought to “bring order to the web” and provide “high quality search results,”4 it now commands 92 percent of the search engine market in the United States—all the while taking the data we enter into its search engines, web browser, and assets like YouTube and Gmail to create detailed profiles of us that it uses to draw conclusions about what people of different demographics (but increasingly each of us individually) want to see or buy.5
Tech companies’ business models rest on their ability to sell their understanding of us to others. Google does so through its “real-time bidding” process, which provides advertisers with opportunities to acquire uniquely targeted advertising real estate. Meta does much the same thing, harvesting data on its billions of users and creating psychological profiles of them that advertisers can use to microtarget their pitches.6 Shoshana Zuboff coined the term “surveillance capitalism” to describe this now ubiquitous phenomenon, characterizing “data about the behaviors of bodies, minds, and things” as “surveillance assets” that can be used for the purpose of “knowing, controlling, and modifying behavior to produce new varieties of commodification, monetization and control.”7
It’s not just tech giants that are commodifying our data, and it’s not just advertisers that are interested in it. Consumer data has also enabled a revolution in our understanding of health and disease. The personal genomics company 23andMe, for example, made headlines in 2018 when it announced that it had secured a $300 million deal to share its consumers’ genetic data with GlaxoSmithKline.8 It had already entered into data sharing agreements with other major pharmaceutical companies, including Genentech and Pfizer.
Through its business model, default settings, and privacy policies, 23andMe has included 80 percent of its 10.7 million customers in a database that associates millions of raw genome sequences with consumer demographics and other information, enabling large-scale analyses of genetic diseases and their indicators.9 That was 23andMe’s intention all along, as board member Patrick Chung explained in 2013: “The long game here is not to make money selling [saliva] kits [to collect and report on consumer DNA], although the kits are essential to get the base level data. Once you have the data, [23andMe] does become the Google of personalized health care.”10
All of which explains why, when I returned home from the Wharton summit, I dived into learning everything I could about CTRL-labs, its products, and the direction it might lead us.
I watched presentations by Thomas Reardon, cofounder of CTRL-labs, describing a future in which our interactions with technology are driven by neural interfaces, some of which already exist. Google’s Dinosaur Game is a feature of its Google Chrome web browser that makes losing internet connectivity a little bit less frustrating. When Google Chrome goes offline and you angrily pound on your space bar, a pixelated Tyrannosaurus rex appears. You can use your arrow keys to make the dinosaur run across the side-scrolling landscape and jump over obstacles to earn points. Earn a hundred points, and you’ll be rewarded with a screech.
The next time the T. rex appears, you could use CTRL-labs’ Bluetooth-connected wristband to make the dinosaur jump just by willing it to do so. As you maintain the same mental focus you use to work the arrow keys, the device’s powerful algorithms translate the electrical activity your brain sends toward your hands into signals the computer can understand as commands. “Here’s the cool thing,” Reardon described. “I don’t have to tell you to stop [moving your hand]. What you start to realize is the dinosaur is going to jump whether you push the button or not.”11 Which is cool, I thought. But should we really be plugging our brains into Google?
The more I learned, the more certain I became that CTRL-labs would soon be acquired by a major technology firm. It seemed like a natural fit for Apple, as the company could integrate the EMG sensors into its already-popular Apple Watch. In time, the interfaces would allow users to track their sleep, control smart devices in their environment, send text messages just by thinking about it, and even detect signs that they are becoming dangerously drowsy while driving. To my surprise, Facebook’s parent company, Meta, acquired CTRL-labs instead, in September 2019, paying somewhere between $500 million and $1 billion—one of its most expensive recent acquisitions.12
Meta’s head of augmented reality (AR) and virtual reality (VR), Andrew “Boz” Bosworth, announced the acquisition on his personal Facebook page. Bosworth explained how the wristbands would become the “universal controller for all your interactions with technology.”13 So far, Meta has showcased typing and swiping with AR and VR as its likely first application, but as Meta founder and CEO Mark Zuckerberg summed it up, “In some ways, the holy grail of all this is a neural interface.”14
It’s easy to see how using neurotechnology as an interface to other technology could fundamentally change our lives. With advances in predictive algorithms, the wristband could anticipate whole words to type from single letters. Reardon calls this “word forming,” where “you’re not typing. You’re kind of forming words in real time and they kind of spill out of your hand. It’s giving you … choices between words and you quickly learn how to get to the word you want to form.” He added, “There would be no difference between how you produce oral speech and how you produce this controlled text flow.”15
Meta has not achieved word formation at anything close to the rate of speech yet. In demos prior to its acquisition, CTRL-labs was able to achieve only forty words per minute (about the same rate as an average typist but significantly slower than our rate of speech, which is about 140 to 160 words per minute). But that was already double the rate achieved by other researchers, and they have undoubtedly made progress in the meantime.16 With the backing of a company like Meta, real-time neural word decoding is on the horizon.
As for my initial bet that CTRL-labs’ EMG sensors would be integrated into the Apple Watch? I was wrong about the acquiring company, but not about its intentions. Meta plans to launch its own smart watch soon. Zuckerberg posted a photo of EssilorLuxottica’s chairman, Leonardo Del Vecchio, wearing the wristband as part of a joint venture with the smart glasses company. “Leonardo is using a prototype of our neural interface EMG wristband that will eventually let you control your glasses and other devices,” Zuckerberg explained.17 While Meta’s first smart watch release may not yet have the EMG sensors, the tech giant promises that future releases will have that integration.18
Big Tech Going All In on Brain Decoding
Meta may be leading the big tech pack, but scores of other companies are also in the race to develop neural interfaces. Until now, most have focused on much narrower applications. InteraXon’s EEG headset, which I was using to mitigate my migraines, helps consumers meditate more effectively through audible neurofeedback, like birds chirping in response to my brain wave activity.
Myontec, Athos, Delsys, and Noraxon offer athletes and sports therapists EMG-generated insights into what’s happening with their muscles during training and competitions, such as the rate of force development (a measure of explosive strength), improvements in coordination through training, and symmetry and asymmetry in muscle activation. Control Bionics sells NeuroNode, a wearable EMG device for patients with degenerative neurological disorders like ALS/MND. It enables them to control a computer, tablet, or motor device via the bioelectrical signals that are sent to muscles to trigger movements, even if those movements aren’t visible. Kernel offers Flow—a functional near-infrared spectroscopy (fNIRS) device—that looks like a high-tech bike helmet and that measures changes in blood oxygenation levels in the brain to understand and improve its functioning.
But Meta’s investment heralds a new frontier for consumer neurotechnology, in which mainstream technology companies use neurotechnology as the new—and potentially primary—way we will interface with all their platforms.
Apple appears poised to make a similar bet, as it has hinted that it will integrate health sensors like EEG into its AirPods, much as it integrated ECG sensors into the Apple Watch.19 Other neurotech companies are charting the way, making them likely targets for acquisition. Emotiv has launched MN8, earbuds with two-channel integrated EEG sensors.20 NextSense, backed by Alphabet’s moonshot division and spun out as an independent company, believes it has the winning recipe for a brain-health monitoring platform with its EEG earbuds, and hopes to build a “mass-market brain monitor.”21 Apple may be the company’s key to doing so. In the spirit of Steve Jobs’s Digital Hub, Apple executive Kevin Lynch has extolled the value of multiple devices working together.22 EEG may be the next device in its wheelhouse.
Snap (the company behind Snapchat) has acquired Paris-based neurotech company NextMind, known for its EEG-based brain wave controller. Snap plans to integrate the technology into its augmented reality platform, to “monitor neural activity to understand your intent when interacting with a computing interface, allowing you to push a virtual button simply by focusing on it,” the company explained in its blog post announcing the acquisition.23
Microsoft has obtained a patent on an EEG device that allows users to navigate web browsers and apps with their brains and will reward people with cryptocurrency for doing so.24 Neurable promises “the mind unlocked” with its “smart headphones for smarter focus.” And then there are Elon Musk’s Neuralink, Thomas Oxley’s Synchron, and Marcus Gerhardt’s Blackrock Neurotech (companies we’ll look at more closely in chapter 9), which are working on neurotechnology implanted directly inside our brains. These devices will be far more powerful than any existing wearable neurotechnology—powerful enough to achieve real-time thought and imagery decoding, far beyond the capabilities of existing consumer-grade EEG and EMG devices.
But whether worn on our scalps or wrists, or deeply embedded in our brains, all these devices share one striking commonality. Each records our raw neural activity—which can be saved, aggregated, and mined for much more than what consumers are using it for. The black box of our brains has been opened. Mark Zuckerberg is right. Neural interface is the “holy grail.”
Of data tracking by corporations.
“Raw” Brain Data Is Uniquely Sensitive
Suppose you keep a written diary and wish to share a passage from it with a friend. You hand it to them and ask them to read the highlighted passage. Your friend does so and hands it back. Now imagine instead that your friend also made a copy of your diary, which they keep in a file on their desk so they can return to it anytime they want to learn something new about you, whether you intended to share it or not.
Raw brain data captured by EEG, EMG, and other neurotechnology is similar. EEG, for example, records raw brain data—delta waves (the slowest, below 4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (the fastest, at 30–80 Hz)—as well as the electrical activity of nearby muscles, electrode motion interference, and ambient noise.
This “raw” data is then fed through software that filters out artifacts and extraneous information, analyzes the brain waves, and picks out the relevant information to return to you. If brain activity is recorded and stored, that same raw brain data can be returned to time and again and mined to learn all kinds of additional insights about you—such as whether you are at risk of stroke or Alzheimer’s or ADHD. All without your knowledge.
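The filtering-and-analysis step can be illustrated with a short Python sketch (the band edges are the conventional ones above; the sampling rate and synthetic trace are assumptions, not any vendor’s pipeline): split a raw one-channel trace into the canonical bands by summing the spectral power that falls in each frequency range.

```python
import numpy as np

BANDS = {  # conventional (approximate) EEG band edges in Hz
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
    "beta": (12, 30), "gamma": (30, 80),
}

def band_powers(signal, fs=256):
    """Return the total spectral power in each EEG band for one channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1/fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic "raw" trace: a 10 Hz (alpha) rhythm buried in broadband noise.
fs, secs = 256, 4
t = np.arange(fs * secs) / fs
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

powers = band_powers(trace, fs)
strongest = max(powers, key=powers.get)
print(strongest)
```

The point is that once the raw trace is stored, nothing limits the analysis to the question the consumer originally asked; the same samples can be re-run through any number of such filters later.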
While you can still choose to use or not use most consumer neurotechnology, once you do use it, you may be revealing far more than you intended: Blinking, the beating of our hearts, sweating—these are all automatic functions that neither require nor follow our conscious wills. More complex automatic brain functions include our visceral or emotional reactions to external events. A scary movie, a passionate kiss, or a painful burn will all evoke automatic reactions that occur outside our cognitive or “rational” thought processing, but that nonetheless leave traces in the brain.25
Even emotional states and biases can be decoded. When someone says something that is hurtful, you might choose to conceal your feelings. But your brain still registers them. You might be feeling bored and lonely in your relationship but aren’t ready to share that with your spouse. But if your spouse had access to your raw brain data and the tools to interpret it, your brain could give you away.26 You might work hard to combat your implicit biases while you are at work, but your subconscious still registers them.27 If you were wearing a consumer headset at the time, your biases could be decoded and made public.
Hackers could even install brain spyware into the apps and devices you are using. A research team led by UC Berkeley computer science professor Dawn Song tried this on gamers who were using a neural interface to control a video game. As they played, the researchers inserted subliminal images into the game and probed the players’ unconscious brains for reactions to stimuli—like postal addresses, bank details, or human faces. Unbeknownst to the gamers, the researchers were able to steal information from their brains by measuring the unconscious brain responses that signaled recognition of stimuli, including a PIN code for one gamer’s credit card and their home address.28 Neural data could also be intercepted as it is sent to a paired cell phone if it isn’t well secured.29
“It’s happening somewhat faster than we thought,” says Howard Chizeck, a professor of electrical engineering at the University of Washington. Chizeck expects that millions of people will soon be playing online games while wearing brain–computer interface devices. The operator of the game could play Twenty Questions, and measure the automatic brain reactions to what the gamer sees. “I could flash pictures of [gay and straight] couples and see which ones you react to. And going through a logic tree, I could extract your sexual orientation,” Chizeck says. “I could show political candidates and begin to understand your political orientation, and then sell that to pollsters.” This kind of probing could be accomplished through spyware by a malicious actor but could just as easily be built into popular games and technologies, allowing the manufacturers to surreptitiously collect even more data about us.30 A recent report claims the Chinese government is already using cutting-edge AI and neurotechnology to analyze facial expressions and brain waves to see if a person is attentive to “thought and political education.”31
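The kind of recognition probing Song’s team and Chizeck describe can be illustrated with a toy simulation (all numbers and stimulus labels below are invented for illustration): flash several candidate stimuli, average the brain-response epochs recorded after each one, and flag the stimulus whose average response shows the largest post-stimulus peak, a crude stand-in for a P300-like recognition signature.

```python
import numpy as np

def recognized_stimulus(epochs_by_stimulus):
    """Given {stimulus: (n_trials, n_samples) array of EEG epochs},
    return the stimulus whose trial-averaged response has the largest
    peak, a toy proxy for detecting an evoked recognition response."""
    scores = {s: epochs.mean(axis=0).max()
              for s, epochs in epochs_by_stimulus.items()}
    return max(scores, key=scores.get)

# Toy data: the familiar stimulus evokes a bump partway through the
# epoch; the unfamiliar ones produce only noise.
rng = np.random.default_rng(2)
n_trials, n_samples = 40, 200
bump = np.exp(-0.5 * ((np.arange(n_samples) - 120) / 10) ** 2)
epochs = {
    "PIN 1234": rng.normal(0, 1, (n_trials, n_samples)) + 2 * bump,
    "PIN 9876": rng.normal(0, 1, (n_trials, n_samples)),
    "PIN 5555": rng.normal(0, 1, (n_trials, n_samples)),
}
print(recognized_stimulus(epochs))
```

Averaging across trials is what makes the attack work: the noise cancels out while the involuntary recognition response does not, so the probe needs no cooperation, or even awareness, from the person being probed.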
While other personal tracking data has proved strongly predictive of certain things—our purchasing behavior, for example—brain data can reveal our deepest-held feelings or biases, ones we ourselves may not yet have acknowledged. It can even be used to predict how agreeable or neurotic we are by looking at our alpha, beta, and theta bands!32 In the words of the philosopher Sarah Goering and neuroscientist Rafael Yuste, raw brain wave data “could provide access to highly intimate information that is proximal to one’s identity,” such as our political orientation, sexual orientation, or our tolerance or strategy for dealing with risk.33
Many people believe they are shielded from targeted misuse if their information is stripped of identifying information. But our brain patterns may be even more unique than our fingerprints.34
Copyright © 2023 by Nita A. Farahany.