0.
THE LEIBNIZ ARCHIPELAGO
From Analog to Digital and Back
In July 1716, Gottfried Wilhelm Leibniz, a seventy-year-old lawyer, philosopher, and mathematician whose “tragedy was that he met the lawyers before the scientists,” joined Peter the Great, the forty-four-year-old tsar of Russia, in taking the cure at Bad Pyrmont in Saxony, drinking mineral water instead of alcohol for the duration of their eight-day stay.1
Leibniz, who would be dead within the year, laid three grand projects before the tsar. First was a proposal to send an overland expedition across Siberia to the Kamchatka Peninsula and the Pacific, where one or more oceangoing vessels would be launched on a voyage of discovery to determine whether Asia and America were separated, and if so, where. What languages were spoken by the inhabitants, and could this shed light on the origins and evolution of the human race? Were the rivers navigable? How did the magnetic declination vary with location, and did it also vary in time? What lay between the Russian Far East and the American Northwest? Could Russia extend its claims?
Second was a proposal to establish a Russian academy of sciences, modeled on the success of the existing European academies while leaving their infirmities behind.
Third was a plan to use digital computers “to work out, by an infallible calculus, the doctrines most useful for life, that is, those of morality and metaphysics,” by encoding natural language and its underlying concepts through a numerical mapping to an alphabet of primes.2 Leibniz sought Peter’s support to introduce this calculus ratiocinator to China, whose philosophers he credited with the invention of binary arithmetic, and to adopt this system in the tsar’s campaign for the modernization and expansion of Russia, which Leibniz saw as a tabula rasa, or blank slate, upon which his vision of a rational society based on science, logic, and machine intelligence might play out.
“The human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes,” he argued. “Reason will be right beyond all doubt only when it is everywhere as clear and certain as only arithmetic has been until now.”3 Leibniz observed that the functions of binary arithmetic correspond to the logical operations of “and,” “or,” and “not.” Strings of binary symbols, whether represented by zeros and ones or black and white marbles, could both encode and logically manipulate concepts of arbitrary complexity in unambiguous terms. This universal language would open a new era in human affairs. Leibniz saw Peter’s ambitions as the means to propagate this revolution, drawing the analogy that building a new structure is easier than remodeling an old one whose foundations have settled unevenly, leaving defects that have to be repaired.
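In terms Leibniz never used, his observation can be sketched in a few lines of modern Python: binary addition assembled entirely from “and,” “or,” and “not.” The function names are mine, and the sketch is illustrative rather than a reconstruction of any calculus Leibniz actually wrote down.

```python
# A minimal sketch, assuming nothing but the three logical operations
# Leibniz identified: binary addition built from "and," "or," and "not."

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # "Exclusive or," composed from the three primitives:
    # (a or b) and not (a and b).
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    """Add two binary digits plus a carry, yielding (sum, carry_out)."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add(x, y, width=8):
    """Ripple-carry addition: one pair of bits at a time, carrying as we go."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add(27, 15) == 42
```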
The Russian Academy of Sciences was founded in 1724. The Great Northern Expedition was launched in 1725, followed by a 126-year Russian presence in America, beginning with the arrival of Bering and Chirikov in 1741 and ending in 1867 with the transfer of Alaska to the United States. Leibniz’s third project received no support. Although “so amused that he had looked at the instrument for half an hour,” even probing it with a pencil to see how it worked, Peter took no further interest in Leibniz’s mechanical computer.4 The powers of digital computing were lost on the tsar.
* * *
It took another two centuries, and the invention of electronics, for Russia, China, and the rest of the world to become the tabula rasa of Leibniz’s plan. Then suddenly, in less than fifty years, we advanced from the first primitive electronic digital computers, assembled from vacuum tubes and exchanging coded sequences at the speed of punched cards and paper tape, to a world where code proliferates at the speed of light. The ability of digital computers to mirror the non-digital universe is taken for granted today. To question the supremacy of these powers elicits the same disbelief as trying to explain them did in the time of Peter the Great.
The differences between analog computing and digital computing are fundamental but not absolute. Analog computation deals with continuous functions, whose values change smoothly over time. Digital computation deals with discrete functions, whose values change in precise increments from one instant to the next. Leibniz might have envisioned an analog computer operating by means of a fluid running through a maze of pipes, regulated by valves that could be varied continuously between fully open and fully closed. As one of the founders of the infinitesimal calculus, he was no stranger to the continuous functions that such a device could evaluate or control. Instead, he envisioned a digital computer, with binary arithmetic executed by marbles shifted by on/off gates as they ran along multiple tracks.
These marbles were either black or white; no shades of gray allowed. They could not be divided into smaller marbles or merged into marbles of larger size. When arriving at a gate, they had to follow either one path or the other, with no middle ground. Any given sequence of marbles either corresponded exactly to some other sequence or did not. All questions had to be stated unambiguously, and if a question was repeated, the answer would be the same every time. This imagined computer was never built, but the binary digits, or bits, that permeate every facet of our existence are Leibniz’s marbles, given electronic form.
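One way such a machine could count can be sketched under the assumption that each gate is a simple two-state toggle; this is my reconstruction, not a mechanism Leibniz specified. A marble arriving at a gate flips it: if the gate was off, the marble is absorbed and the bit is set; if it was on, the bit resets and the marble rolls on to the next gate, carrying exactly as in binary addition.

```python
# A hypothetical marble-run counter: each gate is a two-state toggle,
# and a track of gates holds a binary number, least significant gate first.

def drop_marble(gates):
    """Route one marble down the track, flipping gates as it goes."""
    for i, gate in enumerate(gates):
        if gate == 0:
            gates[i] = 1      # gate flips on; the marble stops here
            return
        gates[i] = 0          # gate flips off; the marble carries onward

gates = [0, 0, 0, 0]          # four gates = a four-bit register
for marble in range(11):
    drop_marble(gates)

assert gates == [1, 1, 0, 1]  # 11 in binary is 1011, read least significant first
```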
Nature uses digital coding, embodied in strings of DNA, for the storage, replication, modification, and error correction of instructions conveyed from one generation to the next, but relies on analog coding and analog computing, embodied in brains and nervous systems, for real-time intelligence and control. Coded sequences of nucleotides store the instructions to grow a brain, but the brain itself does not operate, like a digital computer, by storing and processing digital code. “If the only demerit of the digital expansion system were its greater logical complexity, nature would not, for this reason alone, have rejected it,” argued John von Neumann in 1948, explaining why brains do not use digital code.5
In a digital computer, one thing happens at a time. In an analog computer, everything happens at once. Brains process three-dimensional maps continuously, instead of processing one-dimensional algorithms step by step. Information is pulse-frequency coded, embodied in the topology of what connects where, not digitally coded by precise sequences of logical events. “The nervous system of even a very simple animal contains computing paradigms that are orders of magnitude more effective than are those found in systems built by humans,” argued Carver Mead, a pioneer of the digital microprocessor, urging a reinvention of analog processing in 1989.6 Technology will follow nature’s lead in the evolution of true artificial intelligence and control.
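Pulse-frequency coding can be sketched in a few lines, with illustrative names and parameters rather than a model of any real neuron. The value lives in how often pulses arrive over a window, not in the exact position of any one pulse, so silencing a few pulses barely moves the decoded result.

```python
# A minimal sketch of pulse-frequency coding: the signal is the rate of
# pulses, recovered by counting, not by decoding a precise sequence.

import random
random.seed(0)

def pulse_train(rate, n_slots=1000):
    """Emit pulses so that, on average, `rate` of the slots fire (0 <= rate <= 1)."""
    return [1 if random.random() < rate else 0 for _ in range(n_slots)]

def firing_rate(train):
    """Decode by counting pulses over the window: statistics, not logic."""
    return sum(train) / len(train)

train = pulse_train(0.73)
damaged = [0 if random.random() < 0.02 else p for p in train]  # silence ~2% of slots
print(round(firing_rate(train), 2), round(firing_rate(damaged), 2))  # both near 0.73
```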
Electronics underwent two critical transitions over the past one hundred years: from analog to digital, and from high-voltage, high-temperature vacuum tubes to silicon’s low-voltage, low-temperature solid state. That these transitions occurred together does not imply a necessary link. Just as digital computation was first implemented using vacuum-tube components, analog computation can be implemented, from the bottom up, by solid-state devices produced the same way we make digital microprocessors today, or from the top down through the assembly of digital processors into analog networks that treat the flow of bits not logically but statistically: the way a vacuum tube treats the flow of electrons, or a neuron treats the flow of pulses in a brain.
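The top-down route can be illustrated by what is now called stochastic computing; the example is mine, not drawn from any particular machine. A number between zero and one rides on a bitstream as the probability that any given bit is a one, and a single “and” gate, applied bit by bit, multiplies two such numbers.

```python
# A sketch of treating bit flows statistically: values are carried as
# bit densities, and an AND gate becomes a multiplier.

import random
random.seed(0)

def stream(p, n=100_000):
    """A bitstream whose statistical density encodes the value p (0 <= p <= 1)."""
    return [1 if random.random() < p else 0 for _ in range(n)]

a, b = stream(0.6), stream(0.5)
product = [x & y for x, y in zip(a, b)]       # bitwise AND, one bit at a time
print(round(sum(product) / len(product), 2))  # ~0.3 = 0.6 * 0.5
```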
Leibniz’s digital universe, despite its powers, remains incomplete, just as Isaac Newton, his rival over credit for the invention of the calculus, gave us a mathematical description of nature that predicts everything correctly, but only up to a certain point. The next revolution will be the coalescence of programmable machines into systems beyond programmable control.
* * *
There are four epochs, so far, in the entangled destinies of nature, human beings, and machines. In the first, preindustrial epoch, technology was limited to the tools and structures that humans could create with their own hands. Nature remained in control.
In the second, industrial epoch, machines were introduced, starting with simple machine tools, that could reproduce other machines. Nature began falling under mechanical control.
In the third epoch, digital codes, starting with punched cards and paper tape, began making copies of themselves. Powers of self-replication and self-reproduction that had so far been the preserve of biology were taken up by machines. Nature seemed to be relinquishing control. Late in this third epoch, the proliferation of networked devices, populated by metazoan codes, took a different turn.
In the fourth epoch, so gradually that almost no one noticed, machines began taking the side of nature, and nature began taking the side of machines. Humans were still in the loop but no longer in control. Faced with a growing sense of this loss of agency, people began to blame “the algorithm,” or those who controlled “the algorithm,” failing to realize there no longer was any identifiable algorithm at the helm. The day of the algorithm was over. The future belonged to something else.
A belief that artificial intelligence can be programmed to do our bidding may turn out to be as unfounded as a belief that certain people could speak to God, or that certain other people were born as slaves. The fourth epoch is returning us to the spirit-laden landscape of the first: a world where humans coexist with technologies they no longer control or fully understand. This is where the human mind took form. We grew up, as a species, surrounded by mind and intelligence everywhere we looked. Since the dawn of technology, we were on speaking terms with our tools. Intelligence in the cloud is nothing new. To adjust to life in the fourth epoch, it helps to look back to the first.
The beginning of this book is set at the close of the first epoch, the ending is set at the opening of the fourth, and the second and third epochs fall in between. The following chapters illuminate these transitions from a range of viewpoints over the past three hundred years. What drove the convergence of Leibniz’s dreams of a digital universe with his mission to explore the American Northwest Coast? How did the two movements originate, and what led them to intersect? What are the differences between analog computing and digital computing, and why does this matter to a world that appears to have left analog computation behind? To someone who grew up in the third epoch but was drawn to the ways of the first, how to reconcile the distinction, enforced by the American educational system, between those who make a living with their minds and those who make a living with their hands? In an age that celebrates the digital revolution, what about those who fought for the other side?
* * *
The Bering-Chirikov expedition reached North America in 1741. The Russians, met by an indigenous population without written language but with advanced technology and arts, left a record of the Northwest Coast and its inhabitants at the moment that precontact times came to an end. Fifteen members of the expedition went ashore and were left behind. Their fate remains unknown.
At the close of the nineteenth century, the Chiricahua Apaches, descended from onetime Alaskans who had arrived from Asia and continued south, resisted subjugation to a later date than anyone else. In pursuit of the last of the Apaches, the U.S. government implemented the first large-scale, high-speed, all-optical digital telecommunications network in North America. The first shots in the digital revolution and the last arrows deployed in war against regular soldiers of the U.S. Army were fired at the same time.
The invention of the vacuum tube, or thermionic valve, enabled machines with no moving parts except electrons, their operation limited not by the speed of sound that governs the transmission of information in mechanical devices but by the speed of light. It was in the war-surplus ferment of the electronics industry that otherwise abstract contributions from theoretical physics and mathematical logic combined to realize Leibniz’s vision of binary arithmetic as a universal code. The vacuum tube, treating streams of electrons as continuous functions, was an analog device. The logical processing of discrete pulses of electrons had to be imposed upon it, against its will, until the advent of the transistor brought this age of reptiles to a close.
The Hungarian physicist Leo Szilard, after helping to invent nuclear weapons, spent the rest of his life opposing them—except for their use in the exploration of space. This possibility was taken up by a privately organized, government-supported project whose mission was to reach Saturn by 1970 in a four-thousand-ton spaceship carrying one hundred people on a voyage modeled after that of Darwin’s Beagle, allowing four years to complete the trip. Project Orion was abandoned by the U.S. government, while Szilard’s fictional Voice of the Dolphins led to my own adventures on the Northwest Coast.
Three of those years were spent in a tree house ninety-five feet up in a Douglas fir above Burrard Inlet in British Columbia, on land that had never been ceded by its rightful owners, the Tsleil-Waututh. Trees integrate a range of continuous inputs into a single channel of digital output: growth rings that are incremented one year at a time. I was surrounded by growth rings going back to the year 1426.
My own version of string theory holds that lashing and sewing are overlooked drivers of our technological advance. On the Northwest Coast, Russian American colonists adopted the indigenous technology of the skin-boat builders rather than replacing it with something else. I took the Russian adoption of the Aleut kayak, or baidarka, as a model, not only for my own boatbuilding but also for how technology is emulating the design and tactics of living organisms on all fronts.
Samuel Butler’s “Darwin Among the Machines,” appearing out of nowhere in the New Zealand wilderness of 1863, was fleshed out into his prophetic, dystopian Erewhon of 1872. In his notes for Erewhon Revisited, Butler went on to warn us that the advance of artificial intelligence would be driven by advertising and that both God and Darwin might turn out to be on the side of the machines.
The optically transmitted intelligence and numbered identity tags of the nineteenth-century campaign against the Apaches have descended to the optically fed data center recently established in the nearby desert by the National Security Agency. In the analog universe, time is a continuum. In the digital universe, time is an illusion conveyed by sequences of discrete, timeless steps. No time is there. What happens to Leibniz’s vision of a digital enlightenment when all human activity is machine-readable and all steps can be retraced?
In 1890, after the exile of the Chiricahua Apaches as prisoners of war to Florida, a vision received by the Paiute prophet Wovoka led to a grassroots movement among the North American First Nations, promising to bring their dead warriors and dying ways back to life. An analogous prophecy, conveyed through a mathematical conjecture known as the continuum hypothesis, suggests that the powers of analog computation, transcending the bounds of algorithms, or step-by-step procedures, will supervene upon the digital and reassert control. Electrons, treated digitally, became bits; bits, treated statistically, have become electrons. The ghost of the vacuum tube has returned.
* * *
Leibniz’s ideas arrived in North America twice: in the twentieth century with the digital computer, and with the Bering-Chirikov expedition in the eighteenth. When the navigators following Peter’s instructions reached the American Northwest Coast, they were met by people who had been doing just fine since the last technological revolution, some fifteen thousand years earlier.
The slate was not blank.