1. THE GREAT WORM
When Robert Morris Jr. released his worm at 8:00 p.m., he had no idea that he might have committed a crime. His concern that night was backlash from fellow geeks: many UNIX administrators would be furious when they found out what he had done. As Cliff Stoll, a computer security expert at Harvard, told The New York Times: “There is not one system manager who is not tearing his hair out. It’s causing enormous headaches.” When the worm first hit, administrators did not know why it had been launched or what damage it was causing. They feared the worst—that the worm was deleting or corrupting the files on the machines it had infected. (It wasn’t, as they would soon discover.)
After he first confessed the “fuckup” to his friend Paul Graham, Robert knew he had to do something. Unfortunately, he could not send out any warning emails because Dean Krafft had ordered the department machines to be disconnected from the main campus network and hence the public internet. At 2:30 a.m., Robert called Andy Sudduth, the system administrator of Harvard’s Aiken Computation Lab, and asked him to send a warning message to other administrators with instructions on how to protect their networks. Though not ready to out himself, Robert also wanted to express remorse for the trouble he was causing. Andy sent out the following message:
From: foo%bar.arpa@RELAY.CS.NET
To: tcp-ip@SRI-NIC
Date: Thu, 3 Nov 1988 03:34:13 EST
Subject: [no subject]
A Possible virus report:
There may be a virus loose on the internet.
Here is the gist of a message I got:
I’m sorry.
Here are some steps to prevent further transmission:
1) don’t run fingerd, or fix it to not overrun its stack when reading arguments.
2) recompile sendmail w/o DEBUG defined
3) don’t run rexecd
Hope this helps, but more, I hope it is a hoax.
Andy knew the worm wasn’t a hoax and did not want the message traced back to him. He had spent the previous hour devising a way to post the message anonymously, deciding to send it from Brown University to a popular internet Listserv using a fake username (foo%bar.arpa). Waiting until 3:34 a.m. was unfortunate. The worm spawned so swiftly that it crashed the routers managing internet communication. Andy’s message was stuck in a digital traffic jam and would not arrive at its destination for forty-eight hours. System administrators had to fend for themselves.
On November 3, as the administrators were surely tearing their hair out, Robert stayed at home in Ithaca doing his schoolwork and staying off the internet. At 11:00 p.m., he called Paul for an update. Much to his horror, Paul reported that the internet worm was a media sensation. It was one of the top stories on the nightly news of each network; Robert had been unaware, since he did not own a television. Newspapers called around all day trying to discover the culprit. The New York Times reported the story on the front page, above the fold. When asked what he intended to do, Robert replied, “I don’t have any idea.”
Ten minutes later, Robert knew: he had to call the chief scientist in charge of cybersecurity at the National Security Agency (NSA). So he picked up the phone and dialed Maryland. A woman answered the line. “Can I talk to Dad?” Robert asked.
The Ancient History of Cybersecurity
Security experts had been anticipating cyberattacks for a long time—even before the internet was invented. The NSA organized the first panel on cybersecurity in 1967, two years before the first link in the ARPANET, the prototype for the internet, was created. It was so long ago that the conference was held in Atlantic City … unironically.
The NSA’s concern grew with the evolution of computer systems. Before the 1960s, computers were titanic machines housed in their own special rooms. To submit a program—known as a job—a user would hand a stack of punch cards to a computer operator. The operator would collect these jobs in “batches” and put them all through a card reader. Another operator would take the programs read by the reader and store them on large magnetic tapes. The tapes would then feed this batch of programs into the computer, often in another room connected by phone lines, for processing by yet another operator.
In the era of batch processing, as it was called, computer security was quite literal: the computer itself had to be secured. These hulking giants were surprisingly delicate. The IBM 7090, which filled a large room at MIT’s Computation Center, was composed of tens of thousands of transistors and miles of intricately wound copper strands. The circuitry radiated so much heat that the machine constantly threatened to overheat; MIT’s computer room had its own air-conditioning system. These “mainframe” computers—probably named so because their circuitry was stored on large metal frames that swung out for maintenance—were also expensive. The IBM 7094 cost $3 million in 1963 (roughly $30 million in 2023 dollars). IBM gave MIT a discount, provided that they reserved eight hours a day for corporate business. IBM’s president, who sailed yachts on Long Island Sound, used the MIT computer for race handicapping.
Elaborate bureaucratic rules governed who could enter each of the rooms. Only certain graduate students were permitted to hand punch cards to the batch operator. The bar for entering the mainframe room was even higher. The most important rule of all was that no one was to touch the computer itself, except for the operator. A rope often cordoned it off for good measure.
In the early days of computing, then, cybersecurity meant protecting the hardware, not the software—the computer, not the user. After all, there was little need to protect the user’s code and data. Because the computer ran only one job at a time, users could not read or steal one another’s information. By the time someone’s job ran on the computer, the data from the previous user was gone.
Users, however, hated batch processing with the passion of a red-hot vacuum tube. Programmers found it frustrating to wait until all of the jobs in the batch were finished to get their results. Worse still, to rerun the program, with tweaks to code or with different data, meant getting back in the queue and waiting for the next batch to run. It would take days just to work out simple bugs and get programs working. Nor could programmers interact with the mainframe. Once punch cards were submitted to the computer operator, the programmers’ involvement was over. As the computer pioneer Fernando “Corby” Corbató described it, batch processing “had all the glamour and excitement of dropping one’s clothes off at a laundromat.”
Corby set out to change that. Working at MIT in 1961 with two other programmers, he developed CTSS, the Compatible Time-Sharing System. CTSS was designed to be a multiuser system. Users would store their private files on the same computer, and all would run their programs on it themselves. Instead of submitting punch cards to operators, each user had direct access to the mainframe. Sitting at their own terminals, connected to the mainframe by telephone lines, they acted as their own computer operators. If two programmers submitted jobs at the same time, CTSS would play a neat trick: it would run a small part of job 1, run a small part of job 2, and switch back to job 1. It would shuttle back and forth until both jobs were complete. Because CTSS toggled so quickly, users barely noticed the interleaving. They were under the illusion that they had the mainframe all to themselves. Corby called this system “time-sharing.” By 1963, MIT had twenty-four time-sharing terminals connected, via its telephone system, to its IBM 7094.
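The trick is easier to see in miniature. The sketch below, written in modern Python rather than the assembly language of the era, imitates the round-robin switching that CTSS performed; the job names and time-slice sizes are invented purely for illustration.

from collections import deque

def run_time_shared(jobs, slice_size=2):
    # Interleave jobs by running each for a short "time slice" in turn.
    queue = deque(jobs.items())            # (name, remaining units of work)
    while queue:
        name, remaining = queue.popleft()
        work = min(slice_size, remaining)
        print(f"running {name} for {work} unit(s)")
        remaining -= work
        if remaining > 0:                  # unfinished jobs go to the back of the line
            queue.append((name, remaining))

run_time_shared({"job1": 5, "job2": 3})
# running job1 for 2 unit(s)
# running job2 for 2 unit(s)
# running job1 for 2 unit(s)
# running job2 for 1 unit(s)
# running job1 for 1 unit(s)

Because each slice is so short, a user watching either job sees steady progress, which is exactly the illusion of a private machine that Corby was after.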
Hell, as Jean-Paul Sartre famously wrote, is other people. And because CTSS was a multiuser system, it created a kind of cybersecurity hell. While the mainframes were now safe because nobody needed to touch the computer, card readers, or magnetic tapes to run their programs, those producing or using these programs were newly vulnerable.
A time-sharing system works by loading multiple programs into memory and quickly toggling between jobs to provide the illusion of single use. The system places each job in different parts of memory—what computer scientists call “memory spaces.” When CTSS toggled between jobs, it would switch back and forth between memory spaces. Though loading multiple users’ code and data on the same computer optimized precious resources, it also created enormous insecurity. Job #1, running in one memory space, might try to access the code or data in Job #2’s memory space.
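A toy model, again in modern Python with invented job names and address ranges, shows both the arrangement and the danger: each job is assigned its own region of a shared memory, and only an explicit check by the system stops one job from reaching into another’s region.

memory = [0] * 100                                   # one shared bank of memory
spaces = {"job1": (0, 50), "job2": (50, 100)}        # each job's (start, end) region

def read(job, address):
    start, end = spaces[job]
    if not (start <= address < end):                 # the system's access check
        raise PermissionError(f"{job} may not read address {address}")
    return memory[address]

memory[75] = 42                                      # pretend job2 stored something here
print(read("job2", 75))                              # allowed: prints 42
try:
    read("job1", 75)                                 # refused by the check
except PermissionError as err:
    print(err)                                       # job1 may not read address 75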
Because users now shared the same computer system, their information was accessible to prying fingers and eyes. To protect the security of their code and data, CTSS gave each user an account secured by a unique “username” and a four-letter “password.” Users who logged in to one account could access only the code and data in the corresponding memory space; the rest of the computer’s memory was off-limits. Corby picked passwords for authentication to save room; storing a four-letter password used less precious computer memory than an answer to a security question like “What’s your mother’s maiden name?” The passwords were kept in a file called UACCNT.SECRET.
In the early days of time-sharing, the use of passwords was less about confidentiality and more about rationing computing time. At MIT, for example, each user got four hours of computing time per semester. When Allan Scherr, a PhD researcher, wanted more time, he requested that the UACCNT.SECRET file be printed out. The request was granted, and he used the password listing to “borrow” his colleagues’ accounts. Another time, a software glitch displayed every user’s password instead of the log-in “Message of the Day.” Users were forced to change their passwords.
From Multics to UNIX
Though limited in functionality, CTSS demonstrated that time-sharing was not only technologically possible, but also wildly popular. Programmers liked the immediate feedback and the ability to interact with the computer in real time. A large team from MIT, Bell Labs, and General Electric, therefore, decided to develop a complete multiuser operating system as a replacement for batch processing. They called it Multics, for Multiplexed Information and Computing Service.
The Multics team designed its time-sharing with security in mind. Multics pioneered many security controls still in use today—one of which was storing passwords in garbled form so that users couldn’t repeat Allan Scherr’s simple trick. After six years of development, Multics was released in 1969.
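The idea behind garbled passwords can be sketched in a few lines of modern Python. The sketch uses a contemporary hash function (SHA-256 mixed with a random “salt”) rather than Multics’ original scrambling routine, and the username and passwords are invented for illustration; real systems today use slower, dedicated password-hashing functions, but the principle is the same: the file holds only the scramble, never the password itself.

import hashlib, os

password_file = {}                       # username -> (salt, scrambled password)

def store(user, password):
    salt = os.urandom(16)                # random value mixed in before scrambling
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    password_file[user] = (salt, digest)

def check(user, password):
    salt, digest = password_file[user]   # rescramble the attempt and compare
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest

store("corby", "time")
print(check("corby", "time"))            # True
print(check("corby", "tide"))            # False
print(password_file["corby"])            # reveals only the salt and the scramble

Printing the password file now exposes nothing a snoop can type at a login prompt, which is precisely what defeats Scherr’s trick.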
The military saw potential in Multics. Instead of buying separate computers to handle unclassified, classified, secret, and top-secret information, the Pentagon could buy one and configure the operating system so that users could access only information for which they had clearance. The military estimated that it would save $100 million by switching to time-sharing.
Before purchasing Multics, the air force tested it. The test was a disaster. It took the evaluators thirty minutes to figure out how to hack into Multics, and another two hours to write a program to do it. “A malicious user can penetrate the system at will with relatively minimal effort,” the evaluation concluded.
The research community did not love Multics either. Less concerned with its bad security, computer scientists were unhappy with its design. Multics was complicated and bloated—a typical result of design by committee. In 1969, part of the Multics group broke away and started over. This new team, led by Dennis Ritchie and Ken Thompson, operated out of an attic at Bell Labs using a spare PDP-7, a “minicomputer” built by the Digital Equipment Corporation (DEC) that cost a tenth as much as an IBM mainframe.
The Bell Labs team had learned the lesson of Multics’ failure: Keep it simple, stupid. Their philosophy was to build a new multiuser system based on the concept of modularity: every program should do one thing well, and, instead of adding features to existing programs, developers should string together simple programs to form “scripts” that can perform more complex tasks. The name UNIX began as a pun: because early versions of the operating system supported only one user—Ken Thompson—Peter Neumann, a security researcher at SRI International, joked that it was an “emasculated Multics,” or “UNICS.” The spelling was eventually changed to UNIX.
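The spirit of that philosophy can be sketched in modern Python: small, single-purpose steps, each doing one thing, chained together the way a shell script strings together programs. The log lines and function names below are invented for illustration.

def lines(text):                         # like cat: split input into lines
    return text.splitlines()

def matching(rows, needle):              # like grep: keep only lines containing needle
    return [row for row in rows if needle in row]

def sort_rows(rows):                     # like sort: put the survivors in order
    return sorted(rows)

log = "error: disk full\ninfo: backup done\nerror: no route to host\n"
print(sort_rows(matching(lines(log), "error")))
# ['error: disk full', 'error: no route to host']

None of the pieces is impressive on its own; the power comes from how easily they snap together.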
UNIX was a massive success when the first version was completed in 1971. The versatile operating system attracted legions of loyalists with an almost cultish devotion and quickly became standard in universities and labs. Indeed, UNIX has since achieved global domination. Macs and iPhones, for example, run on a direct descendant of Bell Labs’ UNIX. Google, Facebook, Amazon, and Twitter servers run on Linux, an operating system that, as its name suggests, is explicitly modeled after UNIX (though for intellectual-property reasons it was written with entirely new code). Home routers, Alexa speakers, and smart toasters also run Linux. For decades, Microsoft was the lone holdout. But in 2019, Microsoft announced that Windows 10 would ship with a full Linux kernel. UNIX has become so dominant that it is part of nearly every computer system on the planet.
As Dennis Ritchie admitted in 1979, “The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes.” Some of these vulnerabilities were inadvertent programming errors. Others arose because UNIX gave users greater privileges than they strictly needed, a design choice that made their lives easier. Thompson and Ritchie, after all, built the operating system to allow researchers to share resources, not to prevent thieves from stealing them.
The downcode of UNIX, therefore, was shaped by the upcode of the research community—an upcode that included the competition for easy-to-use operating systems, distinctive cultural norms of scientific research, and the values that Thompson and Ritchie themselves held. All of these factors combined to make an operating system that prized convenience and collaboration over safety—and the vast number of security holes led some to wonder whether UNIX, which had conquered the research community, might one day be attacked.
WarGames
In 1983, the polling firm Louis Harris & Associates reported that only 10 percent of adults had a personal computer at home. Of those, 14 percent said they used a modem to send and receive information. When asked, “Would your being able to send and receive messages from other people … on your own home computer be very useful to you personally?” 45 percent of those early computer users said it would not be very useful.
Americans would soon learn about the awesome power of computer networking. The movie WarGames, released in 1983, tells the story of David Lightman, a suburban teenager played by Matthew Broderick, who spends most of his time in his room, unsupervised by his parents and on his computer, like a nerdy Ferris Bueller. To impress his love interest, played by Ally Sheedy, he hacks into the school computer and changes her grade from a B to an A. He also learns how to find computers with which to connect via modem by phoning random numbers—a practice now known as war-dialing (after the movie). David accidentally war-dials a military computer system. Thinking he has found an unreleased computer game, David asks the program, named Joshua, to play a war scenario. When Joshua responds, “Wouldn’t you prefer a nice game of chess?” David tells Joshua, “Let’s play Global Thermonuclear War.” David, however, is not playing a game—Joshua is a NORAD computer and controls the U.S. nuclear arsenal. By telling Joshua to arm missiles and deploy submarines, David’s hacking brings the world to the nuclear brink. The movie ends when David stops the “game” before it’s too late. Joshua, the computer program, wisely concludes, “The only winning move is not to play.”
WarGames grossed $80 million at the box office and was nominated for three Academy Awards. The movie introduced Americans not only to cyberspace, but to cyber-insecurity, as well. The press pursued this darker theme by wondering whether a person with a computer, telephone, and modem—perhaps even a teenager—could hack into military computers and start World War III.
All three major television networks featured the movie on their nightly broadcasts. ABC News opened their report by comparing WarGames to Stanley Kubrick’s Cold War comedy, Dr. Strangelove. Far from being a toy for bored suburban teenagers, the internet, the report suggested, was a doomsday weapon capable of starting nuclear Armageddon. In an effort to reassure the public, NORAD spokesperson General Thomas Brandt told ABC News that computer errors as portrayed in the film could not occur. In these systems, Brandt claimed, “Man is in the loop. Man makes decisions. At NORAD, computers don’t make decisions.” Even though NBC News described the film as having “scary authenticity,” it concluded by advising “all you computer geniuses with your computers and modems and autodialers” to give up. “There’s no way you can play global thermonuclear war with NORAD, which means the rest of us can relax and enjoy the film.”
Not everyone was reassured. President Ronald Reagan had seen the movie at Camp David and was disturbed by the plot. In the middle of a meeting on nuclear missiles and arms control attended by the Joint Chiefs of Staff, the secretaries of state, defense, and treasury, the national security staff, and sixteen powerful lawmakers from Congress, Reagan interrupted the presentation and asked the room whether anyone had seen the movie. None had—it opened just the previous Friday. Reagan, therefore, launched into a detailed summary of the plot. He then turned to General John Vessey Jr., chairman of the Joint Chiefs, and asked, “Could something like this really happen?” Vessey said he would look into it.