1
THEY’RE HERE
In the early 1900s, the artist Oskar Kokoschka was the blond, rakishly handsome enfant terrible of the Viennese art scene. Known as much for his colorful personal life as for his expressionist paintings, he was dubbed by the press “the wildest beast of all.”
One of the most fateful encounters of his life came in 1912 when he met Alma Mahler, the beautiful, dark-haired widow of the famous composer Gustav Mahler. The recently bereaved Alma was one of the most desired women in all of Vienna, and the two of them plunged headfirst into a tumultuous love affair. Alma became Oskar’s artistic muse, and when they weren’t making love, he was compulsively drawing and painting her.
Soon, however, Oskar’s passion turned to obsession and, for Alma, his jealousy became a prison. After three stormy years, Alma broke off the affair, telling Oskar that she was afraid of being overcome by passion. He was devastated, and even more distraught to learn that she had aborted his baby, an act that he took as the ultimate rejection.
To try to win Alma’s admiration (and perhaps her sympathy), Oskar volunteered as an Austrian cavalryman in World War I. He was soon seriously wounded and was twice diagnosed with shell shock, an antiquated term for post-traumatic stress. While he was recovering in a hospital in Dresden, doctors decided that he was mentally unstable and had him discharged from the army. Oskar returned to Vienna only to discover that, rather than welcoming him with open arms, as he had hoped, Alma had taken up with another man. He was plunged into despair.
The year was 1918, and Oskar decided that if he couldn’t have Alma, he would have a life-size doll made in her likeness. He hired the Munich doll maker Hermine Moos and provided her with numerous detailed sketches and a life-size oil painting of Alma. He was exacting, even obsessive in his instructions to Moos, hoping for the most lifelike copy of Alma possible. He wrote to the doll maker:
Yesterday I sent a life-size drawing of my beloved and I ask you to copy this most carefully and to transform it into reality. Pay special attention to the dimensions of the head and neck, to the ribcage, the rump and the limbs. And take to heart the contours of the body, e.g., the line of the neck to the back, the curve of the belly. Please permit my sense of touch to take pleasure in those places where layers of fat or muscle suddenly give way to a sinewy covering of skin … The point of all this for me is an experience which I must be able to embrace!
Later, he asked, “Can the mouth be opened? Are there teeth and a tongue inside? I hope so.”1 As the months of making the doll progressed, it became clear that Oskar was not just hoping for a reminder of Alma; he sought to replace her altogether. In anticipation of his doll, and perhaps to make Alma jealous, Kokoschka had his maid spread rumors throughout Vienna that he had a new woman in his life, and curiosity about who this new woman could be abounded.
When the doll finally arrived, Oskar took his fetish to new levels. He took the doll for long carriage rides and rented a box at the Vienna opera house, where he seated her next to him. He sat with her at sidewalk cafés, where he carried on conversations with her. He even had his maid attend to her, dressing her in fine clothes and treating her like the lady of the house. “See,” he seemed to be saying to Alma, “I no longer need you because a beautiful doll can take your place!” He painted hundreds of portraits of the doll, over one hundred of which featured the two of them together, seemingly as a happy couple.
Kokoschka’s doll dissolved the boundary between life and art, allowing him to recede into a fantasy world that subsumed the reality of his love life. She also gave him an object on which to act out his unresolved feelings toward the real Alma. Eventually, he managed to do to the doll what he would have liked to do to his lost love. The end of the relationship was just as florid and dramatic as its commencement. He wrote:
I engaged a chamber orchestra from the Opera. The musicians, in formal dress, played in the garden, seated in a Baroque fountain whose waters cooled the warm evening air. A Venetian courtesan, famed for her beauty and wearing a very low-necked dress, insisted on seeing the Silent Woman face to face, supposing her to be a rival. She must have felt like a cat trying to catch a butterfly through a window-pane; she simply could not understand. Reserl paraded the doll as if at a fashion show; the courtesan asked whether I slept with the doll, and whether it looked like anyone I had been in love with … In the course of the party the doll lost its head and was doused in red wine. We were all drunk.2
The next morning, Kokoschka’s doll was found naked and beheaded in his garden. Apparently having worked out his issues with Alma, Kokoschka no longer needed her.
His bizarre acting out was extreme, but as is often the case with eccentricity, Kokoschka’s need to project his emotional issues onto an inanimate object is only a matter of degree, not kind. The doll allowed him to channel his obsessive feelings in a relatively harmless way; better that he behead a doll than murder his real-life love. Ultimately, the doll could not return his affections. It couldn’t speak or respond to his caresses or even feign emotion. Yet it played an integral part in his effort to resolve the issues he had yet to work out with the real Alma. Kokoschka went on to live a long, characteristically eccentric life and died in 1980.
A hundred years later, we have Pepper, the social robot. Pepper is not an erotic symbol but rather takes the form of an innocent, somewhat needy child. Made of metal and plastic, he is just under three feet tall and looks as much like a cartoon character as a robot. His large, round eyes peer out of a round head, giving him the air of an inquisitive toddler. When you speak to him, his eyes gaze up at your face, and he follows your movements with his head.
Interacting with Pepper is effortless because he takes the initiative. All you have to do is move and speak, and Pepper will start a conversation with you in his childlike voice. But Pepper can do far more than a mere doll. Not only does he respond to your speech, he “reads” your expressions to ascertain your emotional state and responds with what his makers deem is appropriate behavior. For example, if Pepper notices that you look sad, he’ll try to cheer you up by playing your favorite song. And if you don’t feel like hearing your favorite song, just tell him so in the natural language you would use to address a real person. The music will stop, but Pepper isn’t likely to give up. He’s on a mission, it seems, to make you feel better.
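For technically inclined readers, the behavior described above can be imagined, at its simplest, as a rule that maps a detected emotion to a comforting action. The sketch below is purely illustrative; the function names, emotion labels, and preference keys are my own inventions and do not correspond to Pepper’s actual software.

```python
# A toy sketch of an emotion-driven companion-robot response rule.
# All names here are hypothetical, not Pepper's real API.

def choose_action(emotion, preferences):
    """Map a detected emotion to a comforting behavior."""
    if emotion == "sad":
        # Try to cheer the person up with their favorite song.
        return ("play_song", preferences.get("favorite_song", "a default tune"))
    if emotion == "happy":
        return ("chat", "That's wonderful! Tell me more.")
    # Default: just keep listening.
    return ("listen", None)

preferences = {"favorite_song": "Clair de Lune"}
print(choose_action("sad", preferences))
```

A real social robot layers perception (face and voice analysis) and learned preferences on top of rules like this, but the basic shape, sense an emotional state and pick a response, is the same.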
Created by the French robotics company Aldebaran, which is now owned by SoftBank, the Japanese telecommunications firm, Pepper is one of the first humanoid robots designed specifically for companionship. As rudimentary as he is, he’s light-years more advanced than Kokoschka’s inanimate doll.
He “learns” from interacting with humans and is able to develop a unique relationship with them by remembering their tastes, desires, and even their moods. His makers describe him as “engaging and friendly.” Some people might find the chatty robot, at least some of the time, a bit too friendly. However, that’s easily corrected without the need for any technical skills whatsoever. All you have to do to program Pepper is tell him what you want, and he’ll respond accordingly and even remember your words for future reference.
But Pepper is only part of the equation; human nature fills in the rest. Regardless of our individual needs, and thanks to our hardwired emotional natures, most of the time we’re tempted to interact with him as though he were human. Research has shown that, like it or not, robots like Pepper can push our emotional buttons and elicit responses from us that defy our rational awareness that they’re not actually living things.
Unlike his industrial cousins, Pepper is a consumer robot meant to live with people in their homes. Unfortunately, he doesn’t do windows or carpets or dishes. He’s designed solely to be a companion. Currently priced at $1,900, he’s relatively affordable, and so far each month, when one thousand Peppers go on sale in Japan, they sell out within minutes. Pepper’s creator, roboticist Kaname Hayashi, is unabashed in his hope that the talkative robot may be able to banish loneliness. “We all feel lonely,” Hayashi has said. “We lie if we say we don’t.”3 Pepper not only simulates emotions and behaves as though he has empathy for our feelings, he elicits empathy in us, a phenomenon that evokes both delight and trepidation.
Popular culture has long been replete with books, stories, comics, movies, and games whose central theme is one of robots usurping humans and taking over the world. It’s hard to say if Pepper, in particular, will take over the world, but there’s no doubt that some version of a companion robot will be coming soon to homes throughout the industrialized world.
For those who doubt that robots are about to become ubiquitous, consider that we’re already surrounded by them. Not only are robots used widely in manufacturing and shipping, they go inside volcanoes testing for toxic gases, clean up radioactive materials after nuclear accidents, sweep for undersea mines, collect intelligence and test for explosive devices for the military, and explore other planets. Robots developed for oceanographic research descend into depths where no human could go, and some are designed to swim like fish or swarm like insects. They help doctors make diagnoses by almost instantaneously analyzing huge amounts of medical information, determine people’s eligibility for Medicare and Medicaid, and even perform surgery. The Japanese have launched (literally) a robot that can repair the International Space Station, and it’s anticipated that robots will soon build research stations on other planets, supervised by astronauts in orbiting space stations. In addition to the work they’ll do, these robots will also provide entertainment and companionship to astronauts on long journeys into space. Despite a lack of fanfare to announce their proliferation, robots are already an integral part of modern life.
Police organizations throughout the world are already employing robots to do especially dangerous jobs or to go where police officers can’t go. The Ohio state police are using a six-wheeled robot to explore tight spaces in search of bombs, and in India, police riot-control drones can watch crowd behavior and, if needed, fire pepper spray, paint pellets, or tear gas. Israeli counterterrorism forces have a land-based rover that packs a 9 mm Glock pistol and can enter a house, maneuver over obstacles, and even climb stairs, all the while transmitting information to operators through cameras and a two-way radio. Japanese police forces have a flying drone that can locate and shoot down hostile drones, and the Greek coast guard has a robotic flotation device that can rescue drowning refugees who are struggling to cross the Mediterranean Sea. In the Democratic Republic of the Congo, more humanoid-looking robotic traffic cops direct traffic at busy intersections, and at a prison in South Korea, a five-foot-tall robot patrols hallways and uses pattern-recognition algorithms to detect problem behaviors on the part of prisoners.4
By adding more and more capabilities that allow them to interact with humans, roboticists are greatly expanding the roles robots can play in everyday life. The government of Switzerland is testing a small, six-wheeled robot that can read human handwriting and negotiate outdoor environments well enough to deliver the mail.5
Interactive robots now help travelers at several airports across the world. In the Geneva Airport, a boxy, autonomous robot named Leo can greet you, check in your bags and deliver them to the correct handling area, give you up-to-the-minute information about your flight and boarding gate, and direct you to locations like the nearest restroom or ATM.6 At Amsterdam Airport Schiphol, a freewheeling robot with a slightly more humanoid appearance named Spencer interacts with passengers by helping them navigate the airport. Spencer can communicate in several languages and understands human behavior well enough to know how to navigate around people even in a crowded terminal.7 At Narita International Airport in Japan, Honda’s walking, talking robot Asimo greets weary international travelers before they line up for customs and usually manages to put a smile on their faces. Sometimes Asimo’s greeting involves jumping and kicking a soccer ball, a performance that inevitably draws applause. Travelers who venture toward the Bank of Tokyo’s branch at the airport’s first terminal will encounter NAO, a humanoid robot similar to Pepper, who provides currency exchange rates and guidance to airport facilities in Japanese, Chinese, and English. The two-foot-high robot, who stands at eye level on a counter, “blinks” its eyes and makes lifelike gestures as it engages in conversation with curious people of all ages and nationalities.8
The most important capability of airport robots is to recognize and process language and respond appropriately to humans. In this respect, one might consider them to be not much more than interesting novelties, but with each passing day, roboticists, using specially written algorithms, are adding more advanced, humanlike capabilities.
Robots, some embodied and some not, are taking over human jobs in a wide array of professions. Starting in September 2016, the home improvement store Lowe’s began testing the LoweBot customer assistance robot in eleven of its San Francisco Bay Area stores. The robot scans inventory and leads customers to whatever tool, appliance, or gadget they’re looking for simply by being asked. For those who feel uncomfortable talking to a robot, the LoweBot also has a touch screen for communication. Robots like the LoweBot are likely to become a common aspect of the shopping experience as they replace salespeople and cashiers. Officials for Lowe’s insist that their robot won’t lead to layoffs for their human customer service providers—it will free them of the most repetitive tasks so they can spend more one-on-one time with customers.9
But robots are able to do considerably more advanced tasks than just walking, talking, and processing language. Their abilities are growing at a rapid pace, and although we already depend on them in a host of online capacities, in many cases, the interaction is so seamless that we’re not even aware that a robot has played a role.
Many people are unaware that twenty-three million Twitter users are automated bots. If you follow a Twitter user named Olivia Taters, you’re following a tweet-generating bot created by Rob Dubbin, a writer for the comedy show The Colbert Report, and designed to speak in the voice of a typical teenager. Dubbin also created a bot called Real Human Praise, designed to churn out the kind of tweets sent by conservative news organizations. Both of his brainchildren have thousands of followers, many of whom think they’re following the Twitterized stream of consciousness of real people.10
This phenomenon is a real problem for Twitter, whose stock value is based on its audience reach as an advertising platform. Potential advertisers want to feel certain that they’re reaching real people, and when they lose confidence in Twitter’s real audience reach, they decline to buy advertising and the company’s stock value falls. This has already happened more than once, when news about the huge number of Twitterbots came out. When Elon Musk was in negotiations to buy Twitter, the number of Twitterbots became such an issue that it almost scuttled the deal. But equally important, people who use Twitter are emotionally engaged because they believe they’re following and communicating with other people. The fact that automated bots are able to mimic the communication patterns of real humans is only the tip of the iceberg in human-robot interaction (HRI). Robots are already fooling us in myriad ways.
Robots used to be confined to replacing human workers in tasks that are dirty, dangerous, and repetitive. Although they still fill that role in many industries, they’re starting to fool us into thinking that we’re dealing with humans in a growing array of capacities. Some bots are now doing things that until very recently we thought only humans could do, tricking even the most technically savvy among us into thinking they’re dealing with a real person.
In 2016, Georgia Institute of Technology College of Computing professor Ashok Goel added a new teaching assistant named Jill Watson to help with his course in knowledge-based artificial intelligence. The online course is required for a master’s degree in computer science, and each year, about three hundred students post roughly ten thousand messages in online forums, mostly questions about the material being studied—far too many for Dr. Goel and his several teaching assistants to respond to. Goel added the new teaching assistant to answer student questions and provide them with feedback to keep them on the right track.
After being trained by reviewing about forty thousand past questions that students had actually asked, the new TA was soon able to answer questions with 97 percent accuracy, and Goel put her to work fielding student questions online. The students’ response was uniformly positive. These were postgraduate computer science students, and what none of them realized was that Jill Watson was actually an online bot whose interaction was so seamless that it was virtually indistinguishable from a real teaching assistant. Goel’s bot is currently handling 40 percent of his students’ online questions.11
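For readers curious how a bot could field routine course questions, one common approach is retrieval: match a new question against a bank of previously answered ones, and respond only when the match is confident enough, deferring the rest to humans. The sketch below is a deliberately simplified illustration of that idea, not the actual method behind Jill Watson; the similarity measure, threshold, and sample questions are all my own assumptions.

```python
# A toy retrieval sketch of a Q&A teaching-assistant bot: find the
# most similar past question and reuse its stored answer, but only
# when the match clears a confidence threshold.

def tokenize(text):
    return set(text.lower().split())

def similarity(q1, q2):
    """Jaccard overlap between two questions' word sets (0.0 to 1.0)."""
    a, b = tokenize(q1), tokenize(q2)
    return len(a & b) / len(a | b) if a | b else 0.0

def answer(question, qa_bank, threshold=0.5):
    """Return the stored answer for the closest past question,
    or defer to a human TA when no match is confident enough."""
    best_q = max(qa_bank, key=lambda past: similarity(question, past))
    if similarity(question, best_q) >= threshold:
        return qa_bank[best_q]
    return "Deferred to a human TA."

qa_bank = {
    "when is the project due": "The project is due at the end of week 10.",
    "what readings cover semantic networks": "See the week 3 reading list.",
}
print(answer("when is the project due", qa_bank))
```

The confidence threshold is what keeps a bot like this from guessing: unfamiliar questions fall through to the human teaching assistants, which is consistent with Jill Watson handling only a portion of the students’ questions.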
The public has long been concerned about robots taking over jobs involving manual labor and thereby displacing human workers. This concern is well founded; robots have indeed taken over a large number of manufacturing jobs and are projected to take over many more, if not most of them, in the next few decades. What few realize is that they’re getting more and more adept at performing at near-human level in many white-collar jobs.
Fields like law and journalism are already being affected. In 2014, the highly respected news agency the Associated Press (AP) started publishing routine stories with no byline attached—that is, no human byline. The stories are generated by “journalist” bots and read just like typical AP stories, lacking only a subjective point of view. The fact is, millions of bot-written articles are published each year, not just by the AP but by large companies like Comcast and Yahoo. These publishers rely on bots produced by an article-generating company in Durham, North Carolina, called Automated Insights, and the vast majority of readers take for granted that they’re reading the work of traditional journalists. Not only have the bots mastered tone and style well enough to fool readers into thinking they’re human, they can generate up to two thousand articles per second.12 Journalist bots are especially useful in fields that require the crunching of large amounts of data, such as finance and sports reporting, and in 2015, Automated Insights produced 1.5 billion “narratives” used in published news accounts.
More recently, the chatbot ChatGPT, created by the company OpenAI, has been covered extensively in the news for its impressive ability to draw on vast amounts of text from the internet and cobble together essays that sound, to many people, as though they’ve been written by real people. Many observers are troubled by the possibility of this technology eliminating a plethora of white-collar jobs. A spirited debate has also emerged over ChatGPT’s accuracy (far from 100 percent) and the sometimes aggressive and malevolent content it generates in conversations with people. I’ll explore such phenomena in a later chapter.
Bots are quickly going beyond the written word and may soon be generating video journalism as well. The basic technology was unveiled in 2016 by two Georgia Tech computer science students, Daniel Castro and Vinay Bettadapura, and it handled the elements of video editing well enough to fool one into thinking that a human editor was involved. The students wrote an algorithm that turned twenty-six hours of raw vacation video footage into a thirty-eight-second highlight video in just three hours.
The bot analyzed the raw footage for location, image composition, symmetry, and color vibrancy, scoring each frame. It then chose the footage with the highest scores, favoring the most picturesque content. And the program, just like a real editor, can create videos that are highly reflective of the user’s tastes and desires. Bettadapura said, “We can tweak the weights in our algorithm based on the user’s aesthetic preferences. By incorporating facial recognition, we can further adapt the system to generate highlights that include people the user cares about.”13 Such an algorithm could easily be incorporated into the abilities of an embodied household robot, making it possible to produce highly aesthetic, meaningful videos of important events in an individual’s or a family’s life, such as a birthday party, and taking on much of the workload of professional videographers.
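The score-and-select idea at the heart of that algorithm can be sketched in a few lines: rate each clip on a handful of aesthetic features, weight those ratings according to the user’s preferences, and keep the top scorers. The feature names, weights, and sample clips below are invented for illustration; the students’ actual system was far more sophisticated.

```python
# A toy sketch of highlight selection: score each clip as a
# weighted sum of aesthetic features, then keep the best ones.

def score(clip, weights):
    """Weighted sum of per-clip aesthetic features (each 0.0 to 1.0)."""
    return sum(weights[feature] * clip[feature] for feature in weights)

def select_highlights(clips, weights, k=2):
    """Return the k highest-scoring clips, best first."""
    return sorted(clips, key=lambda c: score(c, weights), reverse=True)[:k]

# Tweaking these weights is how the system adapts to a user's tastes.
weights = {"composition": 0.4, "symmetry": 0.3, "color_vibrancy": 0.3}

clips = [
    {"name": "beach", "composition": 0.9, "symmetry": 0.7, "color_vibrancy": 0.8},
    {"name": "airport", "composition": 0.2, "symmetry": 0.3, "color_vibrancy": 0.1},
    {"name": "sunset", "composition": 0.8, "symmetry": 0.9, "color_vibrancy": 0.9},
]
print([c["name"] for c in select_highlights(clips, weights)])
# → ['sunset', 'beach']
```

Bettadapura’s comment about tweaking weights corresponds to changing the numbers in the `weights` dictionary: raise the weight on symmetry, and symmetric shots rise to the top of the highlight reel.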
Inventors keep pushing the limits of what machine intelligence can do, taking their inspiration from human beings. Robot research is steadily moving toward having robots do more and more things that we have always believed only humans could do.
Google Brain, Google’s artificial intelligence (AI) project, is trying to teach computers to be creative. In 2016, an initiative called the Magenta project fed four musical notes to a computer and “primed” it to write a piece of music. The result was a ninety-second song using the sound of a piano set to a rapid beat. The song is fairly repetitive, but it is not unlike the kind of electronic loops that serve as components of popular songs.14 The piece may not have anywhere near the complexity or nuance that a human composer could create, but one can easily see how such bot-generated tunes could be used in sampling by human DJs and songwriters, augmenting music as we know it. As the technology advances, it’s no longer so certain that humans will continue to be the only beings capable of producing music.
Google’s ambitions to teach machines to be creative don’t end with the Magenta project. It has created DeepDream, a program that sorts through, alters, and remixes photographs to create nightmarish, surreal works of art that are mesmerizing. Google uses a system mimicking human neural networks to create images that critics describe as hallucinogenic. Looking at the pictures, it’s hard to believe they weren’t done by an extremely imaginative artist.15
Adding “creative” algorithms to programs that can be incorporated into robots deepens the eerie impression that robots could have a bona fide inner life. This is especially powerful if such attributes are given to robots that are socially interactive because it invites us to have empathy for them and for their assumed inner lives. In fact, even without such humanlike cues, we’re already subject to feeling empathy for even simple robots.
As mentioned, people are hardwired to feel empathy for machines that display even the most rudimentary behavior, and we may even be hardwired to anthropomorphize, or project human attributes onto, the simplest of robots.
Floorance, Darth Roomba, Sarah, Alex, and Joe. These are some of the names people have given their Roombas, the simple, disklike robots that wander their homes vacuuming carpets and floors. When researchers at the Georgia Institute of Technology surveyed 379 Roomba owners to profile how early adopters of robot technology feel about their robots, they tapped into a cornucopia of anthropomorphic feelings about the Roomba.
More than half of the owners ascribed a gender to their Roombas, and about a third of them named them. A large percentage of them ascribed a personality to the humble appliance and talked to it, even praising it for doing a good job. Forty-three of them bought a costume for their Roombas and dressed them up as fictional characters.16 All this and the Roomba doesn’t come close to the level of sophistication of a social robot that can read and express emotions. The Georgia Tech study only highlights what psychologists already recognize: interactive robots are not really about robots—they’re mostly about us and the complicated emotions we bring to the human-robot interaction.
Copyright © 2023 by Eve Herold