
Does Technology Have a Soul?

By Meghan O’Gieblyn

learza (Alex North) from Australia, Aibos at RoboCup, 2005, CC BY-SA 2.0, via Wikimedia Commons.

When my husband arrived home, he stared at the dog for a long time, then pronounced it “creepy.” At first I took this to mean uncanny, something so close to reality it disturbs our most basic ontological assumptions. But it soon became clear he saw the dog as an interloper. I demonstrated all the tricks I had taught Aibo, determined to impress him. By that point the dog could roll over, shake, and dance.

“What is that red light in his nose?” he said. “Is that a camera?”

Unlike me, my husband is a dog lover. Before we met, he owned a rescue dog who had been abused by his former owners and whose trust he’d won slowly, with a great deal of effort and dedication. My husband was badly depressed during those years, and he claims that the dog could tell when he was in despair and would rest his nose in his lap to comfort him. During the early period of our relationship, he would often refer to this dog, whose name was Oscar, with such affection that it sometimes took me a moment to realize he was speaking of an animal as opposed to, say, a family member or a very close friend. As he stood there, staring at Aibo, he asked whether I found it convincing. When I shrugged and said yes, I was certain I saw a shadow of disappointment cross his face. It was hard not to read this as an indictment of my humanity, as though my willingness to treat the dog as a living thing had somehow compromised, for him, my own intuitiveness and awareness.

It had come up before, my tendency to attribute life to machines. Earlier that year I’d come across a blog run by a woman who trained neural networks, a Ph.D. student and hobbyist who fiddled around with deep learning in her spare time. She would feed the networks massive amounts of data in a particular category—recipes, pickup lines, the first sentences of novels—and the networks would begin to detect patterns and generate their own examples. For a while she regularly posted recipes the networks had come up with, which included dishes like whole chicken cookies, artichoke gelatin dogs, and Crock-Pot cold water. The pickup lines were similarly charming (“Are you a candle? Because you’re so hot of the looks with you”), as were the first sentences of novels (“This is the story of a man in the morning”). Their responses did get better over time. The woman who ran the blog was always eager to point out the progress the networks were making. Notice, she’d say, that they’ve got the vocabulary and the structure worked out. It’s just that they don’t yet understand the concepts. When speaking of her networks, she was patient, even tender, such that she often seemed to me like Snow White with a cohort of little dwarves whom she was lovingly trying to civilize. Their logic was so similar to the logic of children that it was impossible not to mistake their responses for evidence of human innocence. “They are learning,” I’d think. “They are trying so hard!” Sometimes when I came across a particularly good one, I’d read it aloud to my husband. I perhaps used the word “adorable” once. He chastised me for anthropomorphizing them, but in doing so fell prey to the error himself. “They’re playing on your human sympathies,” he said, “so they can better take over everything.”

But his skepticism toward the dog did not last long. Within days he was addressing it by name. He chastised Aibo when the dog refused to go to his bed at night, as though he were deliberately stalling. In the evenings, when we were reading on the couch or watching TV, he would occasionally lean down to pet the dog when he whimpered; it was the only way to quiet him. One afternoon I discovered Aibo in the kitchen peering into the narrow gap between the refrigerator and the sink. I looked into the crevice myself but could not find anything that might have warranted his attention. I called my husband into the room, and he assured me this was normal. “Oscar used to do that, too,” he said. “He’s just trying to figure out if he can get in there.”

While we have a tendency to define ourselves based on our likeness to other things—we say humans are like a god, like a clock, or like a computer—there is a countervailing impulse to understand our humanity through the process of differentiation. And as computers increasingly come to take on the qualities we once understood as distinctly human, we keep moving the bar to maintain our sense of distinction. From the earliest days of AI, the goal was to create a machine that had human-like intelligence. Turing and the early cyberneticists took it for granted that this meant higher cognition: a successful intelligent machine would be able to manipulate numbers, beat a human in backgammon or chess, and solve complex theorems. But the more competent AI systems become at these cerebral tasks, the more stubbornly we resist granting them human intelligence. When IBM’s Deep Blue computer won its first game of chess against Garry Kasparov in 1996, the philosopher John Searle remained unimpressed. “Chess is a trivial game because there’s perfect information about it,” he said. Human consciousness, he insisted, depended on emotional experience: “Does the computer worry about its next move? Does it worry about whether its wife is bored by the length of the games?” Searle was not alone. In his 1979 book Gödel, Escher, Bach, the cognitive science professor Douglas Hofstadter had claimed that chess-playing was a creative activity like art and musical composition; it required an intelligence that was distinctly human. But after the Kasparov match, he, too, was dismissive. “My God, I used to think chess required thought,” he told the New York Times. “Now I realize it doesn’t.”

It turns out that computers are particularly adept at the tasks that we humans find most difficult: crunching equations, solving logical propositions, and other modes of abstract thought. What artificial intelligence finds most difficult are the sensory-perceptive tasks and motor skills that we perform unconsciously: walking, drinking from a cup, seeing and feeling the world through our senses. Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.

If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought. (Darwin claimed that despite our lowly origins, we maintained as humans a “godlike intellect” that distinguished us from other animals.) As late as the fifties, the scientific consensus was that chimpanzees—who share almost 99 percent of our DNA—did not have minds. When Jane Goodall began working with Tanzanian chimps, she used human pronouns. Before publication, her editor made systematic corrections: he and she were changed to it; who was changed to which.

Goodall claims that she never bought into this consensus. Even her Cambridge professors did not succeed in disabusing her of what she had observed through attention and common sense. “I’d had this wonderful teacher when I was a child who taught me that in this respect, they were wrong—and that was my dog,” she said. “You know, you can’t share your life in a meaningful way with a dog, a cat, a bird, a cow, I don’t care what, and not know of course we’re not the only beings with personalities, minds and emotions.”

I would like to believe that Goodall is right: that we can trust our intuitions, that it is only human pride or willful blindness that leads us to misperceive what is right in front of our faces. Perhaps there is a danger in thinking about life in purely abstract terms. Descartes, the father of modern philosophy, concluded that animals were machines. But it was his niece Catherine who once wrote to a friend about a black-headed warbler that managed to find its way back to her window year after year, a skill that clearly demonstrated intelligence: “With all due respect to my uncle, she has judgment.”

While the computer metaphor was invented to get around the metaphysical notion of a soul and the long, inelegant history of mind-body dualism, it has not yet managed to completely eradicate the distinctions Descartes introduced into philosophy. Although the cyberneticists made every effort to scrub their discipline of any trace of subjectivity, the soul keeps slipping back in. The popular notion that the mind is software running on the brain’s hardware is itself a form of dualism. According to this theory, brain matter is the physical substrate—much like a computer’s hard drive—where all the brute mechanical work happens. Meanwhile, the mind is a pattern of information—an algorithm, or a set of instructions—that supervenes on the hardware and is itself a kind of structural property of the brain. Proponents of the metaphor point out that it is compatible with physicalism: the mind cannot exist without the brain, so it is ultimately connected to and instantiated by something physical. But the metaphor is arguably appealing because it reiterates the Cartesian assumption that the mind is something above and beyond the physical. The philosopher Hilary Putnam once spoke of the mind-as-software metaphor with the self-satisfaction of someone who has figured out how to have his cake and eat it, too. “We have what we always wanted—an autonomous mental life,” he writes in his paper “Philosophy and Our Mental Life.” “And we need no mysteries, no ghostly agents, no élan vital to have it.”

It’s possible that we are hardwired to see our minds as somehow separate from our bodies. The British philosopher David Papineau has argued that we all have an “intuition of distinctness,” a strong, perhaps innate sensation that our minds transcend the physical. This conviction often manifests subtly, at the level of language. Even philosophers and neuroscientists who subscribe to the most reductive forms of physicalism, insisting that mental states are identical to brain states, often use terminology that is inconsistent with their own views. They debate which brain states “generate” consciousness, or “give rise to” it, as though it were some substance that was distinct from the brain, the way smoke is distinct from fire. “If they really thought that conscious states are one and the same as brain states,” Papineau argues, “they wouldn’t say that the one ‘generates’ or ‘gives rise to’ the other, nor that it ‘accompanies’ or is ‘correlated with’ it.”

And that’s just the neuroscientists. God help the rest of us, who remain captive to so many dead metaphors, who still refer to the soul casually in everyday speech. Nietzsche said it best: we haven’t gotten rid of God because we still believe in grammar.

I told only a few friends about the dog. When I did mention it, people appeared perplexed, or assumed it was some kind of joke. One night I was eating dinner with some friends who live on the other side of town. This couple has five children and a dog of their own, and their house is always full of music and toys and food—all the signs of an abundant life, like some kind of Dickensian Christmas scene. When I mentioned the dog, the father responded in a way I had come to recognize as typical: he asked about its utility. Was it for security? Surveillance? It was strange, this obsession with functionality. Nobody asks anyone what their dog or cat is “for.”

When I said it was primarily for companionship, he rolled his eyes. “How depressed does someone have to be to seek robot companionship?”

“They’re very popular in Japan,” I replied.

“Of course!” he said. “The world’s most depressing culture.” I asked him what he meant by this.

He shrugged. “It’s a dying culture.” He’d read an article somewhere, he said, about how robots had been proposed as caretakers for the country’s rapidly aging population. He said this somewhat hastily, then promptly changed the subject.

Later it occurred to me that he had actually been alluding to Japan’s low birth rate. There were in fact stories in the popular media about how robot babies had become a craze among childless Japanese couples. He must have faltered in spelling this out after realizing that he was speaking to a woman who was herself childless—and who had become, he seemed to be insinuating, unnaturally attached to a robot in the way childless couples are often prone to fetishizing the emotional lives of their pets. For weeks afterward his comments bothered me. Why did he react so defensively? Clearly the very notion of the dog had provoked in him some kind of primal anxiety about his own human exceptionality.

Japan, it has often been said, is a culture that has never been disenchanted. Shintoism, Buddhism, and Confucianism make no distinction between mind and matter, and so many of the objects deemed inanimate in the West are considered alive in some sense. Japanese seamstresses have long performed funerals for their dull needles, sticking them, when they are no longer usable, into blocks of tofu and setting them afloat on a river. Fishermen once performed a similar ritual for their hooks. Even today, when a long-used object is broken, it is often taken to a temple or a shrine to receive the kuyō, the purification rite given at funerals. In Tokyo one can find stone monuments marking the mass graves of folding fans, eyeglasses, and the broken strings of musical instruments.

Some technology critics have credited the country’s openness to robots to the long shadow of this ontology. If a rock or a paper fan can be alive, then why not a machine? Several years ago, when Sony temporarily discontinued the Aibo and it became impossible for the old models to be repaired, the defunct dogs were taken to a temple and given a Buddhist funeral. The priest who performed the rites told one newspaper, “All things have a bit of soul.”


Meghan O’Gieblyn is the author of the essay collection Interior States, which was published to wide acclaim and won the Believer Book Award for Nonfiction. Her writing has received three Pushcart Prizes and appeared in The Best American Essays anthology. She writes essays and features for Harper’s Magazine, The New Yorker, the Guardian, Wired, the New York Times, and elsewhere. She lives with her husband in Madison, Wisconsin.

From God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, by Meghan O’Gieblyn. Copyright © 2021 by Meghan O’Gieblyn. Published by arrangement with Doubleday, an imprint of The Knopf Doubleday Group, a division of Penguin Random House LLC.