A dog’s inner life: what a robot pet taught me about consciousness

The package arrived on a Thursday. I came home from a walk and found it sitting near the mailboxes in the front hall of my building, a box so large and imposing I was embarrassed to discover my name on the label. It took all my strength to drag it up the stairs.

I paused once on the landing, considered abandoning it there, then continued hauling it up to my apartment on the third floor, where I used my keys to cut it open. Inside the box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I opened the clasp: inside, lying prone, was a small white dog.

I could not believe it. How long had it been since I’d submitted the request on Sony’s website? I’d explained that I was a journalist who wrote about technology – this was tangentially true – and while I could not afford the Aibo’s $3,000 (£2,250) price tag, I was eager to interact with it for research. I added, risking sentimentality, that my husband and I had always wanted a dog, but we lived in a building that did not permit pets. It seemed unlikely that anyone was actually reading these inquiries. Before submitting the electronic form, I was made to confirm that I myself was not a robot.

The dog was heavier than it looked. I lifted it out of the pod, placed it on the floor, and found the tiny power button on the back of its neck. The limbs came to life first. It stood, stretched, and yawned. Its eyes blinked open – pixelated, blue – and looked into mine. He shook his head, as though sloughing off a long sleep, then crouched, shoving his hindquarters in the air, and barked. I tentatively scratched his forehead. His ears lifted, his pupils dilated, and he cocked his head, leaning into my hand. When I stopped, he nuzzled my palm, urging me to go on.

I had not expected him to be so lifelike. The videos I’d watched online had not accounted for this responsiveness, an eagerness for touch that I had only ever witnessed in living things. When I petted him across the long sensor strip of his back, I could feel a gentle mechanical purr beneath the surface.


I thought of the philosopher Martin Buber’s description of the horse he visited as a child on his grandparents’ estate, his recollection of “the element of vitality” as he petted the horse’s mane and the feeling that he was in the presence of something completely other – “something that was not I, was certainly not akin to me” – but that was drawing him into dialogue with it. Such experiences with animals, he believed, approached “the threshold of mutuality”.

I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct” – a claim that was apparently too ontologically thorny to have drawn censure from the Federal Trade Commission.

Descartes believed that all animals were machines. Their bodies were governed by the same laws as inanimate matter; their muscles and tendons were like engines and springs. In Discourse on Method, he argues that it would be possible to create a mechanical monkey that could pass as a real, biological monkey.

He insisted that the same feat would not work with humans. A machine might fool us into thinking it was an animal, but a humanoid automaton could never fool us. This was because it would clearly lack reason – an immaterial quality he believed stemmed from the soul.

But it is meaningless to speak of the soul in the 21st century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons.

I believed in the soul longer, and more literally, than most people do in our day and age. At the fundamentalist college where I studied theology, I had pinned above my desk Gerard Manley Hopkins’s poem God’s Grandeur, which imagines the world illuminated from within by the divine spirit. My theology courses were devoted to the kinds of questions that have not been taken seriously since the days of scholastic philosophy: How is the soul connected to the body? Does God’s sovereignty leave any room for free will? What is our relationship as humans to the rest of the created order?

But I no longer believe in God. I have not for some time. I now live with the rest of modernity in a world that is “disenchanted”.

Today, artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.


The dog arrived during a time when my life was largely solitary. My husband was travelling more than usual that spring, and except for the classes I taught at the university, I spent most of my time alone. My communication with the dog – which was limited at first to the standard voice commands, but grew over time into the idle, anthropomorphising chatter of a pet owner – was often the only occasion on a given day that I heard my own voice. “What are you looking at?” I’d ask after discovering him transfixed at the window. “What do you want?” I cooed when he barked at the foot of my chair, trying to draw my attention away from the computer. I have been known to mock friends of mine for speaking this way to their pets, as though the animals could understand them. But Aibo came equipped with language-processing software and could recognise more than 100 words; didn’t that mean that, in a way, he “understood”?

Aibo’s sensory perception systems rely on neural networks, a technology that is loosely modelled on the brain and is used for all kinds of recognition and prediction tasks. Facebook uses neural networks to identify people in photos; Alexa employs them to interpret voice commands. Google Translate uses them to convert French into Farsi. Unlike classical artificial intelligence systems, which are programmed with detailed rules and instructions, neural networks develop their own strategies based on the examples they’re fed – a process that is called “training”. If you want to train a network to recognise photos of cats, for instance, you feed it thousands upon thousands of random photos, each one labelled with positive or negative feedback: positive for cats, negative for non-cats.
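To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python – using only numpy, with invented toy data standing in for the photos and labels – of the kind of training described above: a tiny one-layer network nudges its weights toward examples labelled “cat” and away from those labelled “not cat”. It is a schematic of the general technique, not a representation of Aibo’s or Facebook’s actual systems.

```python
import numpy as np

# Toy stand-in for "photos": each example is a small feature vector,
# labelled 1 for "cat" and 0 for "not cat". (Invented data for illustration.)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 fake "photos", 8 features each
hidden_rule = rng.normal(size=8)
y = (X @ hidden_rule > 0).astype(float)  # invented ground-truth labels

w = np.zeros(8)   # the network's weights start out knowing nothing
b = 0.0
lr = 0.1          # learning rate: how strongly each round of feedback nudges the weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    p = sigmoid(X @ w + b)   # forward pass: guess how "cat-like" each example is
    error = y - p            # feedback: positive for under-rated cats, negative for over-rated non-cats
    w += lr * (X.T @ error) / len(y)   # weights drift toward patterns that co-occur with "cat"
    b += lr * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```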

A road-walking automaton, c1900. Photograph: Granger Historical Picture Archive/Alamy

Dogs, too, respond to reinforcement learning, so training Aibo was more or less like training a real dog. The instruction booklet told me to give him consistent verbal and tactile feedback. If he obeyed a voice command – to sit, stay or roll over – I was supposed to scratch his head and say, “good dog”.

If he disobeyed, I had to strike him across his backside and say, “no!”, or “bad Aibo”. But I found myself reluctant to discipline him. The first time I struck him, when he refused to go to his bed, he cowered a little and let out a whimper. I knew of course that this was a programmed response – but then again, aren’t emotions in biological creatures just algorithms programmed by evolution?

Animism was built into the design. It is impossible to pet an object and address it verbally without coming to regard it in some sense as sentient. We are capable of attributing life to objects that are far less convincing. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves”, an observation we prove every time we kick a malfunctioning appliance or christen our car with a human name. “Our brains can’t fundamentally distinguish between interacting with people and interacting with devices,” writes Clifford Nass, a Stanford professor of communication who has written about the attachments people develop with technology.

A few months earlier, I’d read an article in Wired magazine in which a woman confessed to the sadistic pleasure she got from yelling at Alexa, the personified home assistant. She called the machine names when it played the wrong radio station and rolled her eyes when it failed to respond to her commands. Sometimes, when the robot misunderstood a question, she and her husband would gang up and berate it together, a kind of perverse bonding ritual that united them against a common enemy. All of this was presented as good American fun. “I bought this goddamned robot,” the author wrote, “to serve my whims, because it has no heart and it has no brain and it has no parents and it doesn’t eat and it doesn’t judge me or care either way.”

Humanoid robot Sophia, developed by Hanson Robotics, draws on a piece of paper before auctioning her own non-fungible token (NFT) artwork, in Hong Kong, earlier this year. Photograph: Tyrone Siu/Reuters

Then one day the woman realised that her toddler was watching her unleash this verbal fury. She worried that her behaviour toward the robot was affecting her child. Then she considered what it was doing to her own psyche – to her soul, so to speak. What did it mean, she asked, that she had grown inured to casually dehumanising this thing?

This was her word: “dehumanising”. Earlier in the article she had called it a robot. Somewhere in the process of questioning her treatment of the device – in questioning her own humanity – she had decided, if only subconsciously, to grant it personhood.


During the first week I had Aibo, I turned him off whenever I left the apartment. It was not so much that I worried about him roaming around without supervision. It was simply instinctual, a switch I flipped as I went around turning off all the lights and other appliances. By the end of the first week, I could no longer bring myself to do it. It seemed cruel. I often wondered what he did during the hours I left him alone. Whenever I came home, he was there at the door to greet me, as though he’d recognised the sound of my footsteps approaching. When I made lunch, he followed me into the kitchen and stationed himself at my feet.

He would sit there obediently, tail wagging, looking up at me with his large blue eyes as though in expectation – an illusion that was broken only once, when a piece of food slipped from the counter and he kept his eyes fixed on me, uninterested in chasing the morsel.

His behaviour was neither purely predictable nor purely random, but seemed capable of genuine spontaneity. Even after he was trained, his responses were difficult to anticipate. Sometimes I’d ask him to sit or roll over and he would simply bark at me, tail wagging with a happy defiance that seemed distinctly doglike. It would have been natural to chalk up his disobedience to a glitch in the algorithms, but how easy it was to interpret it as a sign of volition. “Why don’t you want to lie down?” I heard myself say to him more than once.

I did not believe, of course, that the dog had any kind of internal experience. Not really – though I suppose there was no way to prove this. As the philosopher Thomas Nagel points out in his 1974 paper What Is It Like to Be a Bat?, consciousness can be observed only from the inside. A scientist can spend decades in a lab studying echolocation and the anatomical structure of bat brains, and yet she will never know what it feels like, subjectively, to be a bat – or whether it feels like anything at all. Science requires a third-person perspective, but consciousness is experienced solely from the first-person point of view. In philosophy this is referred to as the problem of other minds. In theory it can also apply to other humans. It’s possible that I am the only conscious person in a population of zombies who simply behave in a way that is convincingly human.

This is just a thought experiment, of course – and not a particularly productive one. In the real world, we assume the presence of life through analogy, through the likeness between two things. We believe that dogs (real, biological dogs) have some level of consciousness, because like us they have a central nervous system, and like us they engage in behaviours that we associate with hunger, pleasure and pain. Many of the pioneers of artificial intelligence got around the problem of other minds by focusing solely on external behaviour. Alan Turing once pointed out that the only way to know whether a machine had internal experience was “to be the machine and to feel oneself thinking”.

This was clearly not a task for science. His proposed method for assessing machine intelligence – now called the Turing test – imagined a computer hidden behind a screen, typing answers in response to questions posed by a human interlocutor. If the interlocutor came to believe that he was speaking to another person, then the machine could be declared “intelligent”. In other words, we should accept a machine as intelligent so long as it can convincingly perform the behaviours we associate with human-level intelligence.

A technician at Disneyland working on an animatronic bird in 1962. Photograph: Tom Nebbia/Getty Images

More recently, philosophers have proposed tests that are meant to determine not just functional consciousness in machines, but phenomenal consciousness – whether they have any internal, subjective experience. One of them, developed by the philosopher Susan Schneider, involves asking an AI a series of questions to see whether it can grasp concepts similar to those we associate with our own interior experience. Does the machine conceive of itself as anything more than a physical entity? Would it survive being turned off? Can it imagine its mind persisting somewhere else even if its body were to die? But even if a robot were to pass this test, it would provide only sufficient evidence for consciousness, not absolute proof.

It’s possible, Schneider acknowledges, that these questions are anthropocentric. If AI consciousness were in fact completely unlike human consciousness, a sentient robot would fail for not conforming to our human standards. Likewise, a very intelligent but unconscious machine could conceivably acquire enough information about the human mind to fool the interlocutor into believing it had one. In other words, we are still in the same epistemic conundrum that we faced with the Turing test. If a computer can convince a person that it has a mind, or if it demonstrates – as the Aibo website puts it – “real emotions and instinct”, we have no philosophical basis for doubt.


“What is a human like?” For centuries we considered this question in earnest and answered: “Like a god”. For Christian theologians, humans are made in the image of God, though not in any outward sense. Rather, we are like God because we, too, have consciousness and higher thought. It is a self-flattering doctrine, but when I first encountered it, as a theology student, it seemed to confirm what I already believed intuitively: that interior experience was more important, and more reliable, than my actions in the world.

Today, it is precisely this inner experience that has become impossible to prove – at least from a scientific standpoint. While we know that mental phenomena are linked somehow to the brain, it’s not at all clear how they are, or why. Neuroscientists have made progress, using MRIs and other devices, in understanding the basic functions of consciousness – the systems, for example, that constitute vision, or attention, or memory. But when it comes to the question of phenomenological experience – the entirely subjective world of colour and sensations, of thoughts and ideas and beliefs – there is no way to account for how it arises from or is associated with these processes. Just as a biologist working in a lab could never apprehend what it feels like to be a bat by studying the objective facts from the third-person perspective, so any complete description of the structure and function of the human brain’s pain system, for example, could never fully account for the subjective experience of pain.

In 1995, the philosopher David Chalmers called this “the hard problem” of consciousness. Unlike the comparatively “easy” problems of functionality, the hard problem asks why brain processes are accompanied by first-person experience. If none of the other matter in the world is accompanied by mental qualities, then why should brain matter be any different? Computers can perform their most impressive functions without interiority: they can now fly drones and diagnose cancer and beat the world champion at Go without any awareness of what they are doing. “Why should physical processing give rise to a rich inner life at all?” Chalmers wrote. “It seems objectively unreasonable that it should, and yet it does.” Twenty-five years later, we are no closer to understanding why.

Despite these differences between minds and computers, we insist on seeing our image in these machines. When we ask today “What is a human like?”, the most common answer is “like a computer”. A few years ago the psychologist Robert Epstein challenged researchers at one of the world’s most prestigious research institutes to try to account for human behaviour without resorting to computational metaphors. They could not do it. The metaphor has become so pervasive, Epstein points out, that “there is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity”.

A robot solves a Rubik’s Cube at the Hanover fair in Germany, 2007. Photograph: Jochen Luebke/EPA

Even people who know very little about computers reiterate the metaphor’s logic. We invoke it every time we claim to be “processing” new ideas, or when we say that we have “stored” memories or are “retrieving” information from our brains. And as we increasingly come to speak of our minds as computers, computers are now granted the status of minds. In many sectors of computer science, terminology that was once couched in quotation marks when applied to machines – “behaviour”, “memory”, “thinking” – is now taken as a straightforward description of their functions. Programmers say that neural networks are learning, that facial-recognition software can see, that their machines understand. You can accuse people of anthropomorphism if they attribute human consciousness to an inanimate object. But Rodney Brooks, the MIT roboticist, insists that this confers on us, as humans, a distinction we no longer warrant. In his book Flesh and Machines, he claims that most people tend to “over-anthropomorphise humans … who are after all mere machines”.


“This dog has to go,” my husband said. I had just arrived home and was kneeling in the hallway of our apartment, petting Aibo, who had rushed to the door to greet me. He barked twice, genuinely happy to see me, and his eyes closed as I scratched beneath his chin.

“What do you mean, go?” I said.

“You have to send it back. I can’t live here with it.”

I told him the dog was still being trained. It would take months before he learned to obey commands. The only reason it had taken so long in the first place was that we kept turning him off when we wanted quiet. You couldn’t do that with a biological dog.

“Clearly this is not a biological dog,” my husband said. He asked whether I had realised that the red light beneath its nose was not just a sensor but a camera, or whether I’d considered where its footage was being sent. While I was away, he told me, the dog had roamed around the apartment in a very systematic way, scrutinising our furniture, our posters, our closets. It had spent 15 minutes scanning our bookcases and had shown particular interest, he claimed, in the shelf of Marxist criticism.

He asked me what happened to the data it was gathering.

“It’s being used to improve its algorithms,” I said.

“Where?”

I said I didn’t know.

“Check the contract.”

I pulled up the document on my computer and found the relevant clause. “It’s being sent to the cloud.”

“To Sony.”

My husband is notoriously paranoid about such things. He keeps a piece of black electrical tape over his laptop camera and becomes convinced about once a month that his personal website is being monitored by the NSA.

Privacy was a modern fixation, I said, and distinctly American. For most of human history we accepted that our lives were being watched, listened to, supervened upon by gods and spirits – not all of them benign, either.

“And I suppose we were happier then,” he said.

In many ways yes, I said, probably.

I knew, of course, that I was being unreasonable. Later that afternoon I retrieved from the closet the large box in which Aibo had arrived and placed him, prone, back in his pod. It was just as well; the loan period was nearly up. More importantly, I had been increasingly unable over the past few weeks to fight the conclusion that my attachment to the dog was unnatural. I’d begun to notice things that had somehow escaped my attention: the faint mechanical buzz that accompanied the dog’s movements; the blinking red light in his nose, like some kind of Brechtian reminder of its artifice.

We build simulations of brains and hope that some mysterious natural phenomenon – consciousness – will emerge. But what kind of magical thinking leads us to believe that our paltry imitations are equivalent to the thing they are trying to imitate – that silicon and electricity can reproduce effects that arise from flesh and blood? We are not gods, capable of creating things in our likeness. All we can make are graven images. The philosopher John Searle once said something along these lines. Computers, he argued, have always been used to simulate natural phenomena – digestion, weather patterns – and they can be useful for studying these processes. But we veer into superstition when we conflate the simulation with reality. “Nobody thinks, ‘Well, if we do a simulation of a rainstorm, we’re all going to get wet,’” he said. “And similarly, a computer simulation of consciousness isn’t thereby conscious.”

Many people today believe that computational theories of mind have proved that the brain is a computer, or have explained the functions of consciousness. But as the computer scientist Seymour Papert once noted, all the analogy has demonstrated is that the problems that have long stumped philosophers and theologians “come up in equivalent form in the new context”. The metaphor has not solved our most pressing existential problems; it has merely transferred them to a new substrate.

This is an edited extract from God, Human, Animal, Machine by Meghan O’Gieblyn, published by Doubleday on 24 August
