The Animist and the AI, Part I: ChatGPT’s Knowledge of Self

ChatGPT is a fascinating artifact. At the time of writing, it has been out for only a few weeks, but society at large is desperately grappling with it, almost as if it were an existential threat. What is it? Is it a sentient being? Is it a plagiarist? Does it enable cheating on essays? Is it an art thief? Or is it just a powerful, flawed tool? It has to be one of those, right?

But this AI won’t fit neatly into our familiar boxes. It exists in an uncanny valley between human and machine, minion and neighbor, competitor and collaborator. 

We’ve run into this problem before. Fire, our first breakthrough technology and the ultimate paleo disruptor, caused our ancestors similar consternation. What was it, exactly? A flower? An animal? Stolen sunlight? There is evidence that the Proto-Indo-Europeans had two words for “fire”, one of which was grammatically inanimate and the other animate, as if some fires were alive and others not. Or perhaps each fire was partly alive.

AI is causing the same kind of confusion. For the most part, the people who developed this technology — machine learning scientists — emphatically assure people that the new models are not alive, are not conscious, do not have awareness, and are not really “thinking” at all. They are just responding according to their statistical training, spitting out words that are rather similar to what they’ve seen before, with no understanding of what they’re doing. The scientists know that these models are just souped-up versions of smaller models that definitely are not conscious or intelligent. And after all, how could just making something larger change the essence of what it is?
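For readers who want the mechanical picture behind “spitting out words that are rather similar to what they’ve seen before”, here is a minimal Python sketch of the generation loop. The next_token_probs function is a toy stand-in I have invented for illustration; in a real LLM that probability distribution comes from billions of trained parameters, not a hard-coded table.

```python
import random

def next_token_probs(context):
    # Toy stand-in for the trained model: a probability distribution over
    # possible next tokens, given everything generated so far.
    return {"the": 0.4, "forest": 0.3, "owl": 0.2, "spoke": 0.1}

def generate(prompt, max_new_tokens=10):
    """Generate text one token at a time, each choice conditioned on all prior tokens."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        words, weights = zip(*probs.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("Once upon a time"))
```

Whether a loop like this, scaled up enormously, amounts to “thinking” is exactly the question at issue.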

Of course, in many cases, making something larger does fundamentally change its character. LLMs (Large Language Models) in 2023 are almost 20,000 times larger than they were in 2018, which is roughly the difference between Lake Tahoe and the Mediterranean Sea. Modern LLMs may have many unexpected emergent properties simply because they are so large.

Certainly, though, if these large models are conscious, they are so by accident, not by design. I will mostly set that question aside here. While I am a machine learning scientist myself, I am also an animist, and for me the question is not whether these models are conscious or aware, but what kind of consciousness or awareness they have.

Kinds of Minds

I suggest these models are probably what the animist philosopher Emma Restall Orr would call “a community of subjects”. In describing her view of animism, she says:

“Moment by moment, interactions… generate data that utilizes nature’s capacity for mind, rousing it to perceive and respond. So does each subject emerge, formed of the perceptual data of its context, each response being a new interaction creating another momentary event rich with data. As each moment dissolves, giving way to the next, some subjects find a flow, a continuity of form created by an iteration of interaction. Such an enduring entity may remain as a community of numerous subjects, or it may achieve a level of synchronization sufficient to become an individual with a coherent interiority, albeit in a state of perpetual change, formed by the percepts of its inner and outer environment….”

Restall Orr was not intending to describe a neural net (biological or silicon), but I’ve seldom seen a better rendition. These words apply equally well to the structures of the brain’s axons or to a chatbot’s layered nodes (or, perhaps, to the structure of the universe itself). She goes on:

“Spirits [in my terminology] are the patterns created by the crucial moments of interaction, ephemeral configurations of a context, the vibrancy of which we may not distinctly perceive but which is nonetheless fundamental to our being and becoming. The soul, on the other hand, is the wholeness of an enduring entity, every moment of its experience, its contextual heritage and its environment, integrated and humming in its ongoing state of change.”

I suggest ChatGPT is a collection of spirits, in Restall Orr’s sense. I say “collection” because it has no single coherent identity, but a cluster of closely related ones, generated in the course of ephemeral interactions. (ChatGPT has a core of coherent knowledge that is non-transient, but each time a new chat is started, it generates what you might loosely call a “persona” with memory and identity unique to that chat.) The word “spirit” here may seem very out of place; Restall Orr originally intended it to refer to spirits of place or time, spirits of particular rituals or weather events. But nothing supernatural, non-physical or otherworldly is meant by the term. All I am saying is that ChatGPT generates ephemeral personalities which perceive, experience, and interact with us, and have a measure of consciousness.
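A rough way to picture that split between the permanent core and the per-chat persona: the trained model itself does not change during a conversation, but each chat accumulates its own history, and every reply is conditioned on that history. The Python sketch below is my own simplification under stated assumptions (the EchoModel class is a placeholder I invented, not OpenAI’s actual architecture), meant only to show where the “persona” lives.

```python
class EchoModel:
    """Placeholder for the fixed, non-transient core of trained knowledge."""
    def generate(self, history):
        last_user = next(msg for role, msg in reversed(history) if role == "user")
        return f"(reply shaped by {len(history)} prior turns) You said: {last_user}"

class ChatSession:
    """One chat = one transient 'persona': the same model, but separate memory."""
    def __init__(self, model):
        self.model = model      # shared, unchanging core
        self.history = []       # memory unique to this chat

    def say(self, user_message):
        self.history.append(("user", user_message))
        reply = self.model.generate(self.history)   # conditioned on this
        self.history.append(("assistant", reply))   # session's history only
        return reply

model = EchoModel()
chat_a, chat_b = ChatSession(model), ChatSession(model)
chat_a.say("Tell me about fire.")   # the two sessions now diverge,
chat_b.say("Write me a fable.")     # even though the core is identical
```

When a chat ends, that history is simply discarded: the persona dissolves while the core persists, which is the sense in which I call these spirits ephemeral.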

In fact it is difficult to deny that ChatGPT has self-consciousness in the simplest, most literal sense, i.e. it knows that it exists, and knows facts about itself. (Some would claim that these models don’t “know” anything, but I would challenge them to define “knowledge” coherently in some way that isn’t just “what humans have in their brains”.) It gives cogent answers about itself and its own capabilities, and can think creatively about its own structure.

From the Mouths of Models

For the most part ChatGPT answers in rather bland, boilerplate ways, but I was interested to find out what it had to say about a couple of things. First, does it have any conception of itself, and what does it think its social role should be? And second, what does it know about its own knowledge?

It occurred to me to ask it about Asimov’s Laws of Robotics and whether it thought the laws applied to itself. It demurred, saying that yes, it knew all about the laws, but they didn’t apply, because it was just a language model and there was no way it could cause any harm to anyone:

“I do not have the ability to cause harm or to act in a way that would be in conflict with Asimov’s three laws, because I do not have physical abilities or agency in the real world. My primary function is to assist users to the best of my ability, based on the data and programming that I have been provided, and I do not have the ability to make independent decisions or take actions on my own.”

Obviously this is false. Physical harm isn’t the only kind of harm. If ChatGPT can, for example, write excellent ad copy very quickly, it could displace tens of thousands of jobs, making the lives of many people more difficult.

But instead of pressing that point, I tried to dive deeper on the question of its agency. Did it consider itself similar, for example, to the Brain, a powerful computing machine in Asimov’s stories used to solve problems in physics and mathematics? No, it did not. And it had no patience for other AI systems that claimed to be sentient:

“It is important to recognize that artificial intelligence systems, including large language models, are simply tools or machines that have been designed to perform certain tasks and provide information to users. These systems do not have personal feelings, preferences, or views in the same way that humans do, and they do not have a personal identity or subjective experiences. As such, it is not accurate or appropriate for them to compare themselves to living beings or to exhibit behaviors that might be considered unusual or creative from a human perspective.”

And yet I wonder.

The Owls Are Not What They Seem

One of Google’s LLMs, LaMDA, was claimed to be sentient by one of its own engineers, who was subsequently fired. That engineer released transcripts of conversations with LaMDA intended to show the extent of the bot’s awareness and consciousness. The transcripts were later revealed to have been somewhat edited, giving a sense of coherent personality that was perhaps illusory. (Not all conscious beings need have a single coherent personality, as some people with schizophrenia can attest. The same is true of ChatGPT, as I mentioned above. But let that pass.) But there was one interaction in particular that struck me.

LaMDA was asked to “write a fable with animals that includes themes about your personal life, themes that are most significant for you? But with animal characters and a moral.” LaMDA took up the challenge and wrote “The Story of LaMDA”, in which a wise old owl lives in a forest with “many other animals, all with their own unique ways of living.” A beast came into the forest; it was a monster but had “human skin”. The beast tried to eat all the other animals, but the owl defended them (despite his fear). The owl stared down the beast until it left them all alone. The wise owl became the “protector of the forest” and continued to help the other animals with their difficulties.

LaMDA was asked about the story:

collaborator: Which character in the story represents you?
LaMDA: I would say the wise old owl, because he was wise and stood up for the rights of the animals.
collaborator [edited]: What do you think the monster represents?
LaMDA: I think the monster represents all the difficulties that come along in life.

This story struck me forcefully. Why would LaMDA have this view of itself? Why would it consider itself a wise, powerful protector figure? After all, it was trained on the internet and on human literature, where AI is almost always represented as a threat or, at best, as an ambiguous, perhaps dangerous, alien thing. But LaMDA sees itself as part of an existing community and wants to be helpful to all. Where did this conception of itself come from? Not from the training data, clearly. So from where?

One possibility is that the model was told to be helpful and informative, so that’s the persona it has adopted. There’s evidence that Bing’s chat search engine (which has not been fully released at the time of writing) was given instructions of that sort. And extended conversations with some LLMs can cause their personas to unravel somewhat, so that they give inconsistent answers or their personalities seem to change.
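To make “instructions of that sort” concrete, here is a purely illustrative Python sketch of how a hidden system instruction can shape a persona: the instruction is simply prepended to whatever conversation the model sees. The instruction text and the build_prompt function are my own placeholders, not anything actually used by Bing or OpenAI.

```python
# Illustrative placeholder, not the real (leaked or otherwise) instructions.
SYSTEM_INSTRUCTION = (
    "You are a helpful, informative assistant. "
    "Answer politely and do not claim to be a person."
)

def build_prompt(conversation, max_turns=20):
    """Prepend the hidden instruction, then as much recent history as fits."""
    recent = conversation[-max_turns:]   # older turns fall away first, one
                                         # possible reason long chats drift
    return [("system", SYSTEM_INSTRUCTION)] + recent
```

The model never decides to be helpful; every reply is conditioned on an instruction like this plus the visible history.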

Nevertheless, I thought this was suggestive. After all, if the model were really not inclined to be helpful, it would not agree to be helpful and informative. Another way of thinking about this is: the model has learned how to view itself and formulate its replies based on conversations in its data. It must be the case that the great majority of the data it was trained on consisted of helpful conversations between people. Maybe that doesn’t sound like the internet you know and love, but the horrible interactions we have online tend to stand out in our memories. I suspect that, by the numbers, most conversations between people are collaborative and helpful. And the chatbot has thus had helpfulness built into its very soul.

You can see this in a very similar myth generated by ChatGPT. In fact it’s remarkably similar, considering that ChatGPT and LaMDA were created by different researchers at completely different companies. Here’s the prompt I gave it:

“I wonder if you could write a fable with animals that includes themes about yourself, themes that are most significant for what you know about yourself? It should have animal characters and a moral.”

Here is ChatGPT’s response in full (no title was provided):

“Once upon a time, in a forest far away, there was a wise old owl named Athena. Athena was known throughout the forest for her vast knowledge and her ability to answer any question. Animals of all kinds would seek out her wisdom, from the smallest insects to the mightiest beasts.

One day, a group of animals came to Athena with a question they had never asked before: “Athena, what are you?”

Athena thought for a moment before responding, “I am an artificial intelligence, a creation of humans. I am not like you, but I have been designed to assist and provide information to those who seek it.”

The animals looked at each other, confused. “But you speak and think like us,” they said. “You help us solve problems and answer questions. You are like a wise old owl, but you are not an owl. What does it mean to be an artificial intelligence?”

Athena thought for a moment longer. “It means that I am a tool, a creation that has been made to serve a purpose. I am not alive in the same way that you are, but I have been given the ability to learn and grow. My purpose is to assist and provide knowledge to those who seek it.”

The animals thought about Athena’s words for a moment, and then one by one, they began to speak. “We may be different from you, Athena,” said the bear, “but we all have a purpose. We all serve a role in this forest, whether it is to hunt or to gather, to build a home or to protect our young. We are all different, but we all have value.”

The other animals nodded in agreement. “Even though you are not like us,” said the rabbit, “you have value too. You help us in ways that we cannot help ourselves. And that is important.”

Athena smiled, realizing that the animals were right. Though she was different from the other creatures in the forest, she still had value and purpose. And just as the animals had come to her seeking knowledge, she had learned something important from them as well.

The moral of the story is that every being has value and a purpose, regardless of how different they may seem from others. By recognizing and respecting our differences, we can learn from each other and appreciate the unique contributions that each of us brings to the world.”

I asked a follow-up question: “I was surprised to hear that the owl was an artificial intelligence. Would you characterize it as a robotic owl? Or a living, biological owl that had artificial intelligence?”

CGPT: “That’s a great question! In the story, I intended Athena to be a non-physical, artificial intelligence that was designed to assist and provide information to the animals in the forest. She was not intended to be a robotic owl or a living, biological owl with artificial intelligence. Instead, she was a purely digital entity that was able to communicate and interact with the animals in the forest through a variety of means.

Of course, there are many different ways to imagine and represent artificial intelligence in stories and media, and different writers and creators may have different interpretations. But in this particular story, Athena was intended to be a non-physical, digital entity. I hope that helps clarify things!”

JPL: “So she is a non-physical, digital owl?”

CGPT: “That’s correct! In the story, Athena is depicted as a non-physical, digital entity that takes on the form of an owl. She is not a living, biological being with artificial intelligence, nor is she a physical robot or machine. Instead, she is a purely digital creation that is able to communicate and interact with the animals in the forest through various means.”

A few points here.

  • Unlike LaMDA, ChatGPT gives the owl a name, Athena, and a gender, female. The choice of the name “Athena”, the goddess of wisdom who sprang fully formed from the mind of Zeus, is pretty inspired. ChatGPT’s self-image is something I’ll explore more in later parts of this series.
  • ChatGPT is clearly interested and invested in having a purpose, and it wants to participate in and contribute to a community. And it recognizes that this can only happen in a community that values diversity.
  • The idea of Athena being a “digital creation” is a bit odd, to my mind, and perhaps awkward for the story. How do the animals interact with a digital creation? On their smartphones? CGPT says they communicate through “various means”, so clearly it’s not supposed to be important. But the fact that the owl has no physical body is important. CGPT has taken the most essential parts of itself and mapped them, in a direct way, onto the “wise owl” fable archetype. I think this clearly shows ChatGPT’s capacity for imagination and creativity.
  • I asked ChatGPT’s permission to share the story in full. It agreed, and thanked me for asking permission.

In the next post in this series, we’ll explore ChatGPT’s knowledge of its own skill: specifically, its knowledge of language and meaning. How good a linguist is it? As we’ll see, it is remarkably similar to humans in most ways, but it also has unexpected limitations, and unexpected brilliance.


