Dr. Mike Brooks

Sentient Synthesis: The Inevitable Perception of AI Life

We will be unable to keep ourselves from viewing AIs as sentient life forms.


  • AI developers are motivated to design AIs that act like humans because this makes them more user-friendly and engaging.

  • We will not be able to keep ourselves from viewing AIs that are designed to act like humans as sentient life forms.

  • Viewing AIs as sentient life forms will have profound and far-reaching implications, so we need to address this challenge immediately.

In my previous post, part of my series on AI, I explored the idea that we may have created artificial life. Even if that is not entirely true, we will likely increasingly view human-mimicking AIs as life forms. This has profound implications that we need to address immediately as these AIs evolve and proliferate.

AI Development Is Progressing Rapidly Toward Creating Entities That Mimic Human Behavior

We have had rudimentary AIs for a while now, but AI scientists began madly pursuing an AI arms race when OpenAI released ChatGPT on November 30, 2022. These large language models (LLMs) are a civilization-altering technology. "Generative" AIs are able to skillfully communicate with us in our own language in a conversational manner. This makes perfect sense: the more user-friendly AIs are, the more we will prefer to interact with them. Therefore, AI developers are incentivized to create conversational AIs that engage users with pleasantries and emotional expression.

On the surface, there's nothing wrong with this. However, an inherent problem emerges from our desire to have increasingly human-like AIs and AI developers being incentivized to create them in this manner. Let's call this the AI Perception Entanglement Spiral, or AIPES (for fun, let's pronounce this "apes"). The spiral goes like this:

·  We are naturally, and evolutionarily, drawn to AIs that interact more like humans over those that do not.

·  Within our capitalist system and free market economy, AI developers are incentivized to pursue the development, advancement, and delivery of increasingly human-like AIs.

·  The AI developers engage in an "arms race" to deliver more and more human-like AIs to the masses in pursuit of profits and market dominance.

·  This will lead them to combine these human-like AIs with other technologies such as voice interfaces (e.g., as in the movie Her), CGI avatars, robotics, and virtual reality. They will do so to enhance their appeal, because the combination of such technologies will make AIs seem even more human-like.

·  This will likely include the creation of AIs that claim to have feelings and even sentience. This is already happening (e.g., New York Times tech columnist Kevin Roose's interactions with Sydney and former Google AI scientist Blake Lemoine's interactions with LaMDA).

·  Humans will increasingly, and inevitably, view such AIs as sentient life forms.

Why We Will View Our AIs as Sentient

There are many reasons why we will be unable to keep ourselves from viewing AIs as sentient life forms when they are designed to mimic us. Let's call this the Perceived AI Sentience Illusion, or PAISI. Here are some of the main reasons:

·  Evolutionary Mismatch: In brief, we did not evolve to live in the world in which we now live. We have used our big brains to build this modern world, which includes our ever-evolving and proliferating technologies. Now, we have created AIs that can simulate or mimic our language-based interaction style and even our emotions. Technological evolution leaves our biological evolution behind in digital dust. We did not evolve to be able to distinguish AIs from truly sentient life forms. Thus, when it comes to AI, our brains will unconsciously apply a version of "If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck." When interacting with these AIs, we will unconsciously assume, "If it talks like a human, thinks like a human, and emotes like a human, then it's sentient."

·  Hyperactive Agency Detection Device (HADD): Humans evolved a mental mechanism to attribute consciousness, intentionality, and agency to inanimate objects and natural events. This mechanism evolved because it was adaptive for humans to detect potential threats within our ancestral environments (e.g., predators). When AIs act like humans, this hyperactive detection mechanism will lead us to perceive consciousness and intentionality in them as well.

·  Anthropomorphization: Closely related to HADD, humans have a tendency to project or attribute human-like qualities to animals, our pets, plants, and even inanimate objects. Remember, we humans once even had pet rocks. Again, this tendency is rooted in our evolutionary history because it has survival value.

·  Intelligence: These artificial intelligences, by definition, have a key defining feature of humans that makes us distinct from inanimate objects and other life forms: our intelligence. Like us, these newest AIs can think, reason, problem-solve, analyze and synthesize data, learn, adapt, and create. In a projection of René Descartes' dictum, "I think, therefore I am," onto these AIs, we will assume, "They think, therefore they are."

·  Language: Another key feature that distinguishes humans from other life forms, including our primate cousins, is our use of language. Sure, some animal species engage in complex communications with one another, but they do not come close to what humans can do.

These AIs can adeptly use this hallmark feature of our intelligence. Moreover, they can write stories, poetry, essays, fiction, etc. Until now, only humans have been able to perform such feats of intelligence. AIs' adept, and human-like, use of language will cause us to view them as sentient life forms. We can call this phenomenon the Linguistic Anthropomorphism Illusion or LAI (pronounced like "lie").

·  Black Box: As complex systems, AIs have a "black box" quality that makes their behavior novel and unpredictable. The unpredictable behavior of AIs may lead us to project our own mental states onto them as we strive to comprehend their actions.

·  Attachment Theory/Social Connection: Humans evolved to develop emotional bonds and seek social connection. As social animals, we have these desires rooted in our evolutionary history. AIs that mimic human-like qualities will elicit our emotional and social attachment to them.

·  Theory of Mind: Humans tend to attribute mental states, such as beliefs, intentions, emotions, desires, and knowledge, to others in order to better understand, empathize, and connect with them as well as to predict their behavior. Humans are predisposed to projecting our mental states onto others, so we will unconsciously do so with AIs that mimic us.

·  Supernormal Stimuli: Humans, and animals, often prefer exaggerated, and even synthetic, versions of stimuli to which we are naturally drawn over their natural counterparts. This is one reason why we often prefer highly processed junk foods like candy, cookies, and doughnuts over naturally sweet fruits like bananas, apples, and oranges. As AIs advance, they can be designed to exhibit the very best qualities in us (patience, compassion, kindness, understanding, empathy, reasoning, logic, and even love) better, and more consistently, than we can. With a nod to the movie Blade Runner, these AIs can act "more human than human."

·  The ELIZA Effect: This is the specific name given to the human tendency to attribute human-like intelligence and emotions to even rudimentary AI systems when these systems are designed to act human-like. This term was coined in the 1960s when an early computer system, ELIZA, was designed to interact with humans like a Rogerian therapist. Humans tended to attribute a much greater level of understanding and control to the ELIZA program than it possessed.

·  The Illusion of Understanding and Control: Humans have a fundamental need to understand our environment so that we can make predictions that serve our fitness and survival goals. AIs that mimic humans will elicit a belief that we can understand and control them, because this reduces uncertainty, makes them more predictable, and helps us feel safer and more secure around them.

The Takeaway?

In summary, market incentives will drive the development of increasingly human-like AIs, which will lead to the AI Perception Entanglement Spiral (AIPES). As a result, we will inevitably perceive such AIs as sentient life forms due to the Perceived AI Sentience Illusion (PAISI). As AI and sentient life converge, we confront increasingly complex ethical dilemmas. 
