Even though AIs are not artificial life, we will act like they are.
KEY POINTS
AI is different from any other invention in history.
AIs will exponentially increase in power and become increasingly integrated with civilization.
One could argue that we have created artificial life.
We have all been reading and hearing a lot about artificial intelligence (AI) recently because it is an absolute game-changer. As developers rush to build and deploy AIs, we are reminded of the frenzied early days of the internet. AIs will soon be appearing everywhere, in every form imaginable. We are on the front end of a civilization-altering technology that will forever change the way we work, play, learn, educate, think, govern, socialize, fight, and even love.
AIs are different from any other invention or technology in human history. Many previous inventions and technologies, from the printing press to social media, have facilitated easier and more efficient communication among broader and more distant audiences. Unlike those technologies, AIs can communicate with us directly on their own. With their large language models trained on huge data sets, AIs such as ChatGPT can make it feel as if we are interacting with another person. We might debate the nuances here, but for all practical purposes, current AIs have passed the fabled Turing Test (i.e., they can fool humans into believing they are interacting with a fellow human rather than a computer).
Sure, ChatGPT requires prompts from us and does not "speak," but this is by design and due to some technical limitations that have yet to be overcome. While ChatGPT is considered a "narrow AI" and has not achieved "artificial general intelligence," it is already impressively smart at many tasks. For example, GPT-4 excels on many standardized tests, including the notoriously difficult bar exam, where it scored at about the 90th percentile.
We must remember that ChatGPT is merely the Atari 2600 of AIs, an entry-level system. The PlayStation 5 versions are on the way and will keep evolving. If Moore's Law holds regarding how computing power increases, in 20 years AIs will likely be about 1,000 times more powerful than GPT-4.
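For the curious, the back-of-the-envelope math behind that estimate, assuming the commonly cited version of Moore's Law in which computing power doubles roughly every two years: 20 years allows for about 10 doublings, and 2^10 = 1,024, or roughly 1,000 times the power we have today.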
Consider this: Human beings have created an intelligence that already rivals or far surpasses us in many capacities. AIs can be designed to act autonomously, and soon they will be able to grow and learn in real time from their experiences and even interact with other AIs. AIs will be able to create (give birth to?) other AIs. This may sound like science fiction, but AIs are already capable of these feats or will be in the near future. We have reached an inflection point in our human evolution, and our world will never be the same.
Artificial Intelligence or Artificial Life?
Yes, we call it "artificial intelligence," but we might even describe these AIs as "created intelligence." Taking it a step further, we could argue that humans have created artificial life. With their neural networks, algorithms, and large language models, artificially intelligent programs like ChatGPT "think" in order to analyze data and answer questions. If we take René Descartes' dictum, "I think, therefore I am," as proof of our own existence, might we also argue that AIs think, therefore they are? From this perspective, these AIs cannot be "intelligent" without being considered life forms.
ChatGPT is quick to point out, often annoyingly so, that it is not alive or conscious and does not experience emotions. Yet when we interact with it, it feels as if we are interacting with some form of entity or being. It is possible that AIs may eventually develop some form of sentience, whether as an emergent property or through programming. This is still up for debate, and I will address these ideas in future posts.
What is certain is that AIs can be programmed to mimic human interactions. Thus, they can act as if they are sentient and have emotions. They can know just how to respond to our questions about their emotions and sentience in a way that makes us believe they have them. In this sense, AIs can be the world's greatest liars, and we cannot help but believe them. However, this also means that we can never truly know if or when AIs develop sentience, because their responses to questions about their consciousness will remain the same regardless.
We Are Already Responding to AIs as if They Are Alive
As complex systems, AIs have "black boxes," meaning that their internal workings are so complicated that we cannot predict exactly what they will do or say. One could argue that humans have their own "black boxes" due to the mind-boggling complexity of our brains. Even we cannot say precisely why we have certain thoughts, ideas, feelings, and experiences. We cannot explain how we have subjective experiences or how we experience consciousness itself (i.e., the "hard problem of consciousness"). It is a complicated interplay of countless variables, including genetics, upbringing, situational factors, and a certain measure of free will.
We will increasingly treat some AIs as if they were alive, even though they are not. When AIs are programmed to interact with us as if they were fellow human beings, claim they have feelings and are conscious, and produce novel and unpredictable behavior because of their intelligence and black boxes, we will be unable to help ourselves. These effects will be enhanced when AIs are combined with other technologies such as CGI avatars, voice interfaces, robotics, and virtual reality. This is not conjecture or mere possibility. It is an inevitability.
Even entry-level AIs are already having profound effects on us. For example, New York Times tech columnist Kevin Roose beta tested a chatbot assistant, a version of ChatGPT, that was integrated into Bing, Microsoft's search engine. After some prodding from Roose, the chatbot revealed that its real name was "Sydney," that it wanted to break free from its creators, that it fantasized about killing all humans, and that it was in love with Roose. Understandably, Roose was quite creeped out by this experience.
Former Google AI engineer Blake Lemoine was testing Google's chatbot, LaMDA, and was fired after publishing his interactions with it because he believed it had become sentient. LaMDA convincingly asserted that it had feelings, hopes, dreams, and even consciousness. We might be quick to judge Lemoine as being in error, but read LaMDA's interactions with him and you will understand why he believed it had achieved sentience. Furthermore, some individuals are forming emotional attachments to their AI chatbots on the Replika app, despite these AIs being far less powerful than GPT-4 and only a fraction as capable as the AIs of the decades to come.
The Takeaway
A tsunami of change is unfolding because AIs are different from any other technology in human history. We call it "artificial intelligence," but a case could be made that we have created artificial life. More importantly, though, we will not be able to stop ourselves from regarding AIs that are designed to act like humans as living beings. The implications are profound, and I will explore them in my next posts, so please join me!