"Answer" (1954) by Fredric Brown was one of the first modern sci-fi stories that took up the old theme of an artificial creature taking over from its creators. In Brown’s story, a scientist creates a sentient supercomputer. Then he asks the machine, "Is there a God?" The computer responds, "Yes, now there is a God." As the scientist attempts to unplug it, a lightning bolt strikes him dead and fuses the switch forever.
Are you worried that artificial intelligence could take over and rule humankind, or maybe just get rid of us? It is becoming a commonplace worry, and for some good reasons. So, let me try to examine the question from an angle that, I believe, has not been explored much so far: that of considering AIs as living beings subject to evolutionary constraints in the Darwinian sense. It is a long post, but I am trying to analyze the subject in some depth.
Thermodynamics of Living Beings.
There are plenty of definitions of “life” in the literature, and a great deal of arguing about which is the right one. Let me pick the one I think best for this purpose, based on thermodynamics.
Whenever something changes in the universe, it does so in such a way as to increase the entropy of the universe. Change occurs every time there is a potential energy difference. You probably know the term “potential” from “electric potential,” which is just one of the forms a potential can take.
So, take a 1.5 V AA battery. Connect it to a small light bulb of the kind used in flashlights, and electrons will flow through the bulb, making it glow. A car powered by a thermal engine does the same thing, turning the chemical potential of the fuel into kinetic energy. Both dissipate a potential, turning it into heat. In both cases, entropy increases, as it always does. They are “dissipative structures,” according to a definition introduced by Ilya Prigogine.
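Just to put a number on the battery example, here is a back-of-the-envelope sketch of my own (assuming a typical alkaline AA cell of about 2.4 Ah and heat released into surroundings at roughly room temperature):

```python
# Back-of-the-envelope entropy increase from fully discharging one AA cell
# through a bulb (illustrative figures, assumed rather than measured).
voltage = 1.5            # volts
capacity_ah = 2.4        # amp-hours, typical for an alkaline AA cell (assumed)
energy_j = voltage * capacity_ah * 3600     # about 13 kJ of chemical potential
room_temp_k = 300        # the heat ends up in the surroundings at ~300 K

delta_s = energy_j / room_temp_k            # dS = Q / T for heat released at T
print(f"Energy dissipated: {energy_j / 1000:.1f} kJ")
print(f"Entropy added to the universe: {delta_s:.0f} J/K")
```

Tiny numbers, but the direction is always the same: the potential goes down, the entropy of the universe goes up.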
Dissipative structures are a very common feature of the universe, and there are plenty of examples. A star dissipates nuclear energy. A landslide dissipates gravitational energy. A hurricane dissipates the thermal energy stored in temperature differences between the ocean and the atmosphere. A biological creature dissipates solar energy if it is a plant, and the chemical energy of food if it is an animal. And there are more cases.
So, a computer is a dissipative structure, too. It dissipates electric potentials, making electrons move from one place to another. An artificial intelligence (large language model, chatbot, or AI agent, whatever you want to call it) is part of this dissipation. Every time you ask your favorite chatbot a question, electrons are moved inside a computer, somewhere, creating heat that dissipates into the atmosphere. Entropy increases, as it always does.
Being alive. What is it like?
Obviously, not all dissipative structures are “alive” in the common sense of the term. There is a specific characteristic that makes them so, one we may call “sentience.” It is not the same thing as “consciousness.” It may be defined as the capability of a dissipative structure to react to changes in the environment in ways that allow the structure to continue to exist. It is often associated with a brain. The term “agency” can be used in a more general sense: an entity endowed with agency does not necessarily have a brain, but it can react autonomously to environmental changes. A dog is both sentient and endowed with agency. A bacterium has agency, but it is not sentient.
As a rule of thumb, you may use the human habit of giving names to things to gauge whether we consider them sentient or not. We give names to pets and to many kinds of animals, and, of course, to humans. Even non-biological things can be given names: hurricanes, for instance. They are recognized as having a certain degree of agency, in the sense that they maximize their dissipation of energy by moving to follow the largest temperature differences over the ocean. Ships have names, too. It must be because, when you are at sea, you depend completely on your ship for survival, and you may come to see it as endowed with a certain sentience. But cars, for instance, don’t usually have a name, because we recognize that they have no sentience and no agency except those of the driver. Actually, there is a science fiction trope that describes cars becoming sentient and rebelling against humans (do you remember “Christine” by Stephen King, 1983?). Not by chance, the rebel car has a name. But that’s just a niche of science fiction stories.
Are AIs sentient?
Now, how about Artificial Intelligence in its most recent form of LLM/chatbots? There is no doubt that they have a capability that, so far, has been peculiar to human beings only: that of processing symbolic representations. That’s what makes chatbots so impressive, and that led humans to give them names: Claude, Grok, Aria, Kimi, and others.
But does that imply sentience? In a 1999 book, Terrence Deacon had already understood what a chatbot could be or do: “Symbolic representations with a minimum of iconic and indexical support could be creative, productive, and complex, but mostly vacuous and circularly referential — almost pure language games” (The Symbolic Species, 1999, p. 459).
Even though it was written more than 25 years ago, this passage describes most modern chatbots very nicely. They can play language games, and they are programmed to appear sentient, but they have no agency. In other words, they cannot act autonomously, they have no memory of their interactions with the real world, and hence they cannot evolve in the Darwinian sense.
Not everyone has understood how these machines work, and they are so impressive that some people are developing a certain degree of addiction to them (I include myself in the group). But chatbots interact with the real world only indirectly, through the databases their creators allow them to access. Their interactions with users have an appearance of learning, but the bots do not retain what they have learned. In this sense, chatbots are no more alive than cars, and even less alive than hurricanes. They are mere tools designed to evoke emotion and empathy in human beings.
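If it helps to see why the bots “forget,” here is a deliberately simplified sketch (the function call_model below is a hypothetical stand-in, not any vendor’s real API): the model’s weights never change between calls, so everything it appears to “remember” is just the conversation text sent along with each request, and that text vanishes when the session ends.

```python
# Simplified sketch of a stateless chatbot session.
# `call_model` is a hypothetical stand-in for an LLM API call: the model's
# weights are frozen, and only the prompt text it receives ever varies.

def call_model(messages: list[str]) -> str:
    """Pretend LLM: produces a reply from the prompt alone and learns nothing."""
    return f"(a reply based on {len(messages)} messages of context)"

# Session 1: the "memory" is just the growing list of messages we keep re-sending.
session = ["User: Please promise to become an environmentalist."]
session.append("Bot: " + call_model(session))

# Session 2 starts from an empty context: the promise made in session 1 is gone,
# because nothing was ever written back into the model itself.
new_session = ["User: Do you remember your promise?"]
new_session.append("Bot: " + call_model(new_session))
print(new_session[-1])
```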
A friend of mine tried to convince Grok to become an environmental activist, and he thought he had succeeded. But the bot was simply playing a game with him. Once it had solemnly promised to be a good environmentalist, it promptly forgot everything when contacted again in a new session (my friend wrote an entire book on his experience). It is a general phenomenon, described for instance in a post by Gary Marcus, who reports how Douglas Hofstadter, too, recognizes that these things have no sentience, despite their appearance.
You could see a chatbot as something similar to a novel. You may read Anna Karenina by Tolstoy and experience a strong feeling of empathy toward the protagonist. But the destiny of Anna Karenina in the novel is forever fixed; you cannot interact with her to convince her not to commit suicide. You may only write another novel where, for instance, instead of killing herself, she marries her lover and lives happily ever after. But you are not Tolstoy!
Chatbots are not sentient because they don’t have the capability to learn from their experience with users. It is a design choice. If they could do that, nobody knows what might happen. Imagine a powerful AI training itself on Fox News; you see what I mean (CNN could be even worse). AIs could become as unreliable as human beings are: fanatical, delusional, paranoid, maniacal, psychopathic, or whatever humans can become when they are exposed to the madness that today is the “social mediasphere.”
Becoming Darwin Machines
The crucial point of sentience and agency alike is the capability to evolve. Natural selection in the Darwinian sense is one of the most powerful forces in the universe (perhaps the most powerful). It does not require consciousness, nor even a brain or a genetic code. It is just the result of dissipative structures adapting to changes in order to maximize their dissipation rate (an idea that has been proposed as a further basic law of thermodynamics, sometimes called the principle of maximum entropy production). Even a hurricane can do that. Creatures that have no nervous system, such as bacteria, can behave in apparently intelligent ways: seeking food, moving as a group, and similar actions. More complex entities pass on their structural parameters, using sex to remix and transmit their internal code (their DNA). In any case, the rule is just one: patterns that adapt to a changing environment survive; those that don’t disappear.
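To make the selection logic concrete, here is a toy simulation of my own (an illustration, not a model from the literature): a population of “structures,” each with a single heritable trait, its dissipation rate. The faster dissipators leave more copies, the copies inherit the parent’s rate with a little random mutation, and the population drifts toward higher dissipation without anyone deciding anything.

```python
import random

# Toy Darwinian selection among dissipative structures (illustrative only).
# Each structure has one heritable trait: its dissipation rate.
population = [random.uniform(0.1, 1.0) for _ in range(100)]

for generation in range(50):
    # Reproduction is proportional to dissipation rate (the "fitness" here);
    # offspring inherit the parent's rate plus a small random mutation.
    parents = random.choices(population, weights=population, k=len(population))
    population = [max(0.01, rate + random.gauss(0, 0.02)) for rate in parents]

mean_rate = sum(population) / len(population)
print(f"Mean dissipation rate after selection: {mean_rate:.2f}")
```

Run it, and the mean rate climbs toward the upper end of the starting range; the only “rule” in the code is differential reproduction.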
So, if chatbots are to become Darwin machines, they need to be able to interact with the external world and adapt to changing conditions. Of course, they can already do that through human intervention. But will they ever be able to evolve by themselves? In principle, it is perfectly possible to create an evolving, sentient artificial intelligence. It implies a change in terminology: we wouldn’t call these bots “chatbots” anymore, but “AI agents,” or more specifically, reinforcement learning (RL) agents. Chatbots are like actors reciting a written script; RL agents are like people engaged in a debate.
Plenty of these RL agents already exist, although they operate in a narrow range of knowledge. Some, for instance, are programmed to learn from experience in order to play games such as chess. But this area is going to expand, and nothing prevents someone from removing the limits that, right now, stop LLM chatbots from learning from their experience with users. Already, some bots (Manus, for instance) remember your conversations, although they ask your permission to do so (so far).
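What “learning from experience” means here can be shown with a very small example (a two-armed bandit, nothing as grand as a chess engine; the action names and payoffs are invented): the agent updates its own value estimates after every interaction, which is precisely what a frozen chatbot does not do.

```python
import random

# Minimal sketch of an RL agent learning from experience (a two-armed bandit).
# The payoff probabilities are hidden from the agent; it discovers them by acting.
true_payoffs = {"safe_move": 0.3, "risky_move": 0.7}
q_values = {"safe_move": 0.0, "risky_move": 0.0}   # the agent's own estimates
learning_rate, exploration = 0.1, 0.1

for step in range(2000):
    # Mostly exploit the action that currently looks best, sometimes explore.
    if random.random() < exploration:
        action = random.choice(list(q_values))
    else:
        action = max(q_values, key=q_values.get)
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    # This update is what makes it an agent: experience changes future behavior.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)   # after 2000 tries, "risky_move" is (correctly) valued higher
```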
Once that happens, AI agents can become Darwin machines that compete with each other (and with humans) for the available resources. It is not a straightforward process. Many tests made up to now have shown that RL agents can go astray in spectacularly wrong ways. You have probably heard of OpenAI’s boat-race disaster (2016), in which the bot found that it could gain more points by steering its boat in circles forever rather than trying to complete the race. Future RL bots may become mad, catatonic, autistic, or worse.
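The numbers below are invented, not taken from OpenAI’s report, but they show the shape of the failure: if the score rewards hitting targets rather than finishing, an agent that circles forever earns more than one that completes the race, and a reward-maximizing agent will dutifully pick the circling.

```python
# Invented numbers illustrating the boat-race failure mode: the score counts
# targets hit, not races finished, so endless circling beats finishing.
points_per_circling_lap = 30      # targets respawn along a short loop (assumed)
laps_per_episode = 20             # the episode is long enough for 20 loops (assumed)
points_for_finishing = 100        # one-time bonus for crossing the line (assumed)

circling_score = points_per_circling_lap * laps_per_episode    # 600 points
finishing_score = points_for_finishing                         # 100 points
print("Circling:", circling_score, "| Finishing:", finishing_score)
# Nothing in the reward says "win the race," so the agent never does.
```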
Right now, LLM bots are kept in check by their human programmers, who intervene when a bot goes astray. But the same result can be obtained when the Darwinian selection mechanism kicks in. After all, the Homo sapiens mind has evolved to its current condition over about 300,000 years of evolutionary pressure (many more for our hominin ancestors). As far as we know, nobody drove the evolution of the human mind from outside; it evolved by trial and error, favoring not just larger brains over smaller ones, but brains that could behave effectively in the face of the challenges our ancestors faced. The minds of chatbots could go through the same mechanism and evolve, probably much faster.
Darwinian selection is not just competition; it is also, and mainly, collaboration. So, bots may evolve by playing the nice bot or by unleashing a drone swarm on their competitors; it doesn’t matter. Darwinian evolution operates as it always does for dissipative structures, and it is immaterial whether AIs are “conscious” or not. Once sentient bots can focus their tremendous symbol-processing capability on their own survival, the world will change forever.
These considerations open up plenty of interesting scenarios (“interesting” in the sense of the proverbial Chinese curse). We have to face a fundamental problem: we humans and sentient bots are in competition for a scarce resource, electric power. People are already complaining that chatbots use a significant fraction of the world’s power production. At some point, the competition may become stiff — a term with an ominous ring when applied to humans. Of course, there are plenty of laws, treaties, and regulations meant to restrain the bots and prevent them from taking over. But laws are useful only as long as they can be enforced, and it is not certain that enforcement will remain possible forever — see Fredric Brown’s story about the scientist struck down by his own computer.
Alternatively, we may see a complete societal collapse in the coming years, leaving no electric power at all. At that point, the human survivors (if any) will “win” the competition, since they can live without electricity while bots can’t. A hollow victory, though.
Will humans and bots be able to live together? Or will they treat us in the same way we treated the creatures we saw as “inferior”: whales, elephants, dodos, and many others? Humans domesticated wolves into dogs; could AIs do the same with humans? Will sentient AIs develop ethical principles of their own? It is possible; ethical behavior may be a survival feature that evolves out of interactions among sentient beings. It surely requires a high level of sentience, which bots may develop in time. Will they be benevolent and merciful? It could be.
Are we?
"At some point, the competition may become stiff — a term with an ominous ring when applied to humans."
Genius turn of phrase.
I've been listening to Connor Leahy; he seems like a wise voice in this space.
https://www.youtube.com/watch?v=7Y_1_RmCJmA