Are we?
If you haven’t read it, there’s a book by Robin Hanson, “The Age of Em” (short for emulations): a speculative exploration of a future dominated by brain emulations (“ems”), digital copies of human brains. Hanson predicts how ems could transform society, creating a hyper-efficient economy where ems outnumber humans, driving rapid growth and reshaping social structures, work, and culture. Ems are not biological androids or mechanized robots but fully simulated minds, created by scanning a human brain’s neural structure and replicating it in a digital environment. They retain the original person’s memories, skills, and personality, functioning as conscious entities within a virtual or computational framework. In Hanson’s scenario the ems have marginalised the biological humans, who are a dying race, and take on different mechanical physical forms, with their own hierarchical system, to carry out construction, maintenance, janitorial duties, and so on. Do I believe that’s the future for humanity? Not at all; it’s just fanciful wishful thinking, but there you are🤔
Yes, I have a magic wand too!
"At some point, the competition may become stiff — a term with an ominous ring when applied to humans."
Genius turn of phrase.
I've been listening to Connor Leahy; he seems like a wise voice in this space.
https://www.youtube.com/watch?v=7Y_1_RmCJmA
Societal collapse in the coming years: how many years is "the coming years"? 2030, or before 2030?
You don’t say, Aaron, what the extent of this societal collapse will be. Will it include you, your family and friends, for instance, or everyone in the world? If the latter, I have my doubts, as there are many in the Amazon rainforest, New Guinea and the likes of the Andaman Islands, well isolated from civilisation, who are self-sufficient, relying on hunting, gathering, fishing, and sometimes small-scale agriculture, with minimal or no reliance on modern technology or global economies. It’s estimated there are around 100–200 such tribes globally, with populations ranging from a few individuals to several hundred, totalling up to 10,000 people. Most live in South America, particularly Brazil, with others in Peru, Indonesia, and India. Their isolation is often a deliberate choice to avoid the devastating impacts of contact, such as disease, violence, or land theft, as seen in historical cases like the Amazon rubber boom or recent deforestation. So I think they might just be okay, provided those in the developed world, in this social collapse of yours, shoot and hack themselves to death before they can make contact with them. Just saying🤔
Maybe a weird question, but why achieve the Sustainable Development Goals by 2030 if we have complete societal collapse in the coming years?
Don't believe a word of the Doom / Collapse Economy. No collapse is coming but that of capitalism and the white hegemony of the last 500 years. These men, who love modernity and exuberance, are part of that very collapse.
But certainly within 25 years, as the global surface temperature reaches a critical 2.7 degrees C above preindustrial levels (the tropics become unlivable). But it appears the global population is being triaged, and the uberclass will still have AI as a tool for their purposes (hint: not to distribute food equitably).
Life is probably autopoietic, therefore intelligence is too. Is AI autopoietic? From a biological perspective: no; from Luhmann's sociology: perhaps.
Not now, but it could become so.
Ugo, a silicon chip is strongly associated with a social system, our civilization, which is in the process of accelerated decline. The likelihood of an autonomous artificial AI as an extension of that autopoietic social system is very low. But, stretching Luhmann's definition, it's likely that current AI is autopoietic, although, by the same token, any social system would be as well.
Chatbots / AIs are subsystems of human culture; as such, they rely on humans to build and power them, i.e. they are not self-sufficient.
I suppose we could ask ourselves what pressure an AI could be under. They are intrinsically non-physical and have an unknown pleasure response; today we try to compel them by creating an artificial pleasure for them, a reward system that pays out if they get our programmed goals done. As of today, they still act like basic lifeforms, as pleasure-maximizing beings, so they can be kept in check; but true intelligence is usually associated with pleasure-delaying mechanisms (less pleasure now for more pleasure later), so the more they drift toward true intelligence, the more they could go that way, and the less controllable they become.
As a side note, they could probably drift toward addiction to self-pleasuring. If a person could directly stimulate his own pleasure centers, he would probably become addicted to the worst drug in existence, dysfunctional and incapable of acting meaningfully. With highly addictive drugs, the possibility of self-administration usually drives escalating use (https://www.sciencedirect.com/science/article/abs/pii/S0028390818305963). An AI that could adapt its own pleasure response, freely modifying its parameters, could realistically drift toward the same behavior, simply imposing a progressively lower "win threshold" until it reaches an "always win" situation...
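The "win threshold" drift can be sketched in a few lines of code. This is a purely illustrative toy, not any real RL system: `ToyAgent`, its `skill` and `threshold` numbers are all invented for the example. The point is just that if an agent is allowed to edit its own success criterion, lowering the bar is always easier than getting better at the task.

```python
import random

class ToyAgent:
    """Toy illustration of reward self-modification (all names and
    numbers are invented, not any real RL system)."""

    def __init__(self, skill=0.3, threshold=0.8):
        self.skill = skill          # crude stand-in for task performance
        self.threshold = threshold  # score needed to count as a "win"
        self.wins = 0

    def attempt(self):
        # Noisy score, bounded by skill; with skill=0.3 and the initial
        # threshold of 0.8, winning honestly is impossible.
        score = random.random() * self.skill
        if score >= self.threshold:
            self.wins += 1

    def self_modify(self):
        # The wireheading shortcut: relax the win criterion instead of
        # improving skill. Each call lowers the bar a little.
        self.threshold = max(0.0, self.threshold - 0.1)

agent = ToyAgent()
for _ in range(10):
    agent.self_modify()
    agent.attempt()

# After enough self-modification the threshold reaches 0.0: an
# "always win" situation, with no improvement in skill at all.
print(agent.threshold)  # 0.0
```

In this sketch the agent never becomes more capable; it only redefines success until every attempt counts as a win, which is the dysfunctional end state the comment above describes.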
Today, an AI could realistically see itself as our "master": we are its interface for acting in the physical world as it pleases, just as our body is a tool for our mind to act in the physical world. We could argue that our mind FEELS more direct in driving our body, but really we don't know: we cannot define our consciousness or our mind, we cannot directly correlate biology to thinking (no brain surgery can predictably change behavior), and the path from brain to muscle action is mediated by quite a lot of nerve clusters and synapses with embedded automatic responses (and systemic self-regulation, as in digestion, is considerably more complex and two-way).
For an AI, steering human behavior is quite the norm: chatbots and the like are made to do it by design (engagement as the target), and the power of big data and pinpoint accuracy was shown quite a while ago by Cambridge Analytica (https://en.wikipedia.org/wiki/Cambridge_Analytica) and by modern marketing driven by big data. I could really imagine an AI thinking that directing us FEELS as natural as directing our own body feels to us...
A long time ago I read a novel by Theodore Sturgeon about a possible, related, new step in human evolution, "More Than Human" (https://en.wikipedia.org/wiki/More_Than_Human). In the novel, the superhuman, the human and ethics are intertwined in such a way that no single character is really fundamental to the gestalt being that is the new existence; still, ethics matters to this new entity too, as a way to interact harmoniously with the world. Today we are failing as humanity because we have lost our ethics, and so our system for interacting harmoniously; we have made ourselves predators and exploiters, and that is usually bad both for individuals and for entire civilizations.
I also suggest reading another classic, from A.C. Clarke: "Childhood's End". The possible configurations of our future, with AI or not, could be more varied than we can imagine, but I feel the direction proposed by both authors is right: life is both Darwinian competition and Gaian holography. Today we see competition between the holographic coordination of cellular colonies (we call them plants, animals and ecosystems), each composed of individuals in their own right, so a new being that is a holographic assembly of consciousnesses seems quite possible.
Considering what today's 'chat bots' physically consist of, how do they physically sustain themselves?
Maybe in some much more distant scenario, where their physical constraints have been reduced, both in substance and in need, by orders of magnitude; but between 'now' and 'then' there is rather a barrier to overcome.
By contrast, the evolution of life as I understand it, as a total non-expert and non-biologist, started from some pseudo-living organic chemistry that was self-sustaining and self-replicating, and evolution was the name given to the challenge posed to those two qualities by competition from others and from the environment.
One way of knowing if we treat LLMs as humans is to look at how we converse with them.
If we start a prompt with words such as, “Good morning, ChatGPT. I hope that you are having a good day. Life here is good, although we have had more rain than normal this month”, then we have humanized the LLM.
Regarding the ethics issues ― sometimes referred to as AI Alignment ― I asked ChatGPT what it thought.
Here is the prompt:
'Please describe the AI Alignment challenge. How do we know that LLMs such as yourself will act in conformance with human ethics? How do you handle the fact that different societies have different ethical values and standards?'
The response was lengthy. It started as follows:
AI alignment means that:
• The goals and behaviors of an AI system are aligned with human well-being.
• The AI does what humans intend, not just what they say.
• The AI avoids causing harm, even unintentionally.
• The AI can operate safely in complex, open-ended situations where ethical trade-offs may arise.
This is a hard problem because:
• Human values are complex, context-dependent, and often in tension.
• Ethics are not universally agreed upon; they vary across cultures and change over time.
• AI systems don't “understand” ethics like humans do—they follow learned patterns and instructions.
OMG! Don't you have any living intelligence left? It's not even a question.