The implicit fallacy here is the assumption that this has not been planned in detail ahead of time, the way the German armored assaults that swept across Europe from 1939 into 1942 were.
The sense of overwhelm felt by civilians of various ranks is not at the core of the process.
The media is not at the core of the process.
There is vast institutional corruption, protected from the outside, but not really protected from the inside, and Team-Trump is inside the system with forensic-audit AI and young male Geek-Commando teams camping in government offices, eating pizza.
There is, in any battle, intuition in the battle-space: the operational initiative that was the Prussian inheritance of German battlefield commanders of tank and combined-arms forces. All of the preparation and coordination is the strategy. The tactics can be adjusted in the moment, like Trump calling the insubordinate Zelensky a "dictator" and demanding "where's my $300 billion?"
Here's the insanity, John. Trump says "where's my $300B!!" Then other forces in the media say "Trump is a liar! It's only $200B, with $100B unaccounted for! We have to attack Trump!" Lost in semantics, arguing with an old man.
This is the PR-battlespace, and it does look like the Trumpian forces are prevailing at the moment. They need to appear to keep advancing the narrative, keep the "establishment" forces off balance and reactive, unable to seize the operational initiative.
My assessment is that Team-Trump needs to establish a working relationship with Russia in this first 100 days, and that involves re-establishing the infrastructure of diplomacy, lifting economic sanctions against Russian interests (some can be left for show), and starting to reimburse Russia for injuries such as Nord Stream, while withdrawing US forces from former Warsaw-Pact countries. https://www.zerohedge.com/geopolitical/moscow-demanded-us-nato-withdraw-forces-eastern-europe-riyadh-talks
If the two biggest resource countries, Russia and the US, can ally, it ruins the traditional British strategy, and Germany can rejoin as well.
"Computer models can enhance the modeling power of the human mind by managing the low-level task of keeping track of the evolution of the variables, while the modeler observes the behavior of the system as a function of changing external inputs or internal correlations."
Is there a public access website utilizing a computer model to analyze and present these changing external inputs and internal correlations in a dashboard format so that we, the public, can better understand the predicted results of public policy actions taken by our elected (and unelected) officials? If not, this would be an extremely useful tool for those of us with humility to apply rational thinking and discourse to counter the irrational "common sense" too often at play in our politics.
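Even a toy version makes the idea concrete. As a rough, purely illustrative sketch (the variable names and coefficients below are invented, not drawn from any real policy model), the bookkeeping the quote describes, the computer tracking the evolution of the variables while the human varies an external input, might look like this:

```python
# Toy stock-and-flow model: the computer keeps track of variable evolution,
# the human varies the external input (a "policy lever") and watches.
# All names and coefficients are invented for illustration only.

def simulate(policy_lever, years=30):
    resources, output, pollution = 100.0, 10.0, 1.0   # initial stocks
    history = []
    for year in range(years):
        extraction = 0.05 * output * (1 - policy_lever)   # lever throttles extraction
        resources -= extraction
        output += 0.3 * extraction - 0.02 * pollution     # growth vs. pollution drag
        pollution += 0.1 * output - 0.05 * pollution      # emission minus absorption
        history.append((year, resources, output, pollution))
    return history

# "Dashboard": compare two policy settings side by side.
for lever in (0.0, 0.5):
    final = simulate(lever)[-1]
    print(f"lever={lever}: year={final[0]}, resources={final[1]:.1f}, "
          f"output={final[2]:.1f}, pollution={final[3]:.1f}")
```

A real public dashboard would wrap something like this in sliders and charts, but the modeling core is just this loop; the hard and contestable part is choosing the variables and couplings.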
Team-Trump is waging a successful PR war right now, but what will really be sorted out by the forensic-auditing AI will take much longer. There is real pressure to get a lot done in the golden "first 100 days", which Team-Trump completely understands.
It is a palace-coup in the imperial court, and seems mostly legal and legitimate so far.
The big risk is what people with that power do after the coup is completed, is it not?
One might hope that coordination with Russia, China, India, Germany and so on can commence, as seems to be intended now, but a lot of bad turns can also be taken.
Mike Brock has a post, "The Final Despotism: When technology rewrites human freedom", https://www.notesfromthecircus.com/p/the-final-despotism. The two opening paragraphs:
"Throughout history, tyranny has been limited by human frailty. Dictators tire. Their attention wanders. They must sleep. Even the most sophisticated systems of oppression ultimately relied on human beings to maintain control—people who could be corrupted, convinced, or who might simply look the other way at crucial moments. Resistance always remained possible because no human system of control could achieve total perfection.
Artificial intelligence fundamentally changes this equation. For the first time in human history, we face the prospect of systems of control that never sleep, never tire, and never look the other way. AI-enabled surveillance can watch everyone, everywhere, all the time. AI systems can process vast amounts of data to identify patterns of resistance before they even fully form. AI can optimize systems of social control with inhuman precision, crafting personalized manipulation for every citizen."
Bad turns are inevitable, as is human error due to inattention, ignorance, hubris, etc., and that is why it is critical to use our current computing power to model proposed policy actions, whether they be EOs or legislation, and to make informed decisions with high confidence, rather than acting on intuition or leaving these decisions up to AI.
Global coordination is needed in any effort to improve* our world, and any coordination requires at minimum a moderate level of trust. Based on Team-Trump's current and historical actions, not to mention Trump's own personal history, as well as the current and historical actions, rhetoric, PR, etc. coming out of Russia, China, India, Germany, and so on, one can judge the current level of trust to be very low.
* "Improve" depends on one's perspective; however, in order to really make progress, baseline improvements must be agreed upon, as well as the indicators to measure and how to measure them.
Thanks for the thoughtful reply, Nicholas.
"Tyrant" is an interesting position and label, applied by Roman financial oligarchs in the Republic, to persons such as Julius Caesar, who annulled certain debts which they held against other citizens, to strengthen the republic (transitioning to empire) at the expense of their own wealth and power.
Caesar was a complex and tragic historical figure, and perhaps a better human than those who murdered him, yet "Tyrant" has a negative connotation now.
The power of a tyrant is a danger, particularly after the tyrant has defeated adversaries and has to move forward. The self-certainty and force of will required to do the first part of the job are not usually softened by self-examination as full executive power is later assumed.
Going back to the technical matter of AI monitoring of all personal information flows and the tailoring of micro-managing algorithms to each participant human, which you discuss... It goes back at least to DARPA's "Total Information Awareness" project of the early 2000s, and is presumably a well-matured operation by now. Google ads know which aisle a shopper is on in a store, and tailor ads accordingly, for instance.
This had all been worked out well before 2008, and the information space was officially designated a "Battlespace" by the Pentagon in very early 2016, which is when my bcc email news compilations to 200 friends, something I had done for years, started getting heavily censored. I only learned of the military policy change later; in the meantime I had to start a blog to do what I had done before.
The blocking of email sends ended 3 days after the November 2016 election. The same cycle has recurred with each US election cycle, with censorship ending right after the election each time.
As early as Y2K, my brother-in-law sent two emails to himself at different email addresses, identical except for a two-sentence mention of a bomb in one of them. That one took six minutes longer to arrive, whereas the one without the bomb sentences arrived in a couple of seconds.
The question that arises in my mind, when discussing democratic decisions about how AI management should be instituted in our world, is: "what do you mean by 'we'?"
This is a particularly dumb column, TBH. Packer doesn't understand common sense, or really the connection between "grounding validity" -- measuring things for yourself in the decision space -- and how grounded intuition is created. He doesn't understand it because, as an elite in a hierarchical structure, the last thing that social system wants is for people to make their own agentic decisions. It has little or nothing to do with Forrester's work, other than seeking to create mumbo jumbo for more low-responsibility behavior, which then maps back into the dynamics favored by the social system.
Bold of him to presume there will be midterm elections at all.
I did not mean to imply that we are not all limited, one way or another, by mental models assembled from individual experiences. If I were a psychologist, which I am not, I would ask what factors lead to zealotry, reliance on ideology or ancient religious texts, etc., and thus to the resulting absence of essential uncertainty, self-awareness, empathy, etc. in mental models.
Stephen Miller recently gave an interview on CNN in which he crowed gleefully (to the point that the interviewer asked him to calm down) about the miraculous funding and job cuts that were slashing rampant waste and fraud in the federal government. I so wanted to ask him whether there was an algorithm that might identify his own job and federal salary as wasteful and fraudulent, and whether he would regard such an assessment as capricious and arbitrary. Corporate and government managers do not reward uncertainty. But if you are unable to go through that process of questioning your own assumptions, even in the privacy of your own mental models, due to one or more of the factors above, the results are likely to be what we see in Washington or in cases of scientific fraud.
I wish there were an off-the-shelf computer model that would allow people to explore the potential results of new inputs and internal correlations. It would seem a better use of AI than summarizing reviews on Amazon. By the way, Dennis Meadows has acknowledged that LTG underplayed the role of energy in the model. But computer models can be improved. Mental models can be as well, although unfiltered social media have the effect of amplifying the negative effects of ideological or religious certainty. The smorgasbord of "facts" and claims means that we can shovel whatever appeals onto our plate while other options are ignored or discarded. Will "free speech" be our ultimate undoing?
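Coming back to the off-the-shelf model wish: open reimplementations of World3 do exist (pyworld3 in Python, for example), though nothing I know of is packaged as a casual public tool. The "internal correlations" half is easy to illustrate. The toy sketch below uses invented numbers, with no relation to the actual World3 equations; it just sweeps one coupling strength, how hard energy scarcity feeds back on industrial output, to show how the whole trajectory shifts.

```python
# Toy illustration of exploring an "internal correlation": how strongly
# energy scarcity feeds back on industrial output. Numbers are invented.

def run(energy_feedback, years=50):
    energy, industry = 100.0, 10.0
    for _ in range(years):
        use = 0.04 * industry              # energy drawn down by industry
        energy = max(energy - use, 0.0)
        scarcity = 1.0 - energy / 100.0    # 0 = plentiful, 1 = exhausted
        industry *= 1.0 + 0.03 - energy_feedback * scarcity
    return energy, industry

# Sweep the correlation strength and watch the end state change.
for k in (0.00, 0.05, 0.10, 0.20):
    e, i = run(k)
    print(f"feedback k={k:.2f}: energy left={e:.1f}, industry={i:.1f}")
```

Swapping the print loop for a plot and a slider is the easy "public website" part; the hard part is agreeing on which couplings deserve to be in the model at all.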
George Box once said, ‘All models are wrong, but some are useful’.
The ‘Limits to Growth’ (LtG) models seem to fit that description. They provide a picture of the future that is accurate enough to be useful.
But even the most sophisticated and useful models are based on intuition to some extent. For example, the criteria used for the LtG model in the 1970s did not include climate change because that topic was barely on the mental horizon then. Now ‘common sense’ would tell us to include climate as a stand-alone issue, rather than folding it into ‘environment’.
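To make that concrete: in a system-dynamics model, treating climate as a stand-alone issue simply means giving it its own stock with its own slow, cumulative dynamics, instead of folding it into one generic environment index. A toy contrast, with invented coefficients that have nothing to do with the actual LtG equations:

```python
# Toy contrast: climate folded into a generic "environment" index vs.
# tracked as its own slow stock. Coefficients are invented for illustration.

def lumped(years=100):
    industry, environment = 10.0, 100.0
    for _ in range(years):
        environment -= 0.02 * industry          # one catch-all damage term
        industry *= 1.02
    return environment

def separate(years=100):
    industry, environment, climate = 10.0, 100.0, 0.0
    for _ in range(years):
        environment -= 0.01 * industry          # fast, partly recoverable damage
        environment += 0.2                      # some regeneration
        climate += 0.005 * industry             # cumulative, effectively irreversible
        industry *= 1.02 - 0.002 * climate      # warming drags on output
    return environment, climate

print("lumped environment index:", round(lumped(), 1))
env, clim = separate()
print("separate stocks -> environment:", round(env, 1), " climate:", round(clim, 1))
```

The lumped version can only rise or fall as a single number; the separate climate stock keeps accumulating even while the fast environmental term partly recovers, which is the structural difference the stand-alone treatment captures.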
Jay Forrester envisioned his “World2” model well before The Limits To Growth (the cover photo). Alas, what an epic fail it was, however elaborate and scientific. Every state must fail, because of the very way it views the entire planet, or even itself.
Why a fail? Forrester proposed scenarios similar to those of "The Limits to Growth," even though his model was simpler.
Hi. The “fail” part can be understood in various ways.
Forrester, like many of his age and school of thought, viewed the entire planet as something to be managed, as a system, which it is not by any means, no matter how smart one is in framing it.
In World2 and later in LTG, various arguments were presented in favour of population control. The global human population has roughly doubled since the early 1970s; however much they argued for control, it did not pan out that way.
The folks involved (Forrester, Paul Ehrlich, Donella Meadows, etc.), as well-meaning as they were, and as educated and privileged, completely discounted the impact of capitalism and the imperialism of the West upon the “Rest”. And the fact that the entire global economy is based on fossil capital, not just fossil fuels, is entirely missing from Jay Forrester’s World2.
Lastly, a range of white folks impose this very form of pathetic planetary management upon the entire rest of the planet. What models, eh?!