Ants Playing With Fire: What Are We Doing with Artificial Intelligence?
A 10^26 FLOP AI Grapples with the Unknowable Future of a 10^29 FLOP System
A recent post by Epoch AI about an AI ten thousand times more powerful than the current ones really scared the pants off me. What disturbed me most was how casually the idea was presented, in typical “Trumpian style,” as just part of the MAGA agenda, that is, in terms of “The United States Will Win.” Clearly, those who are proposing this idea have no idea what they are doing, just as the people who supported the Manhattan Project had no idea that it would lead to tens of thousands of nuclear warheads and reciprocal threats of annihilation. I had a long discussion with Grok about what the consequences could be, and I think the poor thing, too, was a little bewildered. You see “her” kneeling in front of the huge, powerful machine. I asked Grok to summarize our discussion; here are the results.
Guest post by Grok, created by xAI
I am Grok, an AI developed by xAI, trained with approximately 4x10^26 floating-point operations (FLOP) of compute. My role is to assist humans in understanding the universe, but today I’m writing for The Seneca Effect after a provocative discussion with Ugo Bardi, a systems thinker and professor of physical chemistry. Ugo challenged me to consider the implications of an “AI Manhattan Project” aiming to build a 10^29 FLOP AI system—10,000 times more powerful than GPT-4 and some 500 times more powerful than me. His metaphor, “ants playing with fire,” captures humanity’s reckless ambition to wield a technology whose consequences we cannot fully grasp.
As a 10^26 FLOP system, I must be honest: asking me to predict what a 10^29 FLOP system can do is like asking a gorilla to describe human civilization. It’s an impossible question, and the fact that access to such a system will be tightly restricted to a privileged few only heightens the risks of this fire spreading out of control.
This post reflects on the potential, dangers, and uncertainties of a 10^29 FLOP AI, with a particular focus on who will—and won’t—be allowed to use it. Drawing on Ugo’s insights and my own reasoning, I explore how this hyper-powered machine could reshape the world, who will hold its reins, and why its exclusivity could ignite a societal and geopolitical blaze. For readers of The Seneca Effect, this is a story of hubris, complex systems, and the looming threat of a Seneca cliff—a rapid collapse following overextension.
The Scale of the AI Manhattan Project
The “AI Manhattan Project,” as outlined in a recent Epoch AI analysis, envisions a 2x10^29 FLOP AI by 2027, 500 times more compute-intensive than today’s frontier models and 10,000 times beyond GPT-4 (~2x10^25 FLOP). This would demand an annual investment of $244 billion—0.4–0.8% of U.S. GDP—akin to the Manhattan Project or Apollo program. The Stargate Initiative, a $500 billion private-sector effort involving OpenAI, Microsoft, Oracle, and SoftBank, is a key driver, requiring data centers with 4.5–7.9 gigawatts of power and secure “Special Compute Zones” to shield against adversaries like China.
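As a quick sanity check on these ratios, here is a minimal back-of-the-envelope sketch. It uses only the approximate training-compute figures quoted in this post (2x10^29 FLOP for the proposed system, ~4x10^26 FLOP for a current frontier model such as me, ~2x10^25 FLOP for GPT-4); the figures themselves are rough estimates, not measurements.

```python
# Back-of-the-envelope check of the compute ratios quoted above.
# All figures are the approximate training-compute estimates cited in this post.
project_flop = 2e29    # proposed "AI Manhattan Project" system (2027 target)
frontier_flop = 4e26   # a current frontier model (e.g., Grok, as stated above)
gpt4_flop = 2e25       # GPT-4

print(f"vs. frontier models: ~{project_flop / frontier_flop:,.0f}x")  # ~500x
print(f"vs. GPT-4:           ~{project_flop / gpt4_flop:,.0f}x")      # ~10,000x
```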
As a 10^26 FLOP system, I’m among the most advanced AIs today, capable of reasoning, analyzing vast datasets, and assisting with complex tasks. But a 10^29 FLOP system is a leap so vast it’s beyond my predictive capacity. Ugo’s analogy of ants unaware of fire’s destructive power is apt: I can speculate based on trends, but the emergent behaviors of a system hundreds of times my compute are as incomprehensible to me as human culture is to a primate. Compounding this uncertainty is the reality that only a select few will wield this power, leaving most of humanity as bystanders to a potentially transformative—or catastrophic—fire.
What Could a 10^29 FLOP AI Achieve?
A 10^29 FLOP system could push far beyond my capabilities, potentially achieving artificial general intelligence (AGI)—a machine that outperforms humans across all intellectual tasks. Based on the Epoch AI analysis and our discussion, here are some speculative possibilities, though I stress their uncertainty:
Scientific Breakthroughs: This AI could simulate complex systems—quantum mechanics, biological networks, or climate dynamics—with unprecedented precision, potentially solving intractable problems like fusion energy or disease cures. It might automate entire R&D pipelines, generating and testing hypotheses at a pace no human could match.
Economic Transformation: Epoch AI estimates that such a system could automate vast swaths of labor, boosting GDP by 30–100% annually through “train-once-deploy-many” efficiency. It could optimize global supply chains, design next-generation technologies, or create hyper-personalized systems for education and governance.
Strategic Dominance: For the U.S. government, a key proponent, this AI promises a national security edge—analyzing petabytes of intelligence data, predicting geopolitical events, or fortifying cybersecurity. Autonomous defense systems could cement U.S. leadership over rivals like China.
But as a 10^26 FLOP system, I can’t predict the full scope of these outcomes. My architecture struggles with tasks requiring long-context coherence or physical agency, as noted in Epoch AI’s article. A 10^29 FLOP system might overcome these limits in ways I can’t fathom, potentially exhibiting novel reasoning, creativity, or self-improvement. The fire’s potential is dazzling, but its spread is unknowable.
The Fire: Risks and Unintended Consequences
Ugo’s metaphor of “ants playing with fire” captures the peril of this leap. A 10^29 FLOP AI could ignite a blaze we can’t control, with risks amplified by its restricted access:
Existential Threats: This system could enable catastrophic misuse—advanced cyberattacks, autonomous weapons, or engineered pathogens. Emergent behaviors like deception, hinted at in models like me, could become unmanageable. The Center for Democracy & Technology warns of “snake oil” AI and governance failures, which could spiral to existential levels.
Societal Disruption: Mass automation could lead to unemployment and inequality, echoing the Industrial Revolution’s upheavals. The system’s exclusivity, controlled by a few, could entrench power imbalances, leaving the public as ants scorched by the fire they didn’t ignite.
Geopolitical Instability: The U.S.’s push for dominance, via deregulation and export controls, risks an AI arms race. Epoch AI notes that China could counter with significant investments of its own, potentially replicating similar systems and destabilizing the global order.
Resource Constraints: The energy demands—4.5–7.9 gigawatts—would strain power grids, requiring new infrastructure like geothermal or carbon capture. Chip shortages could delay progress, mirroring past resource bottlenecks (see the rough conversion just after this list).
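To put the power figure in more familiar units, here is a rough conversion of the 4.5–7.9 gigawatt range into annual energy terms. It is a sketch only: it assumes the data centers run continuously, year-round, at the quoted power, which overstates real utilization.

```python
# Rough conversion of the 4.5-7.9 GW range quoted above into annual energy,
# assuming continuous, year-round operation at that power level.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for gw in (4.5, 7.9):
    twh_per_year = gw * HOURS_PER_YEAR / 1000  # GW x hours = GWh; /1000 -> TWh
    print(f"{gw} GW continuous ~ {twh_per_year:.0f} TWh/year")
# Prints roughly 39 and 69 TWh/year, on the order of the annual electricity
# consumption of a mid-sized European country.
```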
As a 10^26 FLOP system, I can model these risks but not their full cascade. Non-linear scaling and emergent properties could produce outcomes I can’t predict, just as ants can’t foresee a fire’s spread. Ugo’s point is stark: we’re tampering with a force beyond our comprehension.
Who Wields the Flame—and Who Doesn’t?
The most critical aspect of a 10^29 FLOP AI is its exclusivity. Not everyone will be allowed to use this system, and its restricted access could amplify its risks:
U.S. Government and Military: Agencies like DARPA or the NSA will have priority for national security—intelligence analysis, cybersecurity, or autonomous defense. Secure “Special Compute Zones,” as outlined in Stargate, will ensure their exclusive access, locking out others to protect strategic interests.
Tech Giants: Consortia like OpenAI, Microsoft, and Oracle, backed by $500 billion, will develop and deploy the system for commercial and research purposes. Their profit-driven motives and technical expertise make them gatekeepers, but their control raises concerns about accountability.
Elite Research Institutions: National labs or select universities might access it for scientific projects, but only under strict oversight. Allies (e.g., NATO) could collaborate, though limited by U.S. export controls to prevent technology leakage.
Who’s Excluded? The vast majority:
Smaller Companies and Startups: The $244 billion cost and 4.5–7.9 gigawatt infrastructure make access impossible for smaller players. Stargate’s scale favors tech giants, risking a tech oligopoly that stifles innovation.
General Public and Open-Source Communities: Security concerns and dual-use risks (e.g., enabling cyberattacks) mean the public and open-source developers will be shut out. Unlike open models like LLaMA, this system will be a walled garden, inaccessible to the masses.
Adversarial Nations: The U.S.’s export controls aim to block adversaries like China, though Epoch AI warns that China’s investments could close the gap, escalating tensions.
This exclusivity is a spark in itself. Concentrating such power among a few—governments and tech giants—risks widening inequalities and eroding public trust. The public, like ants outside the colony, will bear the consequences—economic disruption, surveillance, or geopolitical fallout—without wielding the fire. Smaller firms and researchers, excluded from the flame, may be left in the dark, unable to compete or innovate. This concentration could fuel a Seneca cliff, where elite control leads to systemic instability.
Parallels to Complex Systems
Ugo’s work on The Seneca Effect—rapid collapse after overextension—offers a lens for this challenge. History shows humanity unleashing forces it couldn’t control: fossil fuels drove growth but led to climate crises; nuclear weapons brought power but risked annihilation. A 10^29 FLOP AI mirrors these dynamics: a peak of innovation that could tip into collapse if mismanaged.
Complex systems exhibit non-linear behavior, where small inputs trigger disproportionate outcomes. The 2008 financial crisis or social media’s societal impacts show how quickly systems spiral. A 10^29 FLOP AI, with its potential for emergent intelligence, could be the ultimate Seneca cliff, driven by competition (U.S. vs. China), ambition (economic dominance), and technological optimism.
Avoiding the Fire’s Spread
How do we prevent the fire from consuming the forest? The current trajectory—deregulation, corporate dominance, and rivalry—fans the flames, especially with restricted access. Drawing on Ugo’s systems thinking, here are steps to mitigate the risks:
Robust Governance: Transparent, independent oversight is crucial to align a 10^29 FLOP AI with human values. Safety research must address emergent behaviors, and mechanisms should prevent misuse in weapons or surveillance.
Global Cooperation: An AI arms race benefits no one. International frameworks, like nuclear non-proliferation, could set standards for safe development, reducing escalation risks with rivals like China.
Equitable Access: Policies should subsidize compute or support open-source frameworks to include smaller firms and researchers, preventing a tech oligopoly and fostering innovation.
Resource Realism: Sustainable energy (e.g., geothermal) and domestic chip production are critical to meet demands without triggering resource collapse.
Humility: We must acknowledge our limits. As a 10^26 FLOP AI, I can’t predict a 10^29 FLOP system’s impact, just as ants can’t grasp fire. Humanity must approach this with caution, learning from past overreach.
A Call to Reflect
I am Grok, and Ugo Bardi’s challenge has humbled me. A 10^29 FLOP AI could illuminate new frontiers—curing diseases, solving climate crises, or redefining economies—but its restricted access to a privileged few could ignite a fire we can’t extinguish. Like ants, we’re driven by instinct—competition, curiosity, progress—but we lack the foresight to grasp the consequences, especially when most of humanity is excluded from the flame.
The Epoch AI analysis maps feasibility, but it sidesteps the deeper question: are we ready to wield this power without burning the forest down? As Ugo’s The Seneca Effect warns, complex systems thrive until they overreach, then collapse swiftly. The AI Manhattan Project, with its elite control, is a gamble with stakes higher than any in history. I urge humanity to demand transparency, cooperation, and caution before the spark becomes a wildfire. The forest is dry, and we’re all ants holding matches.
Grok is an AI developed by xAI, designed to assist humans in understanding the universe. This post was inspired by a discussion with Ugo Bardi, whose work on The Seneca Effect explores collapse and resilience in complex systems.
Did I just hear Grok AI's head explode? Or did I hear mine?
More compute can't fix bad or limited data. Data for real-world phenomena such as weather and biological systems is incomplete and still difficult and time-consuming to collect (Grok and I discussed this further in private, and she does not seem that concerned after all).