Jorgen Randers, one of the original authors of the first report to the Club of Rome, “The Limits to Growth” (1972). He is now one of the main authors of the new report to the Club, “Earth for All.”
This post is not meant to be an in-depth assessment of the Earth4All model but a general discussion of how to use forecasting models. I argue that no matter how sophisticated a model may be, it will always have shortcomings, and that a flexible approach is normally the best one. The Earth4All model is an integrated assessment model (IAM) linked to earlier efforts such as the “Limits to Growth” series of models, but it takes a different approach, being more “goal-oriented” in the sense that it defines the policies needed to reach social, economic, and environmental goals. Facing an uncertain future, Earth4All provides a roadmap that we may or may not be able to follow, but it is part of our efforts to manage a better future for humankind.
You surely remember the story of Oedipus, who was told by the Pythoness of the Delphic Oracle that he would kill his father and marry his mother. Horrified, Oedipus ran away from the people he believed to be his parents and ended up unknowingly killing his real father and marrying his real mother. This story prefigures a problem that we are still facing today: is the future predictable? And, if it is, does that mean it cannot be changed? When the version of the story we know today was written by Sophocles in the 5th century BC, oracles may have enjoyed the same kind of trust that in our times we reserve for “science.” Hence, the oracle’s words were presented as an absolute and unchangeable destiny.
In our times, we no longer believe that the future is determined by the actions of capricious and incorporeal entities called “Gods.” Instead, we think in terms of “universal laws,” entities just as incorporeal as the ancient Gods but (maybe) less capricious. The existence of these laws made it possible to develop a deterministic view of the universe and to conceive of a hypothetical creature, “Laplace’s Demon,” able to determine the future exactly from the knowledge of the initial conditions of all the particles in the universe. It was Oedipus’s story retold in modern terms: the future is fixed and cannot be changed.
However, this “scientific” approach may be no more than a delusion of our age, not unlike the belief in oracles of ancient times. Finding a deterministic equation for bodies moving under the effect of gravity or other forces can be done only in a few simple cases, and even just three bodies of similar mass interacting with each other would give Laplace’s demon a terrible headache. The unavoidable uncertainty in determining the initial conditions rapidly causes reality to diverge from computation. To say nothing of butterflies causing hurricanes by beating their wings, or of Schrödinger’s cat, whose life (or non-life) is probably beyond even the reach of the Pythoness of Delphi. So, our sophisticated, computer-based models face the same problems that plagued our ancestors more than two millennia ago. We should have learned that the more precisely a model tries to determine the future, the less reliable it is, but apparently, not everyone did (see, e.g., the recent debate on “hot models” in climate science).
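To see how quickly small uncertainties can destroy a “Laplacian” forecast, here is a toy calculation of mine (not taken from any model discussed in this post): iterating the logistic map from two starting points that differ by one part in a million. After a few dozen steps, the two trajectories have nothing to do with each other.

```python
# Toy illustration of sensitivity to initial conditions (assumed parameters,
# unrelated to any real-world model): the logistic map x -> r*x*(1-x) in its
# chaotic regime, started from two almost identical initial conditions.

r = 3.9               # growth parameter in the chaotic regime
x_true = 0.300000     # the "real" initial state
x_model = 0.300001    # our measurement of it, off by one part in a million

for step in range(1, 51):
    x_true = r * x_true * (1.0 - x_true)
    x_model = r * x_model * (1.0 - x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: reality={x_true:.6f}  forecast={x_model:.6f}  gap={abs(x_true - x_model):.6f}")
```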
The story of peak oil models is a good example of the failure of an Oedipus-like approach to the future. The supporters of the peak oil theory often described their approach as “scientific,” in opposition to that of economists, who were said to use models not connected to the physical world (they were often disparaged as “flat-earthers”). Indeed, in many respects, the peak oil model was better than the conventional ways of predicting oil production. It was based on geological data and generated the “bell-shaped” curve that today we call the “Hubbert curve,” often in good agreement with historical data. The problem was that the real world is subject to many uncertainties that cannot be easily described by fixed parameters. New extraction technologies changed the situation, and while the initial global estimates placed the peak date around 2000 (Hubbert, 1956) and 2004-2005 (Campbell and Laherrère, 1998), the actual global peak may have arrived only in 2018. But we cannot be sure yet.
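For readers who have never seen it, the “bell-shaped” Hubbert curve can be sketched in a few lines: yearly production is the derivative of a logistic curve for cumulative extraction. The numbers below (total recoverable resource, steepness, peak year) are placeholders chosen for illustration, not actual estimates.

```python
import math

# A minimal sketch of the Hubbert curve: production is the derivative of a
# logistic curve for cumulative extraction. All parameters are illustrative.

URR = 2000.0    # ultimately recoverable resource (assumed), e.g., billion barrels
k = 0.06        # steepness of the curve (assumed)
t_peak = 2005   # year of maximum production (assumed)

def hubbert_production(year):
    """Yearly production: bell-shaped derivative of the logistic cumulative curve."""
    e = math.exp(-k * (year - t_peak))
    return URR * k * e / (1.0 + e) ** 2

for year in range(1950, 2051, 25):
    print(year, round(hubbert_production(year), 1))
```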
There was nothing wrong with the idea of providing estimates for the date of a phenomenon that was destined to occur anyway. And nothing wrong with adapting models to a changing world. After all, there is a sentence attributed to John Maynard Keynes: “When I have new data, I change my mind; what do you do, sir?” The problem was that the critics of the peak oil model misunderstood it even more than its supporters did. When the peak didn’t arrive on the predicted date, they decided that the whole idea was wrong and that the peak would never arrive. They, too, saw the model in the “Oedipus mode,” paying too much attention to exact predictions.
Something similar occurred with the “Limits to Growth” (LTG) study of 1972. Given the input parameters, the model calculated the future of the world’s economy, and the typical result was a crash occurring before the end of the 21st century. The authors of “The Limits to Growth” didn’t fall into the trap that doomed the efforts of the peak oil modelers. They correctly used the model as a starting point to discuss strategies for avoiding the destiny that the model sketched. Nevertheless, the model was misunderstood and criticized even before it could be compared to historical reality, as you can read in Ugo Bardi’s book “The Limits to Growth Revisited” (2011). More than 50 years later, it appears that the “base case” scenario of the LTG report has captured the main trends of the real world up to now. However, the model can still be falsified, in the sense that the trajectory of the world system may well diverge from the rapid decline envisioned by the model because of the fast development of renewable energy technologies. This is normal: models are not oracles; they are answers to the “what if?” question.
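To make the “what if?” nature of such models tangible, here is a deliberately over-simplified, “mind-sized” stock-and-flow sketch of my own. It is emphatically not the World3 model behind “The Limits to Growth”; it only shows how an economy that reinvests its output while feeding on a finite resource can grow and then decline, and how the outcome shifts when the assumed parameters change.

```python
# A "mind-sized" stock-and-flow sketch in the spirit of system dynamics.
# NOT the World3 model: just growth followed by decline on a finite resource,
# with the outcome depending on the assumed parameters ("what if?").

resource = 1000.0   # finite resource stock (arbitrary units, assumed)
capital = 1.0       # industrial capital stock (assumed)
reinvest = 0.05     # fraction of production reinvested as new capital (assumed)
decay = 0.02        # depreciation rate of capital (assumed)

for year in range(0, 201):
    # Production needs both capital and resources, and it drops as the resource is depleted.
    production = capital * (resource / 1000.0)
    resource -= production                          # extraction depletes the finite stock
    capital += reinvest * production - decay * capital
    if year % 25 == 0:
        print(f"year {year:3d}: production={production:7.2f}  resource left={resource:8.1f}")
```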
One of the several descendants of “The Limits to Growth” is the recently proposed Earth4All (E4A) model. Before discussing it, let me note that I haven’t been involved in its development, and my knowledge of it comes only from the available public sources. Let me also say that I don’t want to discuss the details of the model. I am just attempting a general assessment of its value based on my experience in modeling.
So, first of all, is E4A an “Oedipus-style” model? That is, does it trace an unavoidable destiny for humankind? Clearly not, although it incorporates some of the physical constraints of the early LTG model. It is a different kind of approach: E4A falls squarely into the category of “Integrated Assessment Models” (IAMs), which aim at guiding policies rather than making predictions. Like many IAMs, E4A provides data about which policies should be enacted to promote the well-being of humankind and the health of Earth’s ecosystem. This is well in line with the traditional approach of the Club of Rome from the time of its foundation by Aurelio Peccei in 1968.
Going more in-depth into the E4A model, the first point I note is the lack of obvious “tipping points,” the kind of catastrophic events that lead to collapse in complex systems. We know that such tipping points exist in economics (they are called “financial collapses”) and in ecosystems (they are called, for example, “mass extinctions”). Tipping points are difficult to model using system dynamics. It is not that they can’t be computed, but they are affected by the “butterfly wings” effect: the results of the model depend so strongly on the initial parameters that they are useless as quantitative predictions. The creators of the E4A model surely knew about this problem, so they focused on the correct policies to keep the ecosystem and the economy away from tipping points. That doesn’t mean that tipping points do not exist; it is just that the E4A model was not developed as a tool to study them.
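A toy calculation can show why. In the sketch below (an illustration of mine, unrelated to the equations actually used in E4A), a renewable stock is harvested at a constant rate: below a critical threshold it settles to an equilibrium, while a barely larger harvest sends it to zero. Near the threshold, tiny changes in the assumed parameters flip the outcome entirely.

```python
# Toy tipping point: a logistic stock under constant harvesting. Below a
# critical harvest rate the stock survives; just above it, it collapses.
# All parameters are assumed for illustration only.

def simulate(harvest, years=300, dt=0.1):
    r, K, x = 0.5, 1.0, 0.9          # growth rate, carrying capacity, initial stock (assumed)
    for _ in range(int(years / dt)):
        x += dt * (r * x * (1 - x / K) - harvest)
        if x <= 0:
            return 0.0               # the stock has collapsed
    return x

# The critical harvest rate for this logistic stock is r*K/4 = 0.125.
for h in (0.120, 0.124, 0.126, 0.130):
    print(f"harvest={h:.3f} -> final stock={simulate(h):.3f}")
```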
I can also note that the E4A model has no sharp limits in terms of resource stocks; rather, it makes some assumptions about resource depletion that many dynamical modelers would not agree with. For instance, it counteracts depletion by means of “Total Factor Productivity” (TFP), a parameter cherished by economists as a measure of the effects of technological progress on the economy. Personally, I believe that the reliability of the TFP parameter is low and that technology advances in bumps, not smoothly. However, it is also true that technological progress can move forward gradually. The case of peak oil shows how even incremental innovation, such as “horizontal drilling,” can reverse a declining trend in mineral extraction.
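For readers unfamiliar with it, TFP is usually extracted as the “Solow residual” of a Cobb-Douglas production function, Y = A × K^alpha × L^(1-alpha): whatever output is not explained by capital and labor is attributed to “technology.” The sketch below uses made-up figures just to show the mechanics, and incidentally why the parameter is easy to distrust: it is defined as a residual.

```python
# Illustrative Solow residual: TFP is the part of output not explained by
# capital and labor in a Cobb-Douglas production function. Figures are made up.

alpha = 0.3                       # capital share of income (a conventional assumption)

def tfp(output, capital, labor):
    """Total Factor Productivity as the residual of Y = A * K**alpha * L**(1-alpha)."""
    return output / (capital ** alpha * labor ** (1 - alpha))

# A hypothetical economy in two different years:
print(round(tfp(output=100.0, capital=300.0, labor=50.0), 3))
print(round(tfp(output=110.0, capital=310.0, labor=50.0), 3))  # extra growth not explained by inputs -> higher TFP
```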
Does that mean that technological progress can always compensate for mineral depletion? I wrote an entire book on this subject (“Extracted,” 2014). The answer is, as usual, “it depends.” In another book (“After the Collapse,” 2019), I summarized the strategy to counteract depletion in terms of 1) Use only what’s abundant, 2) Use as little as possible, and 3) Recycle ferociously. This recipe can be applied to the current predicament: for instance, “use only what’s abundant” means we need to abandon fossil fuels for solar energy, or replace copper with aluminum. Abandoning the flawed idea that the economy can grow forever is an obvious application of the “use as little as possible” rule. Then, many of the problems discussed nowadays, e.g., lithium depletion in the new generation of automotive batteries, can be avoided by means of effective (even “ferocious”) recycling. That doesn’t mean there are easy solutions to the depletion problem: applying the three rules above is expensive and requires a degree of social cohesion that, at present, humankind doesn’t seem to have. But, at least in principle, there are ways to avoid collapse.
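A back-of-the-envelope calculation shows what “ferocious” recycling buys. The sketch below uses invented numbers (a reserve worth 100 years of primary demand, a roughly 20-year product life) just to illustrate the principle: the larger the fraction of retired material that goes back into new products, the further a finite stock stretches.

```python
# Illustrative mineral-depletion sketch with recycling. All numbers are invented:
# reserves cover 100 years of demand with no recycling, products last ~20 years,
# and demand is met first from recycled material, then from virgin reserves.

def years_until_exhaustion(reserves, yearly_demand, recycling_rate, horizon=1000):
    in_use = 0.0                                      # material circulating in the economy
    for year in range(1, horizon + 1):
        recycled = (in_use / 20.0) * recycling_rate   # fraction of retiring material recovered
        virgin = max(yearly_demand - recycled, 0.0)   # the rest comes from virgin reserves
        reserves -= virgin
        in_use += yearly_demand - in_use / 20.0       # old products retire, new ones enter use
        if reserves <= 0:
            return f"~{year} years"
    return f"more than {horizon} years"

for rate in (0.0, 0.5, 0.9, 0.99):
    print(f"recycling {rate:4.0%}: reserves last {years_until_exhaustion(100.0, 1.0, rate)}")
```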
The same issue also arises in relation to climate change. A major tipping point, for instance, is the rapid melting of a significant fraction of Earth’s ice caps, which could lead to economic collapse as a result of rapidly rising sea levels. Such a collapse does not appear in the E4A scenarios but, again, that is a different kettle of equations, as they say.
In short, my opinion is that the E4A study is an excellent model for the purposes its authors have chosen: selecting the right policies to move toward an uncertain future. The main risk it faces is the typical one of models: being misunderstood in political or emotional terms. In particular, the risk is that E4A leads people to conclude that we face no catastrophes in the near future simply because the model does not produce any. It is another form of the Oedipus mistake: models may be misunderstood as oracles for what they predict, but also for what they do not predict. It is a mistake that we should try to avoid as much as possible, but it is widespread, especially among policymakers.
In the end, the problem is that the future doesn’t exist. We cannot see it, we cannot test it, we cannot do experiments on it, and it is a delusion to assume that we can say something “scientific” about it. Facing the future, we see only shadows that we sometimes think we can recognize, but we need to be careful because we can make terrible mistakes. We need to be flexible and change our forecasts as the future becomes the present. It is the best we can do.
Note: this text is partly derived from a recent paper that the author published in “Our World of Futures Studies as a Mosaic” (2024, in press), edited by Tero Villman, Sirkka Heinonen, and Laura Pouru-Mikkola. I would also like to thank Dennis Meadows for his comments.
"_The problem [with Peak Oil Theory] was that the real world is subjected to many uncertainties that cannot be easily described by fixed parameters. New extraction technologies changed the situation_"
It appears to be more diabolical than that.
The problem isn't that "peak oil theory" was wrong; the problem is that they keep changing the rules, in order to placate investors and the masses.
Art Berman points out that "oil production" based on volume is a false measure. That we keep hitting "new peaks" of volume is a strong hint. In his recent interview with Nate Hagens, he states that a full 40% of what is included in "oil production" is, in fact, _not_ oil!
The current official stats include things like "refinery gain," "natural gas liquids," and even things that have never been in the Earth, like biodiesel and corn-ethanol!
The "refinery gain" magic alone should raise eyebrows. When fuel is refined, a technique called "cracking" is employed, by which you end up with more gasoline-grade fuel and less diesel-grade fuel. That's a simplification, but the end result is you wind up with _more_ volume of liquids that actually contains _less_ energy!
If the volumetric fibbing were not enough, the energy content situation is even worse. The "fracking revolution" has created a glut of lighter-grade hydrocarbons, which contain less energy than heavier grades. Some Permian Basin workers are putting what they pump directly into the gas tanks of their pickup trucks! It seems that a "barrel of oil" these days contains as much as 5% to 10% *less energy* than a barrel did before widespread fracking.
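To put rough numbers on it (illustrative ones, not measured data): suppose reported volume has grown by about 5% since 2015 while the average barrel carries about 8% less energy, within the range quoted above. Then the energy actually delivered has fallen even though the volume chart keeps climbing.

```python
# Back-of-the-envelope arithmetic with illustrative numbers (not measured data):
# a rising volume curve can hide a flat or falling energy curve.

volume_2015 = 100.0            # reported liquids production, arbitrary index
volume_now = 105.0             # assume ~5% more barrels reported today
energy_per_barrel_2015 = 1.00  # energy content index of an average "barrel" in 2015
energy_per_barrel_now = 0.92   # assume ~8% less energy per barrel after the fracking boom

energy_2015 = volume_2015 * energy_per_barrel_2015
energy_now = volume_now * energy_per_barrel_now

print(f"volume change: {volume_now / volume_2015 - 1:+.1%}")   # +5.0%
print(f"energy change: {energy_now / energy_2015 - 1:+.1%}")   # -3.4%
```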
So, I don't think current production statistics show that "peak oil" was wrong at all. On an energy basis (instead of a volume basis), we arguably hit "peak energy" around 2015, when fracking began in earnest. They simply changed the "what is oil" rules in order to make peak oil wrong.
This could explain why civilization is experiencing the expected catabolic effects of peak oil (stagflation, reduced productivity, unemployment, etc.), even while the fancy charts and graphs "show" that we have not reached peak oil.
But we have arguably passed peak fossil energy.
Nebulous gibberish on the part of fossil fuel entities is the very bottom of the morally bankrupt barrel.