Out of Control: The New Biology of Machines, Society, and the Economy

22.9 Problems with global models

In the 1970s, after thousands of years of telling tales about the Earth's past and about everything in the universe, the inhabitants of planet Earth began to tell the first stories about possible futures. The high-speed communications of that era gave them, for the first time, a comprehensive real-time view of their home. The imagery from space was captivating: a cerulean orb wrapped in steaming clouds, hanging gracefully against a black vista. What was happening on the ground was less pretty. The reports sent back from every quadrant of the Earth said that the planet was disintegrating.

Tiny cameras in space brought back panoramas of Earth that were stunning in an old-fashioned way: exhilarating and terrifying. These cameras, together with the torrent of ground-based data pouring out of every country, formed a distributed mirror reflecting a picture of the entire Earth system. The whole biosphere was becoming transparent. The Earth system set out to predict its future, as all systems do, in the hope of knowing what might happen next (say, over the next twenty years). From the data collected by the planet's outer membrane we got a first impression: our planet is injured. But no static world map can verify (or refute) this picture. Nor can a globe chart the rise and fall of pollution and population over time, or decipher the interlocking effects of one factor on another. No film from space can answer the question: what happens if this continues? We needed a global forecasting device, a spreadsheet for global what-if analysis.

In a computer lab at the Massachusetts Institute of Technology, a modest engineer cobbled together the first global spreadsheet. Jay Forrester had been dabbling in feedback loops since 1939, when he worked on improving the servomechanisms of steering gears. Together with his MIT colleague Norbert Wiener, Forrester followed the logic of the servomechanism to the birth of the computer. While helping to invent the digital computer, he also applied the first computing machines to areas outside typical engineering problems: he built computer models to aid corporate management and manufacturing. The usefulness of these corporate models gave Forrester new ideas. With the help of a former mayor of Boston, he built a model that simulated an entire city. He intuited, quite rightly, that cascading feedback loops, impossible to trace with pen and paper but easily traced by computer, were the only way to approach the web of interactions among wealth, population, and resources. So why not simulate the whole world?

Sitting on a plane home from a 1970 conference in Switzerland on the "human condition," Forrester began sketching the first equations of what would become a model he called "world dynamics." It was a sketch, and a rough one at that. Forrester's rough model laid out the obvious loops and forces he intuited to govern large economies. As for data, he grabbed whatever was ready at hand for quick estimates. The Club of Rome, the group that had sponsored the meeting, came to MIT to evaluate the prototype Forrester had thrown together. They were encouraged by what they saw, so they raised money from the Volkswagen Foundation and hired Forrester's associate, Dennis Meadows, to take the model to the next stage and keep refining it. Through the rest of 1970, Forrester and Meadows worked together to improve the "world dynamics" model, designing more sophisticated loops and scouring the world for up-to-date data.

Dennis Meadows, his wife Dana, and two other coauthors published an enhanced version of the model, filled with real data, under the title Limits to Growth. As the first global spreadsheet, the simulation was a huge success. For the first time in history, the Earth's living systems, its resources, and human culture were abstracted into one simulation and allowed to roam into the future. As a global siren, the "Limits to Growth" simulation was also very successful. Its authors warned the world of its conclusion: almost every extension of humankind's current path leads to the collapse of civilization.

In the years after the results of the Limits to Growth model were published, they sparked thousands of editorials, policy debates, and newspaper articles around the world. One headline exclaimed: "Computers predict the future, chillingly." The model's key finding: "If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years." The modelers ran hundreds of simulations with hundreds of slightly different scenarios. But no matter how they weighed the trade-offs, almost every run predicted that population and living standards would either dwindle gradually or expand rapidly and then implode.

The model drew enormous attention and controversy, largely because of its remarkably clear and unwelcome policy implications. But it permanently elevated the discussion of resources and human activity to its necessary global scope. What the "Limits to Growth" model did not succeed in doing was spawning other, better predictive models, which is exactly what its authors had hoped it would do. Instead, world models were viewed with suspicion for the intervening twenty years, largely because of the "Limits to Growth" controversies. Ironically, today, twenty years later, the only world model the public ever sees is still "Limits to Growth." On the twentieth anniversary of the model's release, the authors reissued it with only minor changes.

The reissued "Limits to Growth" model runs on a software package called Stella. Stella takes the dynamic-systems approach Jay Forrester developed on mainframe computers and ports it to the visual interface of the Macintosh. The "Limits to Growth" model is an impressive web of "stocks" and "flows." Stocks (money, oil, food, capital, and so on) flow into certain nodes (representing general processes such as farming), where they trigger outflows of other stocks. For example, money, land, fertilizer, and labor flow into farms, and raw food flows out. Food, oil, and other stocks flow into factories that produce fertilizer, completing a feedback loop. A spaghetti maze of loops, subloops, and cross-loops makes up the world. The influence of each loop on the others is adjustable and is set by real-world ratios: how much food, say, one hectare of field yields per kilogram of fertilizer and per kilogram of water, and how much pollution and waste it produces. Indeed, in all complex systems the effect of a single adjustment cannot be estimated in advance; it must be played out through the whole system before it can be measured.
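The stock-and-flow mechanics described above can be sketched in a few lines of code. The stocks, coefficients, and feedback couplings below are invented for illustration; they are not the equations of the actual "Limits to Growth" model, only a minimal example of the style of modeling that Stella supports.

```python
# A minimal stock-and-flow sketch in the spirit of system dynamics.
# All stocks and coefficients here are invented for illustration; this
# is not the World3 / "Limits to Growth" model itself.

def simulate(years=100, dt=1.0):
    # Stocks: quantities that accumulate over time.
    population = 100.0
    food = 500.0
    pollution = 10.0
    history = []
    for _ in range(int(years / dt)):
        # Flows: rates that fill or drain stocks, coupled by feedback.
        food_production = 0.08 * population        # labor grows food...
        pollution_out = 0.02 * food_production     # ...but also pollutes
        births = 0.03 * population * min(1.0, food / population)
        deaths = 0.01 * population * (1.0 + pollution / 500.0)
        # Integrate each stock by its net flow (a simple Euler step).
        food += (food_production - 0.05 * population) * dt
        pollution += (pollution_out - 0.01 * pollution) * dt
        population += (births - deaths) * dt
        history.append((population, food, pollution))
    return history

run = simulate()
final_population, final_food, final_pollution = run[-1]
```

Even in a toy like this, the point of the passage holds: the effect of nudging one coefficient (say, the pollution rate) cannot be read off in advance; it has to propagate through every loop before it shows up in the trajectory.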

A living system must anticipate in order to survive, yet the complexity of its predictive machinery must not overwhelm the living system itself. We can examine the "Limits to Growth" model in detail as an example of the inherent difficulties of predictive mechanisms. There are four reasons to pick this particular model. First, its reissue invites it to be (re)considered as a predictive device on which human forecasting efforts might rely. Second, the model offers a convenient twenty-year span for evaluation: do the patterns it detected two decades ago still prevail? Third, one of the strengths of the "Limits to Growth" model is that it is checkable. It produces quantifiable results rather than vague descriptions; that is, it is testable. Fourth, modeling the future of human life on Earth is the most ambitious of goals. Whether it succeeds or fails, so bold an attempt can teach us how to use models to predict extremely complex adaptive systems. One really has to ask: can we have any confidence in simulating or predicting something as seemingly utterly unpredictable as the course of the world? Can feedback-driven models be reliable predictors of complex phenomena?

There is much to admire in the "Limits to Growth" model: it is not terribly complex; it is rich in feedback loops; it rehearses scenarios. Yet I also found the following weaknesses in it.

Limited scenarios. "Limits to Growth" is not so much an exploration of the diversity of possible futures that actually exists as a large number of small variations on a rather limited set of assumptions. Most of the "possible futures" it explores seem plausible only to its authors. When the model was built two decades ago, the authors felt the depletion of finite resources was a reasonable assumption, and they ignored scenarios not built on that assumption. Yet resources (such as rare metals, oil, and fertilizer) have not grown scarcer. Any true predictive model must be able to generate "unthinkable" scenarios. It matters that a system have enough slack in its space of possibilities to wander into places we do not expect. The art lies in the balance: give a model too many degrees of freedom and it becomes unmanageable; rein it in too tightly and it becomes unreliable.

Wrong assumptions. Even the best model can be led astray by false premises. In the case of "Limits to Growth," one key original assumption was that the world holds only a 250-year supply of nonrenewable resources, and that demand for those resources is growing rapidly. Twenty years on, we know both assumptions were wrong. Reserves of oil and minerals grew while their prices did not rise; meanwhile demand for some raw materials, such as copper, did not grow exponentially. When the model was reissued in 1992, the authors revised these assumptions. The underlying assumption now is that pollution must increase with development. If the past two decades are any guide, I can imagine that this assumption, too, will need revising over the next two decades. Such fundamental "adjustments" have to be made because the structure of the "Limits to Growth" model demands them.

No room for learning. One group of early critics joked that a "Limits to Growth" model run for the period 1800 to 1900 would have predicted "a 20-foot layer of horse manure in the streets." Since society's reliance on horses for transport was increasing at the time, that is the logical extrapolation. The half-joking critics argued that the "Limits to Growth" model makes no provision for technological learning, gains in efficiency, or the human capacity for self-restraint, reform, and invention.
A certain type of adaptation is wired into the model. When a crisis occurs (say, pollution rises), capital assets are diverted to deal with it (so the pollution-generation coefficient falls). But this kind of learning is neither decentralized nor open-ended. In truth, neither kind is easy to model. Much of the research reported elsewhere in this book is pioneering work on achieving distributed learning and open-ended growth in artificial and natural environments. Without such decentralized, open-ended learning, the real world quickly outruns the model. In reality, the populations of India, Africa, China, and South America did not change their behavior in line with the projections of the "Limits to Growth" model; they adapted through their own immediate learning loops. The global birth rate, for instance, has fallen faster than anyone predicted, catching the "Limits to Growth" model (like most other forecasts) off guard. Was this the effect of doomsday prophecies such as "Limits to Growth"? A more plausible mechanism: educated women bear fewer children and prosper, and people imitate the prosperous, knowing and caring nothing about the limits of global growth. Government incentives merely amplified these already existing local dynamics. Everywhere, people act and learn in their own immediate interest. The same holds for the model's other functions, such as crop productivity, arable land, and transportation. In the "Limits to Growth" model these fluctuating values are fixed assumptions, but in real life the assumptions themselves co-evolve and change over time. The point is that learning must be modeled as an internal loop; beyond that, the exact values assumed in this simulation, or in any simulation that attempts to predict a living system, must be highly adaptable.

Averaging the world. The "Limits to Growth" model treats the world's pollution, population, and resource holdings as uniform. This homogenization simplifies the world enough to make it safely modelable. But since locality and regionalism are among the Earth's most prominent and important properties, the result ultimately defeats the model's purpose. Moreover, some of the Earth's important phenomena arise from the interplay of differing local dynamics. The makers of "Limits to Growth" understood the power of subloops; indeed, that is the chief virtue of Forrester's system dynamics, which underpins the software. Yet the model entirely ignores one subloop of supreme importance to the world: geography. A global model without geography is not this world at all. Not only must learning be distributed throughout a simulation; all its functions must be distributed as well. The model's greatest failure is that it does not reflect the distributed, swarmlike nature of life on Earth.

No open-ended growth. I once asked Dana Meadows what happened when they ran the model starting from 1600, or even 1800. She said they had never run it that way. I was surprised, because backtesting is the standard way to put a predictive model through its paces.
The creators of the "Limits to Growth" model suspected that if such runs were made, the model would produce results at odds with the facts. That should set off alarms. The world has been in a long boom since 1600, and if a world model is to be believed, it ought to be able to simulate four centuries of growth, at least as history. After all, if we are to trust what "Limits to Growth" says about future growth, the simulation must at least in principle be able to generate long-term growth. As it stands, the best "Limits to Growth" can do is simulate a century of collapse. "Our model is incredibly robust," Meadows told me. "You have to do everything you can to keep it from crashing." That is a dangerous trait in a forecaster of society: every initial setting of the system converges quickly on an ending, while history shows human society to be a system of extraordinary, sustained expansion. Two years ago I spent an evening with Ken Karakotsios, a programmer building a miniature world of ecology and evolution. His microworld (which eventually became the game SimLife) gives players, in the role of gods, the tools to create 32 virtual animals and 32 virtual plants, which interact, compete, prey on one another, and evolve. "What's the longest you've ever run your world?" I asked him. "Oh," he moaned, "only a day. It's really hard to keep a complex world like this going. They just love to crash." The scenarios of "Limits to Growth" collapse because the "Limits to Growth" simulation is good at collapsing. Almost every initial condition in the model leads either to catastrophe or (rarely) to a steady state, but never to any new structure, because the model is inherently incapable of generating open-ended growth.
"Limits to Growth" is not capable of simulating the natural progression from the agrarian age to the industrial society.Meadows concedes, "It's also unlikely to take the world from the Industrial Revolution to any kind of subsequent stage beyond the Industrial Revolution." She explained: "What the model shows is that the Industrial Revolution Logic has hit an inevitable wall of limitation. The model has two things to do, it either starts to break down, or it's up to us as model builders to intervene and make changes to save it." Me: "Can't we create a better world model that has its own transformation ability and can automatically transform to another level?" Dana Meadows: "When I think about this kind of ending, the system is designed to make it happen, and we just lean back and watch from the sidelines, it feels a little bit fatalistic. But instead, we're modeling When we actually put ourselves in it. Human intelligence enters into this model to perceive the whole situation and then make changes in the human social structure. This reflects how the system that emerges in our brain sublimates The next stage of the picture -- using intelligence to step in and rebuild the system." This is the model to save the world, but it does not adequately model how an increasingly complex world works.Meadows is right, taking a path that uses intelligence to intervene in its culture and change its structure.However, this work is not just done by the model builders, nor does it just happen at the beginning of the culture.The rebuilding of this structure happens in six billion brains around the world, every day, every age.If there is indeed a decentralized evolutionary system, human culture is such a system.Any predictive model that fails to accommodate this daily distributed mini-evolution in billions of minds is doomed to collapse, and without such evolution, culture itself. 
Twenty years on, the "Limits to Growth" simulation needs more than a refresh; it needs a complete redo. The best use of it is as a challenge, a starting point for building better models. A true predictive model of global society would run a wide variety of scenarios in large numbers; start from more flexible, better-grounded assumptions; incorporate distributed learning; include local and regional differences; and, if possible, exhibit increasing complexity. I dwell on the "Limits to Growth" world model not because I want to attack its potent political content (after all, its first version inspired a generation of antigrowth activists). Rather, the model's inadequacies correspond precisely to several core arguments I want to make in this book. In attempting to "feedforward" a scenario of the system into the future, Forrester and Meadows made a valiant attempt to simulate an extremely complex adaptive system: the infrastructure of human life on Earth. What the Forrester/Meadows model highlights are not the limits of growth but the limits of certain kinds of simulation. Meadows's dream was also Forrester's dream, the dream of the war-gamers at U.S. Central Command, the dream of Farmer and his prediction company, and my own: to create a system that reflects the real evolving world well enough that the miniature model can run its results into the future faster than the real world runs. We want a prediction machine not out of a mission to foretell fate, but to obtain guidance. Conceptually, only machines of the kind imagined by Kauffman or von Neumann, machines that create things more complex than themselves, could do this.
To do this, a model must possess "requisite complexity." The term was coined in the 1950s by the cybernetician Ross Ashby, who built some of the first electronic adaptive models. Every model must distill the numberless details of reality into a compressed image, and one of the most important qualities it must condense is reality's complexity. Ashby concluded from his experiments with miniature models made of vacuum tubes that a model which too eagerly oversimplifies a complex phenomenon misses the mark: a simulation's complexity must be commensurate with the complexity of the field it simulates, or the model cannot keep up with the twisting course of the thing it models. Another cybernetics expert, Gerald Weinberg, offers an apt metaphor for requisite complexity in his book On the Design of Stable Systems. Imagine, Weinberg suggests, a guided missile aimed at an enemy aircraft. The missile does not itself have to be an aircraft, but its flight must be as complex as the aircraft's flight behavior. If the missile is not at least as fast as the target aircraft and as aerodynamically agile, it will surely miss.
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book