Out of Control: The New Biology of Machines, Society, and the Economy

15.3 Mindless Acts Performed in Parallel

It's hard to guess John Holland's age from his appearance. He worked on the world's first computers and now teaches at the University of Michigan. He was the first to develop a mathematical method for describing evolution's optimizing ability that could readily be programmed into a computer. Holland called his inventions genetic algorithms because their mathematical form loosely parallels the workings of genetic information. Unlike Tom Ray, Holland starts with sex. His genetic algorithm takes two strings of DNA-like computer code that do a reasonably good job of solving a problem and recombines them at random, in a sexual swap, to see whether the offspring code might do a little better. In designing his system, Holland, like Ray, had to overcome the same stubborn problem: a randomly generated computer program tends to be neither good nor bad, but simply broken. In a statistical sense, random mutations to working code are doomed to failure after failure.
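A minimal sketch of that recombination ("mating") step in Python may make it concrete. The fixed-length bitstring encoding and the single random cut point are illustrative assumptions, not Holland's exact formulation:

```python
import random

def crossover(parent_a, parent_b):
    """Mate two equal-length bitstrings by swapping their tails
    at a random cut point, yielding two offspring."""
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])
```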

As early as the beginning of the 1960s, theoretical biologists recognized that mating produces a higher proportion of viable individuals than mutation does, so computer evolution built upon mating is more stable and more likely to survive. But sexual mating alone yields very limited results. When Holland invented the genetic algorithm in the mid-1960s, he made mating its main engine, with mutation as a minor but essential accomplice working behind the scenes. Combining the two gives the system both flexibility and breadth. Like other systems thinkers, Holland saw parallels between the work of nature and the tasks of computers. "Living organisms are consummate problem solvers," Holland wrote in a summary of his work. "They exhibit a versatility that puts the best computer programs to shame." This assertion is especially embarrassing to computer scientists, who may spend years of hard thinking on an algorithm, while organisms acquire their abilities through the aimless grind of evolution and natural selection.
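Mutation's supporting role can be sketched just as briefly; the per-bit rate of 0.01 is an arbitrary illustrative choice, not a canonical value:

```python
import random

def mutate(genome, rate=0.01):
    """Flip each bit independently with a small probability,
    injecting the variation that crossover alone cannot supply."""
    flipped = {'0': '1', '1': '0'}
    return ''.join(flipped[bit] if random.random() < rate else bit
                   for bit in genome)
```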

The evolutionary approach, Holland writes, "removes one of the greatest hurdles in software design: specifying in advance all the features of a problem." Where a problem has countless possible answers, evolution is the way to search them. Just as evolution needs large populations to be effective, a genetic algorithm breeds a large population of code strings, which process data and mutate simultaneously. A genetic algorithm is really a swarm of slightly different strategies trying to climb different summits of a rugged landscape at the same time. Because so many strings work in parallel, the swarm visits many regions of the terrain at once, ensuring it does not miss the real peak.
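The swarm-of-climbers picture can be made concrete with a toy fitness landscape. The particular function, the 16-bit genomes, and the decoding of bitstrings to points on the terrain are all invented for illustration:

```python
import math
import random

def decode(genome):
    """Map a bitstring to a point in [0, 1) on the landscape."""
    return int(genome, 2) / 2 ** len(genome)

def fitness(x):
    """A rugged terrain with several peaks of different heights
    (shifted up so scores stay non-negative)."""
    return math.sin(5 * x) + math.sin(13 * x) + 2

# A population of random genomes probes many regions of the terrain
# at once; no single climber has to find the tallest peak alone.
population = [''.join(random.choice('01') for _ in range(16))
              for _ in range(200)]
scores = [fitness(decode(g)) for g in population]
```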

Implicit parallelism is the magic by which an evolutionary process guarantees that it climbs not just any peak but the highest peak. How do you locate the global optimum? By sampling the entire landscape at once. How do you optimally balance thousands of conflicting variables in a complex system? By trying thousands of combinations at once. How do you breed an organism that can survive harsh conditions? By launching a thousand slightly different individuals at once. In Holland's algorithm, the codes that reach "high ground" mate with one another; in other words, the mating rate rises in regions of high fitness. This focuses the system's attention on the most promising areas while starving the unpromising ones of the computing cycles they would otherwise consume. Parallelism thus casts a wide net that lets nothing slip through, while reducing the number of codes needed to find the peak.
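One standard way to realize that focusing of attention is fitness-proportionate, or "roulette wheel," selection, a textbook technique rather than anything specific to Holland's original paper. A sketch, continuing the example above:

```python
import random

def select_parent(population, scores):
    """Pick an individual with probability proportional to fitness,
    so mating concentrates where the terrain is high."""
    pick = random.uniform(0, sum(scores))
    running = 0.0
    for genome, score in zip(population, scores):
        running += score
        if running >= pick:
            return genome
    return population[-1]  # guard against floating-point round-off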

Parallelism is one way around the inherent stupidity and blindness of random mutation. It is the great irony of life: blind acts repeated one after another can only sink deeper into absurdity, while blind acts performed in parallel by a population of individuals can, under the right conditions, produce everything we find interesting. John Holland invented genetic algorithms while studying mechanisms of adaptation in the 1960s. Until the late 1980s, his work attracted almost no attention beyond a dozen or so maverick computer science graduate students. A few other researchers, such as the engineers Lawrence Fogel and Hans Bremermann, independently explored the mechanical evolution of populations in the 1960s; the computer scientist Michael Conrad likewise moved in the 1970s from the study of adaptation to building computer models of population evolution. All of them met the same indifference. In short, the work remained little known in computer science, and even less known in biology.

Before Holland's book on genetic algorithms and evolution, Adaptation in Natural and Artificial Systems, appeared in 1975, only two or three students had written theses on genetic algorithms. The book sold only 2,500 copies before it was reissued in 1992. From 1972 to 1982, no more than two dozen articles on genetic algorithms appeared in the entire scientific literature, let alone anything like a community of devotees of computer evolution. The lack of interest from biologists is understandable (though not excusable): they believed the natural world was far too complex for the computers of the day to capture honestly. The indifference of computer science is more surprising. While researching this book, I often puzzled over why a method as important as computational evolution was ignored. The source of the blindness, I now believe, lies in the seemingly haphazard parallelism inherent in evolution, and its fundamental conflict with the reigning computing dogma of the era: the von Neumann serial program.

Humankind's first electronic computer, the ENIAC (Electronic Numerical Integrator and Computer), was built in 1945 to solve ballistics calculations for the US military. The ENIAC was a behemoth of 18,000 vacuum tubes, 70,000 resistors, and 10,000 capacitors. Instructions had to be set on 6,000 manual switches before a program could run, and the calculation of the values then proceeded simultaneously, in parallel, which made programming it a nightmare. The genius of von Neumann fundamentally changed this clumsy arrangement. ENIAC's successor, the EDVAC (Electronic Discrete Variable Automatic Computer), was the first general-purpose computer capable of running a stored program. Von Neumann, who published his first academic paper at the age of 24 (in 1927) on systems of mathematical logic and game theory, had been thinking about the logic of systems ever since. Working with the EDVAC group, he devised a way to control the sprawling operations required to program a computer to solve many kinds of problems. He proposed dividing a problem into discrete logical steps, much like the steps of long division, and temporarily storing the intermediate values of the solution in the computer, so that those intermediate values could serve as inputs to the next part of the problem. By running calculations through such loops (now called subroutines), and by storing the program's logic in the computer where it could interact with the answers, von Neumann could reduce any problem to a sequence of steps a human mind could follow. He also invented a notation for describing these step-by-step circuits: the now-familiar flowchart. Von Neumann's architecture for serial computing, executing instructions one at a time, was astonishingly general and well suited to human programming. In 1946 von Neumann published an outline of this architecture, and every commercial computer since has adopted it, without exception.
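The serial style von Neumann introduced can be caricatured in a few lines: one instruction stream, with each step depositing an intermediate value in memory for the next step to consume. The arithmetic problem here is an arbitrary stand-in, not von Neumann's own example:

```python
memory = {}  # the machine's store for intermediate values

def step_multiply(a, b):
    memory['t1'] = a * b                 # intermediate value, parked in memory

def step_add(c):
    memory['result'] = memory['t1'] + c  # the next step consumes it

# (3 * 4) + 5, executed one discrete step at a time, never in parallel
step_multiply(3, 4)
step_add(5)
print(memory['result'])  # -> 17
```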

In 1949, John Holland worked on Project Whirlwind, a follow-on to the EDVAC. In 1950 he joined the logic design team for IBM's "Defense Calculator," which evolved into the IBM 701, the company's first commercial computer. Computers then were the size of rooms and consumed staggering amounts of power. By the mid-1950s, Holland had joined a legendary circle of deep thinkers who were beginning to discuss the possibilities of artificial intelligence. While academic titans such as Herbert Simon and Allen Newell regarded learning as a noble, high-level achievement, Holland saw it as adaptation, something lowly beneath a glamorous exterior. If we could understand adaptation, Holland argued, especially evolutionary adaptation, we might come to understand, and perhaps even imitate, conscious learning. Others may have been aware of the parallel between evolution and learning, but in a fast-moving field, evolution received little attention.

In 1953, browsing aimlessly in the University of Michigan's mathematics library, Holland stumbled upon a volume of R. A. Fisher's 1930 work, The Genetical Theory of Natural Selection, and was immediately inspired. Darwin had led the shift from the study of individuals to the study of populations, but it was Fisher who turned population thinking into a quantitative science. Fisher treated a population of butterflies evolving through time as a whole system transmitting differential information through the population in parallel, and he derived the equations governing that diffusion of information. By joining nature's most potent force, evolution, to humanity's most potent tool, mathematics, Fisher single-handedly opened a new province of human knowledge. "That was the first time I realized you could do significant mathematics on evolution," Holland said, recalling the fortunate encounter. "The idea appealed to me tremendously." Holland was so taken with the treatment of evolution as a form of mathematics that he went to desperate lengths (this was before photocopiers) to obtain the out-of-print text, pleading in vain with the library to sell him the book. Holland absorbed Fisher's insight and made it his own: a swarm of coprocessors dancing like butterflies over the fields of a computer's memory.
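The flavor of that mathematics can be conveyed by a textbook single-locus selection recursion, a standard result in the tradition Fisher founded rather than necessarily the equation Holland read. If a variant at frequency \(p\) in the population enjoys a selective advantage \(s\) over its rivals, its frequency changes each generation by

```latex
\Delta p \;=\; \frac{s \, p \, (1 - p)}{1 + s \, p}
```

so the spread of a favorable variant through a population is not merely a story but a computable quantity.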

Artificial learning, Holland argued, is at bottom a special case of adaptation. He was fairly sure adaptation could be implemented on a computer. Having grasped Fisher's insight that evolution is a matter of probabilities, Holland set out to code evolution into a machine. At the start of the attempt he faced a dilemma: evolution is a parallel processor, but every electronic computer available to him was a serial von Neumann machine. Desperate to turn the computer into a platform for evolution, Holland made the only logical move: he designed a massively parallel computer to run his experiments on. In parallel computing, many instructions are executed simultaneously rather than one at a time. In 1959 he presented a paper describing, as its title summarized it, "a universal computer capable of executing an arbitrary number of subprograms simultaneously," a contraption that came to be known as a "Holland machine." It took almost thirty years for such a computer to finally be built.

In the meantime, Holland and the other computational evolutionists had to nurse their evolutions on serial computers. They pulled out every trick to coax a slow parallel process out of a fast serial processor. The simulations worked well enough to reveal the power of truly parallel processing.
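Stitching the earlier sketches together shows how the parallelism gets faked on serial hardware: the loop visits individuals one at a time, yet the bookkeeping treats the whole population as if it were evaluated and bred simultaneously. This assumes the illustrative `crossover`, `mutate`, `decode`, `fitness`, `select_parent`, and `population` defined above:

```python
def next_generation(population):
    """One generation: logically parallel, physically a serial loop."""
    scores = [fitness(decode(g)) for g in population]
    offspring = []
    while len(offspring) < len(population):
        mother = select_parent(population, scores)
        father = select_parent(population, scores)
        for child in crossover(mother, father):
            offspring.append(mutate(child))
    return offspring[:len(population)]

# Run the swarm for fifty generations and report the best climber.
for generation in range(50):
    population = next_generation(population)
best = max(population, key=lambda g: fitness(decode(g)))
print(decode(best), fitness(decode(best)))
```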
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book