
Chapter 25: Building Blocks

Complexity, by M. Mitchell Waldrop
In retrospect, Holland says, the biggest influence Hebb's theory and his own Hebb-based neural network simulations had on him was to shape his thinking for the next three decades, rather than to pay off in any single result. At the time, though, the most direct consequence was that they got him out of IBM. The problem was that computer simulation, especially on the 701, had some well-documented limitations. The cell assemblies of a real nervous system involve some ten thousand neurons distributed over much of the brain, each neuron making perhaps ten thousand synapses. But the largest simulated neural network that Holland and his colleagues could run on the 701 had only about a thousand neurons, each with just sixteen connections, and achieving even that took every programming trick they knew. "The further I went," Holland said, "the more I felt that the gap between the experiments we could actually run and the results I wanted to see was just too great."

The only alternative was to analyze neural networks mathematically. "But it was just too hard to do that." Every attempt he made ran into a wall. The mathematics he had learned at MIT simply was not enough to handle a full Hebbian network, and he had taken more math courses than most physics graduates. "It seemed to me at the time that the key to understanding more about neural networks was to get a better grasp of the mathematical tools," he said. So in the fall of 1952, with IBM's blessing and a commitment to continue about a hundred hours of consulting work for IBM, he came to Ann Arbor to begin a Ph.D. in mathematics at the University of Michigan.

He was lucky once again. Of course, Michigan would not have been a bad choice under any circumstances, not only because its mathematics department was then one of the best in the country, but also because of another major consideration for Holland: the football team. "A Big Ten football game on the weekend, with 100,000 spectators pouring into town to watch the game; I still remember it fondly." But for Holland, the real stroke of luck was meeting Arthur Burks, an extraordinary philosopher, at the University of Michigan. Burks was a specialist in the pragmatic philosophy of Charles Peirce and had earned his Ph.D. in 1941. Unable to find a faculty position in his field at the time, he took a ten-week course at the Moore School of Electrical Engineering at the University of Pennsylvania the following year to retrain as a wartime engineer. It turned out to be an excellent choice. In 1943, shortly after he finished the course, the Moore School hired him to work on ENIAC, the first electronic computer, then a top-secret project. There he met a legendary figure, the Hungarian-born mathematician John von Neumann, who was a consultant on the project and often came down from the Institute for Advanced Study in Princeton to work on it. Under von Neumann's guidance, Burks also took part in developing ENIAC's successor, EDVAC, the first computer designed to store its programs electronically in memory. Indeed, the 1946 paper by von Neumann, Burks, and the mathematician Herman Goldstine, "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument," is still considered a cornerstone of modern computer science. In it the three authors cast the concept of programming in precise logical form, and described how a general-purpose computer could execute a program in an endless loop: retrieving instructions from its memory system, carrying them out, and storing the results back in memory. This "von Neumann architecture" remains the basis of nearly every computer today.

When Holland met him at the University of Michigan in the mid-1950s, Burks was a trim, courtly man who fit Holland's image of a missionary (to this day, Burks has never appeared tieless or coatless on the notoriously casual Michigan campus). But Burks was also a warm and friendly mentor, and he soon brought Holland into his computer logic group, a circle of theorists who worked on computer languages and proved theorems about switching networks. In short, they were trying to understand the new machines at the strictest and most fundamental level.

Burks also invited Holland to join a new Ph.D. program, one dedicated to exploring the implications of computation and information processing in the broadest possible sense, which Burks himself was helping to organize. The program, which soon became known as Communication Sciences, eventually grew into a full department, the Department of Computer and Communication Sciences, in 1967. But at the time, Burks felt he was simply carrying on the agenda of von Neumann, who died of cancer in 1957. "Von Neumann wanted to develop computing in two directions," he said. One was the design of general-purpose computers, which they had already invented. "The other was the theory of automata: computation understood in terms of both natural and artificial systems." Burks also felt that such a program would serve students like Holland, one of the outstanding ones, whose minds refused to go with the flow.

Holland liked what he heard. "It meant taking very tough courses in fields like biology, linguistics, and psychology, alongside standard courses like information theory. These courses were taught by professors from the field itself, so that students could connect everything they learned with their computer models. By taking them, students gained a really deep understanding of the fundamentals of each field: its difficulties and problems, why those problems are so hard to solve, what role computers can play in solving them, and so on. They didn't come away with a superficial understanding of things."

Holland liked the idea all the more because he had completely lost interest in mathematics for its own sake. The Michigan mathematics department, like virtually every mathematics department after World War II, was dominated by the French Bourbaki school, which preached an inhuman purity and abstraction. Even to state the motivation behind your theorems, or to illustrate them with concrete examples, was considered vulgar by Bourbaki standards. "The idea of the school was to show that mathematics could be done without any interpretation at all," says Holland. But that was not at all what he had come for. He wanted to use mathematics to understand the world.

So when Burks suggested that Holland transfer to the Communication Sciences program, he agreed without hesitation. He abandoned his nearly finished doctoral thesis in mathematics and started over. "It meant I could do my dissertation in an area very close to the research I wanted to do," he said. That area, roughly speaking, was neural networks. (Ironically, the dissertation he eventually settled on, "Cycles in Logical Nets," was an analysis of what goes on inside switching networks. In it he proved many of the same theorems that a young Berkeley medical student named Stuart Kauffman would struggle to prove on his own four years later.) Holland received his Ph.D. in 1959, the first doctorate awarded by the Communication Sciences program.

None of this changed Holland's focus on the larger questions that had brought him to Michigan in the first place. On the contrary, Burks's Communication Sciences program provided an environment where such questions could breed. What is emergence? What is thinking? How does thought proceed? What are the laws of thought? What exactly does it mean for a system to adapt? Holland jotted down his reflections on these issues and filed them systematically as Glasperlenspiel No. 1, Glasperlenspiel No. 2, and so on. Glas-what? Das Glasperlenspiel was Hermann Hesse's last novel, published in 1943 while the author was living in Switzerland. One day Holland had come across the book in a pile of library books in his room. In German the title literally means "The Glass Bead Game," though English editions have often appeared under the title Magister Ludi, Latin for "master of the game." Set in the far future, the novel describes a game originally played by musicians: a player would set out a theme on an abacus of glass beads and then, by flicking the beads back and forth, elaborate all the counterpoints and variations of that theme. Over time the game evolved from a simple melodic pastime into an immensely intricate instrument controlled by a powerful priesthood of intellectuals. "The best thing was that you could combine themes," Holland said. "A little bit of astrology, a little bit of Chinese history, a little math, and then try to develop that as a musical theme."

Of course, he said, Hesse never made it very clear exactly how this was done. But Holland didn't mind. The Glass Bead Game captured his heart more than anything he had ever seen or heard; it fascinated him as much as chess, science, computers, and the brain. Figuratively speaking, the game was precisely his lifelong pursuit: "I just wanted to capture the main themes of everything in the world, then knead them together and see what happens to them," he said. One particularly rich source of ideas stored in the Glasperlenspiel files was another book. One day, while browsing in the mathematics department library, Holland discovered R. A. Fisher's landmark 1930 masterwork, The Genetical Theory of Natural Selection.

Holland was fascinated from the first. "I'd loved reading about genes and evolution ever since middle school," he said. He admired the idea that each generation recombines the genes inherited from its parents; you can calculate how often traits like blue eyes or black hair will appear in the next generation. "I always thought, wow, this calculation is really neat. But after reading Fisher's book, I realized for the first time that in this field you could try things beyond ordinary algebra." Indeed, Fisher drew on far more sophisticated tools, from calculus to probability theory. His book was a genuinely rigorous mathematical analysis of how natural selection changes the distribution of genes. For biologists, such a book was a first, and it laid a cornerstone of the contemporary "neo-Darwinian" theory of evolution. Twenty-five years later, it still represented the state of the art in evolutionary dynamics. Holland read the book in one sitting. "I could apply the integrals, the differential equations, all the methods I'd learned in math classes, to this dynamical treatment of genetics. It was an eye-opening book. I knew as soon as I read it that I would never let go of the ideas in it. I knew I had to do something with them, and I kept turning them over in my head and taking notes." Yet for all his admiration of Fisher's mathematics, there was something about the way Fisher used it that puzzled Holland, and the more he thought about it, the more puzzled he became. First, Fisher's entire analysis of natural selection focused on the evolution of one gene at a time, as if each individual gene's contribution to the organism's survival were entirely independent of all the other genes. Roughly speaking, Fisher assumed that the action of genes is perfectly linear.
"I knew that was wrong," Holland said. There is no single gene for green eyes: the detailed structure of a green eye is the work of dozens, perhaps hundreds, of genes, and any one of them in isolation is insignificant. Each gene, Holland realized, has to function as part of a team. Any theory that leaves this fact out is missing a crucial part of the story of evolution. And the point was exactly what Hebb had emphasized in his work on the mind. As the most basic units of thought, Hebb's cell assemblies are a bit like genes: a tone of voice, a flash of light, a twitch of a muscle can become meaningful only by combining with one another into larger concepts and more complex behaviors. Second, Fisher kept talking about evolution reaching a stable equilibrium, and this puzzled Holland too. At such an equilibrium a species would have attained its optimum: the ideal size, the ideal sharpness of tooth, the ideal capacity to survive and reproduce. Fisher's argument was essentially the same as the economists' argument for economic equilibrium: once a species' condition is optimal, any change can only make it worse, so natural selection has nothing further to push against. "So much of Fisher's theory emphasized this idea: 'Well, the system will go to a Hardy-Weinberg equilibrium by the following process...' But that didn't sound like evolution to me." He went back and reread Darwin, and Hebb as well. No, Fisher's notion of equilibrium had nothing to do with evolution. Fisher seemed to be talking about the attainment of some primordial, eternal perfection. "In Darwin, things get broader and more diverse over time; Fisher's mathematics doesn't touch that. And though Hebb was talking about learning rather than evolution, the argument is the same: the mind grows richer, more subtle, and more astonishing as it absorbs experience from outside."
To Holland, both evolution and learning seemed remarkably like games. In each case, he reasoned, there is an agent playing against its environment, trying to win enough of what it needs to keep going. In evolution the payoff is survival, a chance for the agent to pass its genes on to the next generation. In learning the payoff is a reward of some kind: food, a pleasant sensation, emotional gratification. In both cases, the payoff (or the lack of it) is feedback that agents can use to improve their performance: if agents want to "adapt," they have to keep the strategies that pay off handsomely and forgo the others. Holland could not help thinking of Samuel's checkers program, which exploited precisely this kind of feedback: it could change tactics as it learned from experience and found out more about its opponent. And now Holland began to see how prescient Samuel's focus on games had been. The game analogy seemed to describe any adaptive system. In economics the payoff is money; in politics the payoff is votes; and so on. At some level, all these adaptive systems are fundamentally the same, which means in turn that all of them are fundamentally like checkers or chess: the space of possibilities is unimaginably vast. An agent can keep improving its play, and that improvement is adaptation. But finding the optimal, stable equilibrium of the game is, as in chess, simply out of the question; its possibilities cannot be exhausted. No wonder, then, that to Holland equilibrium didn't feel like evolution, or even like the war games he and two other fourteen-year-old boys had once played in a basement. Equilibrium means the end. For Holland, the essence of evolution is the journey, the wonder that unfolds endlessly. "It became clearer and clearer what I wanted to know, what I was curious about, what I would be excited to discover.
Equilibrium was not part of it." Holland set these ideas aside for a while as he finished his doctoral dissertation. But as soon as he graduated in 1959, by which time Burks had invited him to stay on as a postdoctoral researcher in the computer logic group, he set out to turn his ideas into a complete, rigorous theory of adaptation. "I believed that if I looked at genetic adaptation as the longest-term form of adaptation, and the nervous system as the shortest-term form, then the overall theoretical framework spanning the two would be the same," he said. To lay out these preliminary thoughts he even wrote a manifesto for the research program: a forty-eight-page technical report, issued in July 1961, entitled "An Informal Description of the Logical Theory of Adaptive Systems." The manifesto drew plenty of frowns in the computer logic group. Not hostility; rather, some felt that his general theory of adaptation sounded too outlandish. Couldn't Holland spend his time on more fruitful research? "But the question was, was it a weird idea?" Holland recalls, cheerfully admitting that in his colleagues' shoes he would have been skeptical too. "The research I was doing wasn't in any well-established, recognized discipline. It wasn't hardware, it wasn't software, and it certainly wasn't what counted as artificial intelligence at the time. So you couldn't judge it by any of the usual standards." Burks, however, needed no persuading. "I supported Holland," Burks said. "There were logicians who didn't think his work fell under 'computer logic.' They were more traditional. But I told them this was what we needed to do, and that it was important for this work to get funding on an equal footing with the other projects." Burks prevailed: as founder and leader of the program, his word carried considerable weight, and the skepticism about Holland's research gradually faded.
In 1964, on Burks's strong recommendation, Holland was granted a tenure-track faculty position. "For years, I relied heavily on Burks to shield me," he said. Indeed, the security that Burks's support gave him allowed Holland to pursue the consequences of adaptation theory. By 1962 he had dropped essentially all his other research projects and devoted himself to it full time. In particular, he was determined to crack the problem of selection acting on many genes at once: not only because Fisher's single-gene assumption was what had puzzled him most about the book, but also because polygenic selection turned out to be the key to his confusion about equilibrium. In fairness to Fisher, Holland says, the concept of equilibrium makes sense as long as each gene is considered individually. Suppose a species has a thousand genes, making it roughly as complex as an alga. To keep things simple, assume that each gene comes in just two versions: green or brown, wrinkled or smooth, and so on. How many trials would natural selection need to discover the combination of genes that makes the alga fittest? If you assume the genes are all independent of each other, Holland said, then only two trials per gene are needed to determine which version is fitter. A thousand genes would need two trials each, two thousand trials in all, which is not many. If that were the whole story, the alga would soon reach its fittest state, and the species really would settle into an evolutionary equilibrium.
But now suppose the genes are not independent of each other, and consider what happens to the thousand-gene alga. To reach its fittest state, natural selection would have to test every possible combination of genes, because every combination has its own distinct fitness. And when you count the total number of combinations, you don't multiply two by a thousand; you multiply two by itself a thousand times. That is two to the thousandth power, roughly ten to the three hundredth: a number that makes the count of possible moves in checkers seem insignificant. "There's no way evolution could try that many combinations," Holland said. "And no matter how advanced our computers get, they can't do it either." Indeed, even if every elementary particle in the observable universe were a supercomputer that had been calculating continuously since the Big Bang, the job would still be nowhere near done. And remember, this is only an alga. Humans and other mammals have roughly a hundred times as many genes, and most genes come in more than two versions. So once again the same situation arises: here is a system exploring its way through an effectively endless space of possibilities, with no realistic hope of finding the single best combination. What evolution achieves is continual improvement, not perfection. This, of course, was precisely the question Holland had determined to answer in 1962. But how? Understanding the evolution of many interacting genes was clearly more than a matter of substituting multivariate equations into Fisher's single-variable framework. What Holland wanted to know was how evolution manages to find useful combinations of genes in that endless space of possibilities without searching the whole of it.
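The gap between the two counting arguments above is easy to check directly. A quick back-of-the-envelope sketch in Python (just the arithmetic from the text, nothing more):

```python
# If 1,000 two-valued genes were independent, natural selection would need
# only two trials per gene to find the fitter version of each.
independent_trials = 2 * 1000
assert independent_trials == 2000

# If the genes interact, every combination has its own fitness, so the
# search space is 2 multiplied by itself a thousand times.
interacting_combinations = 2 ** 1000

# 2^1000 has 302 decimal digits, i.e. it is on the order of 10^301.
print(len(str(interacting_combinations)))  # 302
```

Two thousand trials versus a 302-digit number of combinations: that is the whole difference between Fisher's linear picture and the combinatorial reality Holland was worried about.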
By then, a similar "explosion of possibilities" was already familiar to mainstream AI researchers. At the Carnegie Institute of Technology in Pittsburgh (now Carnegie Mellon University), for example, Allen Newell and Herbert Simon had been conducting a landmark study of how humans solve problems. Newell and Simon had subjects work through puzzles and games of all kinds, including chess, and asked them to describe their thinking aloud as they went. In this way they found that human problem solving always involves a step-by-step mental search through a vast "problem space" of possibilities, with each step guided by a rule of thumb: "If this is the situation, then that step is worth taking." Newell and Simon showed that the problem-space framework captures the style of human reasoning brilliantly. Indeed, their concept of heuristic search has long been a golden rule in artificial intelligence, and their General Problem Solver remains one of the most influential programs in the history of the young field.
But Holland remained skeptical. Not because he thought there was anything wrong with Newell and Simon's notions of problem spaces and heuristic guidance; in fact, shortly after finishing his Ph.D. he invited the two of them to teach the principal artificial intelligence course at Michigan, and from then on he and Newell were friends and intellectual comrades. The trouble was that the Newell-Simon theory could not help him with biological evolution. In evolution there are no heuristics and no guidance at all. Each generation of a species explores the space of possibilities through mutation and the sexual recombination of genes; in short, by trial and error. Nor does a generation search the space of genetic combinations step by step; it searches in parallel, each member of the population carrying a slightly different combination of genes and probing a slightly different region of the space. Yet despite these differences, and despite working on a vastly longer time scale, evolution produces structures every bit as subtle and marvelous as the mind's. To Holland, this meant that the true, unifying laws of adaptation lay at some deeper level. But where were they hiding? At first he had only an intuition: that certain groups of genes work well together, forming coherent, self-reinforcing wholes. Groups of genes that tell a cell how to extract energy from glucose molecules, for example, or groups that govern cell division, or groups that instruct cells how to join with other cells to form a tissue. Holland could see counterparts in Hebb's theory of the brain, in which resonating assemblies of cells can form a unified concept such as "car," or a coordinated action such as raising an arm.
But the more Holland contemplated the idea of coherent, self-reinforcing groups of genes, the more subtle the whole business became. For one thing, analogous examples were everywhere: subroutines in computer programs, departments in bureaucracies, standard openings in a chess game. For another, such examples exist at every level of organization. If a group of genes is coherent and stable enough, it can usually serve as a building block for some larger group. Cells combine to form tissues, tissues combine to form organs, organs combine to form organisms, organisms combine to form ecosystems, and so on. Indeed, Holland thought, that is what "emergence" is all about: building blocks at one level combining into building blocks at a higher level. It seems to be one of the most fundamental regularities in the world, and it certainly appears in every complex adaptive system. But why? The hierarchical, building-block structure of things is as commonplace as the air we breathe; its very ubiquity blinds us to it. Yet on reflection it cries out for explanation: why is the world structured this way?
There are, in fact, plenty of explanations. Computer programmers divide problems into subroutines because small, simple problems are easier to solve than large, complicated ones; that is the old principle of divide and conquer. Behemoths like whales and redwoods are made of countless tiny cells because the cells came first: when giant plants and animals began to appear on the earth some 570 million years ago, it was obviously far easier for natural selection to build organisms out of existing single cells than to invent them from scratch. General Motors divides itself into countless divisions and subdivisions because its chief executive does not want all of the company's half a million employees reporting directly to him; there are not enough hours in the day. Indeed, in his studies of business organizations in the 1940s and 1950s, Simon pointed out that a well-designed hierarchy is the best way to get real work done without drowning anyone in meetings and memos. But as Holland pondered the question, it became increasingly clear to him that the deeper and more important reason is that a hierarchy of building blocks transforms a system's ability to learn, evolve, and adapt. Think of our cognitive building blocks, concepts like red, car, and road. Once a set of such blocks has been tweaked, refined, and adjusted through experience, it can be adapted and recombined into a great many new concepts, such as "a red Saab by the side of the road." That is surely a far more efficient route to innovation than creating every new concept from scratch, and it implies a whole new mechanism for adaptation in general: by reorganizing its building blocks, an adaptive system can make giant leaps, instead of always creeping step by step through an immense space of possibilities.
Holland's favorite example was the pre-computer technique police used to draw a suspect's portrait from eyewitness descriptions. Divide the face into, say, ten basic regions: hairline, forehead, eyes, nose, and so on down to the chin. Then have an artist draw different shapes for each region on separate strips of paper: ten kinds of noses, ten kinds of hairlines, and so on, a hundred strips of paper in all. With these, a portrait maker can assemble the appropriate parts according to the witnesses' descriptions and quickly produce a likeness of the suspect. Of course, not every conceivable face can be drawn this way. But an approximate likeness is almost always possible: by recombining the hundred strips of paper, the portrait maker can produce ten billion different faces, enough to pick similar features out of a vast space of possibilities. "So if I can discover the process by which building blocks are formed, the combinatorics starts working for me instead of against me. I can describe a great many complicated things with relatively few building blocks." That, he realized, was the key to the polygenic mystery: "All the discarding and trying in evolution isn't just to form one good animal; it's to find good building blocks that can be combined to make many good animals." The challenge was to show, precisely and rigorously, how all this happens. The first step, he decided, was a computer simulation, a "genetic algorithm" that could both illustrate the process and help him clarify the issues in his own mind. People in the Michigan computer science community grew used to Holland running up to them with a sheaf of fanfold printout. "Look at this!" he would say, excitedly pointing to a page densely packed with hexadecimal symbols. "Uh, CCB1095E. That's great, John." "No! No! Don't you see what this means?!"
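The arithmetic behind the portrait example is worth making explicit. A short sketch (the feature count and variant count are the ones in the text):

```python
# Ten facial regions, ten drawn variants each: only 100 strips of paper...
features = 10   # hairline, forehead, eyes, nose, ..., chin
variants = 10
strips_of_paper = features * variants
assert strips_of_paper == 100

# ...yet the number of distinct faces they can be combined into is 10^10:
distinct_faces = variants ** features
print(distinct_faces)  # 10000000000, i.e. ten billion
```

A linear investment (100 drawings) buys exponential coverage (ten billion faces), which is exactly the leverage Holland wanted from genetic building blocks.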
In fact, in the early 1960s quite a few people could not figure out what those data meant. Holland's skeptical colleagues were right about one thing, at least: the genetic algorithm he eventually produced was an oddball. Except in the most literal sense, it was hardly a computer program at all. In its internal mechanics it was more like a simulated ecosystem, in which a population of programs competes, mates, and reproduces generation after generation, forever evolving toward the solution of whatever problem the programmer has posed. This, to put it mildly, is not the usual way programs get written. So Holland found that the best way to explain to his colleagues why it made sense was to describe what he was doing in severely practical terms. Ordinarily, he would tell them, we think of a computer program as a sequence of instructions written in a special programming language such as FORTRAN or LISP. Indeed, the whole art of programming consists of writing exactly the right instructions in exactly the right order. That is obviously the most efficient way to proceed, if you already know what you want the computer to do. But suppose you don't. Say you want to find the maximum value of some complicated mathematical function. The function might represent profit, or the output of a factory, or anything else; the world is full of things people want to maximize. Computer scientists have devised sophisticated algorithms for exactly this purpose. But not even the best of them can guarantee the correct maximum in every situation. At some level, every such algorithm ends up relying on old-fashioned trial and error, or guesswork.
In that case, Holland told his colleagues, if you are going to have to rely on trial and error anyway, it might be worth trying nature's own version of it, otherwise known as natural selection. Instead of trying to write a program to perform a task you don't even know how to define, let the program arise through evolution. The genetic algorithm is just such a method. To see how it works, Holland says, forget about FORTRAN coding and go down into the heart of the machine, where a program is represented as a string of binary 1s and 0s: 11010011110001100100010100111011... In this form a computer program looks very much like a long chromosome, with each binary digit a single "gene." And once you think of the binary code biologically, you can evolve it biologically. First, Holland said, the computer generates a population of about a hundred digital chromosomes, with plenty of random variation among them. Think of each chromosome as corresponding to, say, one zebra in a herd. (This is a simplification. Because Holland was after the barest essentials of evolution, the genetic algorithm dispenses with details like hooves, stomachs, and brains, and models each individual as a single strand of pure DNA. Moreover, to keep the strings easy to manipulate, he limited the binary chromosomes to no more than a few dozen digits, so they were not really complete programs but fragments of programs; in his original experiments, in fact, each chromosome represented just a single variable. None of this changes the basic principles of the algorithm.)
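The chromosome-as-bit-string idea, together with the crossover operation described below, can be sketched in a few lines of Python. This is a toy illustration, not Holland's original code; the letter strings follow the ABCDEFG / abcdefg diagram from the text:

```python
import random

def single_point_crossover(parent_a, parent_b, point=None):
    """Cut two equal-length chromosomes at one position and swap the tails."""
    assert len(parent_a) == len(parent_b)
    if point is None:  # a real GA picks the cut point at random
        point = random.randrange(1, len(parent_a))
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

# Holland's schematic example, cutting after the fourth gene:
print(single_point_crossover("ABCDEFG", "abcdefg", point=4))
# ('ABCDefg', 'abcdEFG')
```

The same function works unchanged on binary strings like "1101001111...", which is how the genetic algorithm actually uses it.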
Second, test each of these chromosomes by running it as a computer program on the problem at hand, then evaluate how well it performs and give it a score. Biologically, this score measures the individual's "fitness," its probability of reproductive success. The higher an individual's fitness, the better its chance of being selected by the genetic algorithm to pass its genes on to the next generation.

Third, take the selected individuals as chromosomes fit enough to breed, mate them with one another to produce a new generation, and let the remaining chromosomes die off. In practice, of course, the genetic algorithm dispenses with the differences between the sexes, courtship rituals, the act of mating, the fusion of sperm and egg, and all the other complications of sexual reproduction; offspring are produced by a bare exchange of genetic material. Schematically, the genetic algorithm picks a pair of individuals with chromosomes ABCDEFG and abcdefg, cuts both chromosome strings at a random point, and exchanges the pieces to form the chromosomes of a pair of offspring: ABCDefg and abcdEFG. (Real chromosomes frequently undergo just this kind of exchange, or crossover, which is where Holland got the idea.)

Finally, the offspring produced by this exchange of genes go on to compete with one another, and, in the next generational cycle, with their parents as well. This step is the crucial one, for the genetic algorithm no less than for Darwinian natural selection. Without the exchange of genes between the sexes, each new generation would be exactly like its parents and the species would stagnate: inferior individuals would die off naturally, but the good ones would never improve. With it, the new generation resembles its parents yet differs from them, and is sometimes better. When that happens, the improved individuals get a grand opportunity to spread, markedly upgrading the species as a whole. Natural selection thus provides a mechanism for moving ever upward.

In real organisms, of course, a sizable share of the variation is due to mutation, typographical errors in the genetic code, and the genetic algorithm does allow occasional mutations by deliberately flipping a 1 to a 0 or a 0 to a 1. But for Holland the heart of the algorithm was the sexual exchange of genes: not only because it supplies the species with variation, but because it is a superb mechanism for discovering groups of genes that cooperate closely and produce above-average fitness. Building blocks, in other words.

Suppose, for example, that you apply the genetic algorithm to an optimization problem, the task of finding the maximum of some complicated function. And suppose the digital chromosomes in the algorithm's internal population earn high scores whenever they match certain patterns of binary genes, such as 11####11#10###10 or ##1001###11101## (Holland used # to mean "don't care": the digit in that position may be either 0 or 1). Such patterns, he said, function as building blocks. Perhaps they happen to mark out ranges of the variables in which the function takes unusually high values. Whatever the reason, chromosomes containing these building blocks will flourish and spread through the population, displacing those that lack them.

Moreover, since sexual reproduction lets the digital chromosomes reshuffle their genetic material every generation, the population will constantly generate new building blocks and new combinations of existing ones, so the genetic algorithm quickly produces chromosomes carrying doubly and triply advantageous blocks. And if such combinations confer a still greater advantage, the individuals carrying them will spread through the population faster than ever. The upshot is that the genetic algorithm rapidly homes in on the answer to the problem at hand, even though nobody knew beforehand where to look.

Holland remembers how excited he was when he first saw this in the early 1960s. His audiences never shared the excitement. In the still-new field of computer science, most researchers felt there was a great deal of basic work to be done on conventional programming, and from a purely practical standpoint the notion of evolving a program seemed beside the point. Holland didn't care. This was exactly the result he had been searching for ever since he resolved to go beyond Fisher's assumption of independent genes. Reproduction and crossover provide the mechanism by which genetic building blocks emerge and coevolve, and at the same time the mechanism by which the individuals of a species explore the space of possibilities efficiently. Indeed, by the mid-1960s Holland had proved the fundamental theorem of genetic algorithms, which he called the schema theorem: under reproduction, crossover, and mutation, almost any compact cluster of genes with above-average fitness will spread through the population exponentially. (By "schema," Holland meant any particular pattern of genes.) "When I finally got the schema theorem developed to the point where I was satisfied with it," he said, "I began to write the book."
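The whole cycle described above, scoring, fitness-proportionate selection, crossover, and occasional mutation, can be sketched as a minimal genetic algorithm in Python. This is a toy illustration under stated assumptions, not Holland's own program: the function being maximized is simply the count of 1-bits in the chromosome, standing in for any profit or output function, and every parameter value here is arbitrary.

```python
import random

random.seed(0)                 # fixed seed so the toy run is reproducible
GENES, POP, GENERATIONS = 20, 100, 60
MUTATION_RATE = 0.01           # chance of flipping each bit

def fitness(chrom):
    # Stand-in objective: number of 1-bits. Any function to be maximized
    # could go here instead (profit, factory output, ...).
    return chrom.count("1")

def select_parents(population):
    # Fitness-proportionate ("roulette wheel") selection: fitter
    # chromosomes are more likely to breed.
    weights = [fitness(c) for c in population]
    return random.choices(population, weights=weights, k=2)

def crossover(a, b):
    # Single random cut, then splice the head of one parent to the
    # tail of the other.
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(chrom):
    # Occasional "typographical errors": flip each bit with small probability.
    return "".join(("1" if g == "0" else "0")
                   if random.random() < MUTATION_RATE else g
                   for g in chrom)

# A population of random digital chromosomes...
population = ["".join(random.choice("01") for _ in range(GENES))
              for _ in range(POP)]
# ...evolved for a fixed number of generations.
for _ in range(GENERATIONS):
    population = [mutate(crossover(*select_parents(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best))   # climbs well above the random-start average of ~10
```

Note what the loop never does: it never enumerates the 2^20 possible chromosomes. Selection and crossover concentrate the population around high-scoring bit patterns, which is the schema-theorem behavior Holland describes.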