
Chapter 39 Artificial Life Papers

Langton was well aware of Farmer's good intentions; indeed, he had already taken the point. No one was more eager than Langton himself to finish his doctoral thesis as soon as possible. Since the artificial life workshop his research had made great progress. He had moved the cellular automaton model that originally ran on the computer at the University of Michigan onto a SUN workstation at Los Alamos, had run a large number of computer experiments probing the phase transition at the edge of chaos, and had even dug deep into the physics data and literature to learn how purely statistical methods could be used to analyze phase transitions.

But the year had gone by, and he still had not found time to actually write the thesis. Since the artificial life workshop he had spent most of his time dealing with its aftermath. George Cowan and David Pines had asked him to edit and publish the workshop proceedings on behalf of the Santa Fe Institute, as part of a series of books on the sciences of complexity that the institute was preparing to launch. But both Pines and Cowan insisted that the papers be rigorously refereed by scientists outside the institute, just as submissions to any other scientific publication would be. Santa Fe, they told Langton, could not afford to be cavalier about this. Artificial life had to be a science, not a video game.

Langton strongly agreed; he had always believed the same thing himself. But as a result he had to spend months editing the papers, which meant reading each of the forty-five submissions four times, sending each one out to several referees, relaying the referees' comments and requests for revision back to the original authors, and finding some way to coax every author into finishing on time. He then had to spend several more months writing the book's preface and introduction. "It took an immense amount of time," he sighs. On the other hand, the whole process was extremely educational. "It was like doing the reading for a PhD qualifying exam: you have to learn to discard the dross and keep the best, and it really made me an expert in this field." Langton felt he had produced something far more than a collection of papers. His doctoral thesis might still be in limbo, but the proceedings laid the groundwork for turning artificial life into a serious science. And in distilling the thoughts and insights of the workshop participants into the book's preface and its forty-seven-page introduction, Langton had written one of the clearest manifestos of what artificial life is about.

In this "manifesto," he wrote, artificial life is essentially the opposite of conventional biology.Artificial life does not use an analytical method—not a method of dissecting living species, organisms, organs, textures, cells, and organ cells—to understand life. Artificial life uses a comprehensive method to understand life.That is, combining simple components in an artificial system to produce life-like behavior.The tenet of artificial life is that the characteristics of life do not exist in a single substance, but in a combination of substances.It operates on the principle that the laws of life must be the laws of its dynamic form, independent of any particular carbide detail that happened to form on Earth four billion years ago.Artificial life will use new media such as computers, or perhaps robots, to explore the possibilities of other developments in the field of biology.Artificial life researchers will be able to achieve what space scientists have done by launching space probes to other planets.That is, observing what is happening on other planets from the height of the universe, so as to gain a new understanding of our own world. "Only when we can look at 'known life forms' in the sense of 'possible life forms' can we truly understand the nature of the beast."

Seeing life in terms of abstract organization was perhaps the most compelling idea to emerge from the Artificial Life Symposium, he said. It is no accident that the idea is closely tied to computer science; the two share deep common roots. Humans have always been fascinated by automata, machines that generate their own behavior. Since the time of the pharaohs, Egyptian craftsmen had built water clocks. In the first century A.D., Hero of Alexandria wrote a treatise on pneumatics in which he described how pressurized air could produce simple motions in various small devices shaped like animals and humans. More than a thousand years later, after Europe entered the great age of clockwork, craftsmen of the Middle Ages and the Renaissance designed increasingly sophisticated clocks that could strike the hours; some public clocks even carried whole casts of mechanical figures that acted out scenes as they told the time. During the industrial revolution, clockwork automation gave rise to more sophisticated process-control technology, with factory machinery operated by elaborate arrangements of rotating cams and interconnected mechanical linkages. Nineteenth-century designers combined improved techniques of interchangeable cams and rotating drums studded with movable pegs to develop controllers capable of producing many different sequences of motion on the same machine. With the development of computing machines in the early twentieth century, "the introduction of such programmable controllers became one of the roots from which the general-purpose computer grew."

At the same time, logicians were turning the notion of a procedure, a sequence of logical steps, into a formal concept, thereby laying the foundations for a general theory of computation. In the early twentieth century, Alonzo Church, Kurt Gödel, Alan Turing, and others showed that no matter what material a machine is made of, the essence of a mechanical process, the "thing" responsible for the machine's behavior, is not the machine itself at all but an abstract control structure: a program that can be expressed as a set of rules. It is this abstraction, Langton said, that lets you take a piece of software out of one computer and run it on another: the "mechanism" of the machine lies in the software, not in the hardware. This was exactly what Langton had realized at Massachusetts General Hospital eighteen years earlier. And once you accept that, it is a short step to seeing that the "life force" of an organism likewise lies in its software, in the organization of its molecules, not in the molecules themselves.

But Langton admitted that this leap is not as easy as it sounds, especially when you consider how fluid, spontaneous, and organic living things appear, and how controlled computers and other machines appear. At first glance it seems absurd even to talk about living systems in terms of machines. The answer lay in a further insight, a theme that came up again and again at the Artificial Life Symposium: living systems are indeed like machines, but they are machines with an utterly different form of organization from the ones we are used to. Instead of being designed from the top down, the way an engineer would do it, living systems always seem to emerge from the bottom up, from huge populations of much simpler systems. A cell is made of many proteins, DNA, and other biomolecules; a brain is made of many neurons; an embryo is made of many interacting cells; an ant colony is made of many ants. And in much the same sense, an economy is made of many firms and individuals.

This, of course, was precisely the concept that Holland and his colleagues at the Santa Fe Institute were trying to capture in their general theory of complex adaptive systems. The difference is that Holland saw this population structure mainly as a supply of building blocks whose endless recombination makes evolution remarkably efficient, whereas Langton saw it mainly as a source of rich, lifelike dynamical behavior. Langton summed it up in italics: "The most surprising lesson we have learned from simulating complex physical systems on computers is that complex behavior need not arise from complex underlying structures. Indeed, extremely interesting complex behavior can emerge from collections of extremely simple elements."

This was Langton's heartfelt conviction, and it plainly reflected his own experience in discovering self-reproducing cellular automata. It also echoed one of the most vivid demonstrations at the Artificial Life Symposium: Craig Reynolds's flock of "boids." Reynolds's computer model used only three simple rules, each confined to the interactions between a boid and its near neighbors. He wrote no comprehensive, top-down rules telling the flock as a whole how to behave, and no rules telling the boids to follow a leader. Yet those purely local rules gave the flock an organic adaptability to whatever situation it met. The rules always tended to pull the boids together, somewhat the way Adam Smith's invisible hand always tends to pull supply and demand into balance. But just as in economics, the tendency to flock is only a tendency; what actually happens is that every boid responds to the behavior of its immediate neighbors. So when a flock of boids runs into an obstacle such as a pillar, each boid simply goes its own way, and the whole flock splits into two groups without any difficulty, flowing around the obstacle on both sides.
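
Reynolds's three local rules are usually described as separation (don't crowd your neighbors), alignment (match their average heading), and cohesion (drift toward their center). The Python sketch below is only a minimal illustration of that scheme, with assumed radii, weights, and update step; it is not Reynolds's original program.

```python
# A minimal, illustrative sketch of Reynolds-style flocking ("boids").
# The three local rules (separation, alignment, cohesion) follow the standard
# description of the model; the radii, weights, and time step are assumptions.
import random

N_BOIDS = 30
NEIGHBOR_RADIUS = 10.0   # how far a boid "sees" its neighbors (assumed value)
SEPARATION_RADIUS = 2.0  # minimum comfortable distance (assumed value)
DT = 0.1

def step(positions, velocities):
    """Advance every boid by one time step using only local rules."""
    new_vel = []
    for i, (pi, vi) in enumerate(zip(positions, velocities)):
        sep = [0.0, 0.0]    # rule 1: steer away from boids that are too close
        align = [0.0, 0.0]  # rule 2: match the average heading of neighbors
        coh = [0.0, 0.0]    # rule 3: steer toward the center of neighbors
        neighbors = 0
        for j, (pj, vj) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < NEIGHBOR_RADIUS:
                neighbors += 1
                align[0] += vj[0]; align[1] += vj[1]
                coh[0] += dx;      coh[1] += dy
                if 0 < dist < SEPARATION_RADIUS:
                    sep[0] -= dx / dist; sep[1] -= dy / dist
        if neighbors:
            align = [a / neighbors - v for a, v in zip(align, vi)]
            coh = [c / neighbors for c in coh]
        # Weighted sum of the three local steering forces (weights are assumed).
        ax = 1.5 * sep[0] + 0.05 * align[0] + 0.01 * coh[0]
        ay = 1.5 * sep[1] + 0.05 * align[1] + 0.01 * coh[1]
        new_vel.append((vi[0] + ax * DT, vi[1] + ay * DT))
    new_pos = [(p[0] + v[0] * DT, p[1] + v[1] * DT)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

# No global rule ever mentions "the flock"; flocking emerges from the updates.
positions = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(N_BOIDS)]
velocities = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N_BOIDS)]
for _ in range(100):
    positions, velocities = step(positions, velocities)
```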

Langton said that if you tried to do the same thing with a top-down set of rules, the system would become unbelievably cumbersome to run: the rules would have to tell each boid exactly what to do in every conceivable situation it might encounter. He had indeed seen such systems, and they always looked silly and unnatural, more like a cartoon than like life. Moreover, since no top-down system can anticipate every situation, such systems inevitably lose their bearings when they meet a complicated case; rigid and brittle, they tend to grind to a halt in a dither of indecision.

The plants illustrated by Aristid Lindenmayer of Utrecht University and Przemyslaw Prusinkiewicz of the University of Regina were another product of this kind of natural, bottom-up, population-thinking model. These graphical plants were not so much drawn on the computer screen as "grown" there. Each starts out as a single stem, and a handful of simple rules then tell each stem how to produce leaves, flowers, and further branches. The rules contain no information about what the final overall shape of the plant should be; they only simulate how the plant's many cells differentiate and interact with one another as it grows. Yet the rules produce shrubs, trees, and flowers that look strikingly realistic. In fact, carefully chosen rules can produce computer plants that closely resemble known species. (And even tiny changes to the rules can lead to completely different plants, which illustrates how easily, from evolution's point of view, small changes in development can produce large changes in appearance.)
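
Growth rules of this kind are usually formalized as Lindenmayer systems, or L-systems: strings of symbols rewritten in parallel by purely local rules, with no global blueprint anywhere. The short Python sketch below illustrates the idea; the axiom, the rules, and the drawing interpretation are generic illustrative choices, not any particular published plant model.

```python
# A minimal L-system sketch: purely local rewriting rules, no global blueprint.
# The axiom and rules below are illustrative assumptions, not a specific
# plant model published by Lindenmayer or Prusinkiewicz.
AXIOM = "X"
RULES = {
    "X": "F[+X][-X]FX",  # a growing tip sprouts two side branches and keeps growing
    "F": "FF",           # existing stem segments elongate
}

def grow(axiom, rules, generations):
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Read the result as drawing instructions: F = draw a stem segment,
# + and - = turn left or right, [ and ] = start and end a branch.
print(grow(AXIOM, RULES, 3))
```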

Langton said that the artificial life workshop had returned again and again to this theme: the way to get lifelike behavior is to simulate populations of simple units, not one big complex unit; to use local control, not global control; to let behavior emerge from the bottom up rather than prescribing it from the top down; and, while running such experiments, to focus on the behavior as it unfolds rather than on some final result. As Holland likes to point out, living systems never settle down.

Indeed, Langton said, when you push this bottom-up idea to its logical conclusion, it begins to look like a new, scientifically respectable version of vitalism, the ancient notion that life involves some kind of energy, force, or spirit that transcends mere matter. And in fact life does, in a sense, transcend mere matter: not because living systems are driven by some vital essence beyond physics and chemistry, but because a population of simple things following simple rules of interaction can produce behavior that is forever surprising. Life may indeed be a kind of biochemical machine, he said, but to animate such a machine "is not a matter of injecting life into it, but of organizing its many parts and their interactions so that the whole has 'life.'"

Langton concluded that the third great insight distilled from the presentations at the Artificial Life Symposium was this: if life is a matter of organization rather than of molecules, then life may be not merely like a computation; it may literally be a kind of computation.

To understand why, Langton said, one has to start with conventional carbon-based biology. Biologists have been pointing out for more than a century that one of the most distinctive features of a living organism is its genotype, the genetic blueprint encoded in its DNA; the organism's structure is what that blueprint creates. In reality, of course, the actual operation of a living cell is extremely complex. Each gene is the blueprint for a single protein molecule, and thousands of protein molecules interact within the cell in endlessly varied ways. In fact, you can think of the genotype as a collection of many little computer programs running in parallel, one program per gene. When they are activated, these programs compete and cooperate with one another in an intricate logical interplay. Collectively, the interacting programs carry out an overall computation whose result is the phenotype: the structure that emerges during the development of the organism.

The same idea carries over when you move from carbon-based biology to the more general biology of artificial life. To make the point, Langton coined the term generalized genotype, or GTYPE, for any collection of low-level rules, and the term generalized phenotype, or PTYPE, for the structure and/or behavior that results when those rules are activated in a particular environment. In a conventional computer program, for example, the GTYPE is obviously the code itself, and the PTYPE is the program's response to whatever data the operator feeds in. In Langton's own self-reproducing cellular automata, the GTYPE is the set of rules telling each cell how to interact with its neighbors, and the PTYPE is the overall pattern of behavior those rules produce. In Reynolds's boids program, the GTYPE is the three rules guiding each boid's flight, and the PTYPE is the flocking behavior of the boids as a group.

More broadly, the GTYPE concept is essentially the same as Holland's notion of an "internal model"; the only real difference is that Langton's term emphasizes its role as a computer program more than Holland's does. Not surprisingly, the concept applies perfectly well to Holland's classifier systems, where the GTYPE of a given system is simply its set of classifier rules. It applies to the ecosystem models, where an organism's GTYPE consists of its attack and defense chromosomes. It applies to Arthur's model of the economy under glass, where an artificial agent's GTYPE is the set of rules of economic behavior it has learned through hard experience. In principle it applies to any complex adaptive system whose agents interact according to a set of rules. And in every case, the unfolding of a GTYPE into a PTYPE during development is, in effect, a computation.

The beauty of this idea is that once you see the relationship between life and computation, a great deal of theory follows from it. Why, for example, is life so full of the unexpected? Because, in general, there is no way even in principle to predict from a given GTYPE what behavior its PTYPE will produce. This is a consequence of undecidability, one of the most profound results of computer science: unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see. No general-purpose procedure can scan a program's code and its input data and give you the answer any faster. Old hands like to say that a computer only does what its programmer tells it to do. That is perfectly true, but it is beside the point: any piece of code complex enough to be interesting will always produce behavior that surprises even its own programmer. That is why any decent software package has to be tested and debugged over and over before it goes to market, and why users are always so quick to discover that it was never debugged to perfection. Most important for artificial life, the GTYPE concept and undecidability explain how a living system can be a biochemical machine completely under the control of a program, a GTYPE, and still produce surprising, spontaneous PTYPE behavior.

Conversely, other profound theorems of computer science show that you cannot run the idea backward: you cannot start from some desired behavior, a particular PTYPE, and deduce a GTYPE that will produce it. In practice, of course, these theorems do not stop programmers from using well-tested algorithms to solve precisely specified problems in well-defined situations. But in the ill-defined, ever-changing world that living systems inhabit, there seems to be no way to proceed except by trial and error, a process otherwise known as Darwinian natural selection. Langton pointed out that the process can seem appallingly brutal and slow. Nature, in effect, programs by building lots of machines with randomly varied GTYPEs and then scrapping the ones that don't work. Yet this messy, time-consuming process may be the best that nature can do. In the same way, Holland's genetic algorithm may be the only realistic approach to programming computers for messy, ill-defined problems. "This may well be the only efficient, general procedure for finding GTYPEs with specific PTYPEs," Langton wrote.
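
As a rough illustration of the kind of search Langton had in mind, the sketch below evolves bit-string "GTYPEs" toward a desired "PTYPE" by random recombination, mutation, and selection, in the spirit of Holland's genetic algorithm. The toy fitness function and every parameter are illustrative assumptions, not Holland's actual implementation.

```python
# A minimal genetic-algorithm sketch: random variation of GTYPEs (bit strings)
# plus selection on the PTYPE behavior they produce. The fitness function and
# all parameters here are toy assumptions for illustration only.
import random

GENES = 20            # length of each bit-string "GTYPE"
POP_SIZE = 40
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(gtype):
    """Toy PTYPE evaluation: reward strings with many 1s (a stand-in for
    whatever behavior we actually want but cannot derive analytically)."""
    return sum(gtype)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: fitter GTYPEs are more likely to become parents.
        weights = [fitness(g) + 1 for g in pop]
        parents = random.choices(pop, weights=weights, k=POP_SIZE)
        # Crossover: recombine pairs of parents at a random cut point.
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, GENES)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Mutation: occasional random bit flips.
        pop = [[1 - bit if random.random() < MUTATION_RATE else bit for bit in g]
               for g in children]
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", GENES)
```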

In writing his introduction, Langton was careful not to claim that the entities studied by artificial life researchers are "really" alive. Obviously they are not. The boids in the computer, the graphical plants, the self-reproducing cellular automata, all of these are just simulations, highly simplified models of life that would not exist without the computer. Even so, since the whole point of artificial life research is to explore the most fundamental laws of life, the question is unavoidable: could human beings eventually create real artificial life? Langton found the question hard to answer, not least because no one knows what "real" artificial life would be. Some kind of genetically engineered superorganism? A robot capable of reproducing itself? An unusually sophisticated computer virus? What exactly is life, and how could you know for certain whether you had created it?

Not surprisingly, the issue was debated loudly and fervently at the Artificial Life Symposium, in the hallways and over dinner tables alike. Computer viruses were a hot topic, and many attendees found it unsettling how close they come to crossing the line. A nasty computer virus meets almost every criterion of life. It reproduces itself, copying itself to other computers or onto floppy disks and spreading further from there. It stores a representation of itself in computer code, much as an organism does in DNA, and it exploits the functions of its host, the computer, much as a real virus exploits the metabolic machinery of an infected cell. It responds to stimuli in its environment inside the computer, and it can even mutate and evolve, usually with the help of some programmer's twisted sense of humor. Computer viruses really do live in the space of computers and computer networks. They cannot exist independently outside that world, but that alone does not rule them out as living things. If life really is just a matter of organization, Langton argued, then an entity organized in the right way is alive, no matter what it happens to be made of.

But whatever the status of computer viruses, Langton was convinced that "real" artificial life will be created one day, and soon, whether in biochemistry, in robotics, or in advanced software. And whether or not Langton and his colleagues work on it, it will be created for commercial and/or military purposes, which, he argued, makes artificial life research all the more important: if we really are heading into the brave new world of artificial life, then at the very least we should enter it with our eyes open. "By the middle of this century," Langton wrote, "mankind had acquired the capacity to destroy all life on Earth. By the middle of the next century, it will have the capacity to create it." Of these two capacities, it is hard to say which carries the greater responsibility, not only because particular living entities will be ours to answer for, but because the process of evolution itself will increasingly come under our control.

This prospect led him to think that everyone involved in artificial life research should read Frankenstein: the novel, unlike the movies, makes it clear that Dr. Frankenstein's sin was refusing to take any responsibility for the creature he had made. We must not let that happen to us, he argued. The consequences of the changes we are now setting in motion are unpredictable, even in principle, but we must take responsibility for them all the same. And that, in turn, means that the implications of artificial life have to be debated openly, with the public involved.

Moreover, if you could create life, you would suddenly find yourself caught up in questions far larger than the technical one of whether a computer virus is alive. Very quickly, you would find yourself engaged in a kind of practical theology. Once you have created a living thing, do you have the right to demand that it worship you and devote itself to you? Do you have the right to play God to it? Do you have the right to destroy it if it does not obey? These are hard questions, Langton said. "Whether or not we already have the right answers to them, they must be asked honestly and openly. Artificial life is a challenge not only to our science and our technology but to our most fundamental social, moral, philosophical, and religious beliefs. Like the Copernican theory of the solar system, it will force us to re-examine our place in the universe and our role in nature."
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book