
Chapter 41: Emergence

Complexity · M. Mitchell Waldrop · 3,118 words · 2018-03-20
First, Farmer says, such a law would make it possible to explain emergence rigorously: what do we mean when we say the whole is greater than the sum of its parts? "It's not magic, but it feels like magic to our poor little brains." Simulated "boids" (and real birds) flock by responding to the behavior of their neighbors; organisms cooperate and compete in the dance of coevolution, forming delicately coordinated ecosystems; atoms seek their minimum-energy state by forming chemical bonds with one another, producing that well-known emergent structure, the molecule; human beings buy, sell, and trade to satisfy their material needs, creating that well-known emergent structure, the market; human beings also satisfy hard-to-define desires through their interactions, forming families, religions, and cultures. Groups of agents, by continually seeking mutual accommodation and self-improvement, transcend themselves and become something greater. The trick is to work out how all this happens without sinking into the quagmire of dry philosophical speculation or New Age mysticism.

This, says Farmer, is the beauty of computer simulation in general and artificial life in particular: on a desktop machine, with a simple computer model, you can experiment with your ideas and watch how they actually play out. Through computer experiments you can pin down vague ideas with ever-increasing precision, and you can try to distill the essence of how emergence actually works in nature. Moreover, a number of candidate computer models were available at the time, and Farmer was particularly drawn to connectionism: the idea of representing a population of interacting agents as a network of "nodes" linked by "connections." On this point he agreed with many others. Over the previous decade or so, connectionist models had suddenly popped up everywhere. The first example was the neural network movement, in which researchers used networks of artificial neurons to simulate things like perception and memory retrieval, and in doing so launched an assault on the symbol-processing approach of mainstream AI research. Not far behind was the camp established at the Santa Fe Institute, including Holland's classifier systems, Kauffman's genetic networks, and the immune system model that Farmer built in the mid-1980s with Packard and Los Alamos's Alan Perelson to study the origin of life. Farmer admits that some of these models don't look very connectionist, and many people are surprised when they first hear him describe them that way. But that is only because the models were built at different times, by different people, to solve different problems, so the languages used to describe them differ. "When you strip everything down, they all look the same. You could actually build just one simulator and then run any of the models on it," he said.

Of course, in neural networks the node-connection structure is obvious: nodes correspond to neurons, and connections correspond to the synapses that join them. If a programmer wants the network to recognize an image, for example, he or she can simulate light falling on the retina by activating certain input nodes, then let that activation propagate through the rest of the network. The effect is somewhat like delivering goods to the ports of a few coastal cities, then letting countless trucks carry them to inland cities over the highways. If the connections are well arranged, the network quickly settles into a self-consistent pattern of activation, which corresponds to recognizing the scene: "That's a cat!" And the network will behave this way even if the input data are very noisy or fragmentary, or, for that matter, even if some of the nodes burn out.
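The spreading activation described above can be illustrated with a minimal sketch (my own, not from the book): nodes are just numbers, and all the interesting behavior comes from the weighted connections.

```python
# Minimal sketch of activation spreading through one layer of a network.
# Each output node switches on when its weighted input crosses a threshold.

def step(activations, weights, threshold=0.5):
    """Propagate activation through a weight matrix (one row per output node)."""
    outputs = []
    for out_weights in weights:
        total = sum(w * a for w, a in zip(out_weights, activations))
        outputs.append(1 if total >= threshold else 0)
    return outputs

# "Retina": three input nodes, two of which are lit by the stimulus.
inputs = [1, 1, 0]

# Hand-wired connections: the single output node fires only when the
# first two inputs are both active (a crude "that's a cat!" detector).
weights = [[0.3, 0.3, -1.0]]

print(step(inputs, weights))       # [1]  -- the detector fires
print(step([1, 0, 0], weights))    # [0]  -- incomplete input, stays quiet
```

Note how the nodes themselves do almost nothing; the recognition behavior lives entirely in the choice of weights, which is the point the chapter is making.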

In classifier systems, Farmer said, the node-connection structure is less obvious, but it does exist. The nodes are the set of possible internal messages, such as 001001110111110, and the connections are the classifier rules: each rule looks for one message on the system's internal bulletin board and responds to the message it finds by posting another. By activating certain input nodes, that is, by posting the relevant messages on the bulletin board, the programmer can set the classifiers to activating more messages, and then still more. The result is a cascade of messages, much like the activation spreading through a neural network. And just as a neural network eventually settles into a self-consistent state, a classifier system eventually settles into a stable state of active messages and classifiers that solves the problem at hand. Or, in Holland's terms, it arrives at an emergent mental model.
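The cascade-then-settle behavior can be sketched in a few lines. This is my own toy illustration, not Holland's full system (real classifiers use "#" wildcards, bidding, and the bucket brigade); here a rule is just a pair: the message it matches and the message it posts.

```python
# Toy bulletin-board cascade: rules are (condition, message-to-post) pairs.

def run_cascade(board, rules, max_rounds=10):
    """Repeatedly apply rules until no new messages appear on the board."""
    board = set(board)
    for _ in range(max_rounds):
        posted = {post for cond, post in rules if cond in board}
        if posted <= board:     # settled into a stable set of messages
            break
        board |= posted
    return board

rules = [("0011", "0110"),  # if "0011" is on the board, post "0110"
         ("0110", "1100"),
         ("1100", "1111")]

# Posting one input message triggers a cascade of three more.
print(sorted(run_cascade({"0011"}, rules)))
# ['0011', '0110', '1100', '1111']
```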

The same network structure exists in the model of autocatalysis and the origin of life that Farmer built with Kauffman and Packard. In their model, the nodes are all the possible species of polymer, such as abbcaad, and the connections are the chemical reactions among the simulated polymers: polymer A catalyzes polymer B, and so on. Activating certain input nodes corresponds, in this simulated environment, to a steady stream of tiny "food" polymers flowing into the system, which triggers a cascade of reactions. And those reactions eventually settle down into a self-sustaining pattern of polymers and catalyzed reactions: the "autocatalytic set" that they hypothesized to be a kind of proto-organism arising from the primordial soup.
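A stripped-down sketch of that settling-down process (with made-up reactions, not the actual Farmer-Kauffman-Packard chemistry): starting from the food set, keep applying reactions until the set of species stops growing.

```python
# Toy autocatalytic closure: a reaction is (reactant_a, reactant_b, product).
# We compute the self-sustaining set of polymers reachable from the food set.

def closure(food, reactions):
    """Return every polymer species reachable from the food set."""
    species = set(food)
    changed = True
    while changed:
        changed = False
        for a, b, product in reactions:
            if a in species and b in species and product not in species:
                species.add(product)    # a new reaction product appears
                changed = True
    return species

food = {"a", "b"}                       # steady supply of "food" polymers
reactions = [("a", "b", "ab"),
             ("ab", "a", "aba"),
             ("aba", "b", "abab")]

print(sorted(closure(food, reactions)))
# ['a', 'ab', 'aba', 'abab', 'b']
```

Once the loop stops adding species, the set is stable: every polymer in it is produced by reactions among members of the set, which is the flavor of self-sustenance the chapter describes.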

The same holds true for Kauffman's genetic network model and many others, Farmer says. All of these models share the underlying node-connection structure. Indeed, a few years earlier, when he first recognized this commonality, he had taken pleasure in writing it all up in a paper entitled "A Rosetta Stone for Connectionism." The existence of a common framework, Farmer argued in the paper, is reassuring: the blind men groping at the elephant have at least got their hands on the same elephant. And that's not all. The common framework removes the terminology barriers between people working on these different models, making it easier than ever for them to talk to one another. "What I think is important in that paper is that I worked out an actual translation mechanism between the models. I can take the immune system model and say: 'If this were a neural network, here is how it would look.'"

But perhaps the most important value of a general framework, Farmer says, is that it helps you distill the essence of the various models, so that you can turn your attention to the phenomena that emerge from them. And in this case, it is clear that the power really does lie in the connections, which is why so many people get so excited about connectionism. You can start with very, very simple nodes: linear "polymers," "messages" that are nothing more than binary strings, "neurons" that are basically just switches turning on and off. And yet, merely by interacting, they can produce astonishingly complex behavior.

Take learning and evolution. Since the nodes are so simple, the overall behavior of the network is determined almost entirely by the connections among them. Or, in Langton's phrase, the network's generalized genotype is encoded in its connections. So if you want to improve the system's generalized phenotype, you need only change those connections. And in fact, Farmer said, you can change them in two ways. The first is to leave the connections where they are but adjust their "strengths." This is what Holland calls exploitation: improving what you already have. In Holland's classifier system, the change is made by the bucket brigade algorithm, which rewards classifier rules that lead to good outcomes. In neural networks, it is made by various learning algorithms, which feed the network a series of known inputs and then strengthen or weaken the connections until the network responds appropriately.
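The first kind of change, keeping the wiring fixed and adjusting strengths, can be sketched with a bare perceptron-style update rule. This is a generic stand-in for the learning algorithms Farmer mentions, not any specific one from the book.

```python
# Strength adjustment: same wiring throughout, only the weights change.

def train(examples, lr=0.2, epochs=20):
    """examples: list of (inputs, target). Returns the learned weights."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.5 else 0
            err = target - out
            # Strengthen or weaken each connection toward the right answer.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.5 else 0

# Teach the network logical AND: fire only when both inputs are on.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train(examples)
print([predict(w, x) for x, _ in examples])  # [0, 0, 0, 1]
```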

The second, more radical way to adjust the connections is to change the wiring diagram of the network itself: tear out some old connections and put in new ones. This is the equivalent of what Holland calls exploration: taking big risks for the chance of big gains. In Holland's classifier system, for example, this is exactly what happens when the genetic algorithm mates rules by sexual reproduction and crossover, producing offspring rules that never existed before; the new rules often bring in information that was never there before. The same thing happens in the autocatalytic set model, where new polymers occasionally form spontaneously, just as they do in the real world; the new chemical connections they create can open the door for the autocatalytic set to explore whole new regions of polymer space. It does not happen in standard neural networks, because their connections were originally intended as analogs of fixed synapses. But in recent experiments, many neural network enthusiasts have allowed their networks to rewire themselves through learning, reasoning that any fixed wiring diagram is arbitrary and ought to be allowed to change.
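The rewiring kind of change can be sketched with the crossover operator at the heart of Holland's genetic algorithm, here reduced to its simplest single-point form on plain bit strings (real classifier rules also use "#" wildcards and fitness-based selection).

```python
# Single-point crossover: mating two rule strings yields offspring that
# never existed before, mixing a prefix of one parent with a suffix of
# the other -- rewiring rather than mere strength adjustment.

import random

def crossover(parent_a, parent_b, rng=random):
    """Cut both equal-length strings at one random point and swap the tails."""
    point = rng.randrange(1, len(parent_a))
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

rng = random.Random(0)
a, b = crossover("00000000", "11111111", rng)
print(a, b)  # two brand-new rules, each part one parent and part the other
```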

In a nutshell, Farmer says, the connectionist idea shows that learning and evolution can emerge even when the nodes, the individual agents, are mindless and inert. More broadly, the idea points very precisely toward a theory: what matters is the connections, not the nodes, just as Langton and the artificial life researchers hold that the essence of life lies in the organization, not in the molecules. And it gives us a deeper understanding of how life and mind could arise from nothing in the universe.
