
Chapter 24 The Infinite Space of Possibilities

Complexity, by M. Mitchell Waldrop
Holland loves games, and he will play virtually any game. During his nearly thirty years at Ann Arbor, he has played poker every month. One of his earliest memories is of watching the grown-ups play cards at his grandfather's house, and of wishing he were old enough to sit at the table and play with them. He learned chess from his mother when he was in the first grade. His mother was also a fine bridge player. The whole family was keen on sailing, and Holland and his mother often raced together. Holland's father was a first-rate gymnast and an active outdoorsman, and Holland himself practiced gymnastics for several years in junior high school. The family was forever moving from game to game: bridge, golf, croquet, Go, chess, checkers; whatever could be played, they played.

But somehow, for him, games were always more than just fun. He began to notice that certain games held a special fascination that went beyond winning or losing. When he was a freshman in high school, for example, around 1942 or 1943, when his family lived in Van Wert, Ohio, he and his friend Putt invented new games in Putt's basement. Their crowning invention was a war game, inspired by the newspaper headlines, that took up most of the basement. The game had tanks and cannons, launch meters and range meters. They even invented a device that masked parts of the game map to simulate a smoke screen. "The game got pretty complicated," Holland said. "I remember we used the mimeograph in my dad's office to make maps for the war game." (His father ran a series of soybean-processing plants that were prospering in the area.)

Holland said: "We didn't describe chess like you did, but we actually played chess that way because all three of us were interested in playing chess. Chess is a game with very few rules of the game, but It's unbelievable that no two games can ever be the same in chess. The possibilities of moves are literally endless, so we try to invent games of the same nature." He laughs and says he's been inventing games in one way or another ever since. "I like to say when things change: 'Hey, is that really what we assumed?' because if it turns out that my hypothesis is correct, if the underlying laws of the evolution of the subject of things are indeed under some control, and It's not up to me, so I'm going to be surprised. But I'm not going to be happy if the result doesn't surprise me, because I know I got it because I set it up from the start everything."

Now, of course, we call this sort of thing "emergence." But Holland's fascination with emergence led him to devote his life to science and mathematics long before he ever heard the term. He could never get enough of science and mathematics. Throughout his school years, he said, "I remember going to the library and reading every book on science I could find. By the eighth grade I was determined to be a physicist." What set him apart was that the fascination of science, for him, was not that it reduces the universe to a few simple laws, but just the opposite: that a few simple laws can produce the endlessly changing behavior of the whole world. "That really delights me. In one sense, science and mathematics are the ultimate in simplification. But look at the universe from the other side, look at what the laws allow, and the possibilities for surprise are almost endless. That's why the universe can be so comprehensible at one extreme and so inexhaustible at the other."

Holland entered the Massachusetts Institute of Technology in the fall of 1946. It didn't take him long to discover that computers had that same quality of surprise. "I don't really know where that quality of computers comes from," he said. "But I was fascinated early on by 'thinking programs,' the idea that you only had to put a small amount into the machine and it could do things like integration, all that sort of thing. It seemed to me you could put so little in and get an infinitely rich result." Unfortunately, at first Holland could pick up computing knowledge only in scraps of second-hand information from his electrical engineering classes. Electronic computers were still a novelty then, and most knowledge about them was still classified. Naturally there were no computer courses yet, not even at MIT. But one day, browsing in the library as usual, Holland came upon a series of loose-leaf lecture notes bound in a plain thesis cover. Leafing through them, he found that they were a detailed record of a 1946 summer course held at the Moore School of Electrical Engineering at the University of Pennsylvania, where ENIAC, the first electronic digital computer in the United States, had been built during the war. "Those notes are famous. It was the first time I had seen real, detailed information about digital computers, from the construction of the machines to the design of the software." The lectures used this discussion of information and information processing to expound a new mathematical skill: programming. Holland immediately bought a copy of the lectures and read it, page by page, many times over. In fact, he has kept that copy to this day.

In the fall of 1949, when Holland began his senior year at MIT and was casting about for a bachelor's thesis topic, he discovered Project Whirlwind: MIT was building a "real-time" computer fast enough to track air traffic as it happened. The Navy was funding Whirlwind at roughly a million dollars a year, a dizzying sum at the time. MIT had seventy engineers and technicians on the project; it was certainly the largest computer project of its day, and one of the most inventive. Whirlwind would be the first computer with magnetic-core memory and an interactive display screen, and it would pioneer computer networking and multiprogramming (running several programs at once). As the first real-time computer, it would pave the way for the use of computers in air traffic control, industrial process control, airline reservations, and banking.

But when Holland first heard about it, Whirlwind was still experimental. "I knew MIT was developing Whirlwind. It wasn't finished, it was still being built, but it was usable." For some reason, he wanted in. He started knocking on doors and found Zdenek Kopal, a Czech astronomer in the electrical engineering department who had taught him numerical analysis. "I convinced him to chair my thesis committee, I got the physics department to agree to let someone from electrical engineering chair my thesis committee, and then I convinced the people on the Whirlwind project to let me see their operating manual. At the time, the operating manual was classified!"

"That was probably the happiest year I had at MIT," he said.Coppel suggested that the title of his dissertation be to program a whirlwind to solve Laplace's equations.Laplace's equations describe a variety of physical phenomena, from the distribution of an electric field around any charged object, to the vibration of a taut drumhead.Holland immediately embarked on this research. This is not the easiest dissertation to do at MIT.At that time, no one had heard of languages ​​like Pascal, C, or FORTRAN.Indeed, computer programming languages ​​that translate commands to computers into numerical codes were not invented until the mid-1950s.At that time, there was not even a general decimal language, and it was still hexadecimal.He took longer than he expected to work on his dissertation, and in the end he had to apply to MIT for twice as long as normally allowed to complete his bachelor's thesis.

But he relished the work. "I liked the logical nature of it," he recalls. "Programming has the same character as mathematics: you take one step, and from it you can take the next." More important, programming Whirlwind made him realize that computers do more than fast arithmetic. In a string of cryptic hexadecimal numbers he could model anything from a vibrating drumhead to a swirling electric field. In those cycling digits he could create imaginary universes. All that was required was to encode the right rules, and the rest would unfold of its own accord.

From start to finish, Holland's thesis was a paper design only; the program he wrote never actually ran on Whirlwind. But in another respect the thesis paid off handsomely: he became one of the few people anywhere who knew something about programming. As a result, he was hired by IBM right after graduation in 1950, and the timing could not have been better. At IBM's huge plant in Poughkeepsie, New York, the company was designing its first commercial computer: the Defense Calculator, later renamed the IBM 701. Designing and building the machine was, at the time, a big bet on an uncertain future. Many conservative executives felt that developing such a computer was a waste of money better invested in improved punched-card machines. Indeed, throughout 1950 the product planners insisted that the national market would never absorb more than eighteen such computers. The main reason IBM pressed ahead with the Defense Calculator was that it was the pet project of a rising star, Thomas Watson, Jr., son and heir apparent of IBM's aging president, Thomas J. Watson, Sr.

But Holland was only twenty-one at the time, and he knew little of all this. All he knew was that he had landed in a kind of holy place. "There I was, at that age, in a position like that. I was one of the few people who knew what was going on with the IBM 701." The project's leaders put Holland in the seven-man logical planning group, responsible for designing the new computer's instruction set and overall organization. That was another stroke of luck for Holland, since it was an ideal place to practice his programming skills. "After the initial design phase we got the first prototype of the machine, which had to be tested in all sorts of ways. The engineers would work on it around the clock, taking the machine apart during the day and putting it back together as best they could by night. Then a handful of us would come in at eleven at night and run our programs until morning to see if they worked."

And to a certain extent, the programs did work. By today's standards, of course, the 701 looks like something from the Stone Age. It had a huge control panel crammed with dials and switches, but nothing resembling a display screen. It took its input and output through standard IBM punched-card machines, and it boasted a memory of four thousand bytes (a personal computer on the market today typically has a thousand times more). It could multiply two numbers in about thirty microseconds (any hand-held calculator today has more power). "The machine also had plenty of flaws," says Holland. "At best it made an error only every thirty minutes or so on average, so we did every calculation twice." Worse, the 701 stored its data as spots of charge on the face of special cathode-ray tubes, so Holland and his colleagues had to adjust their algorithms to avoid writing data to the same spot in memory too often; doing so built up charge at that spot on the tube's surface and corrupted the surrounding data. "It's amazing that we could make the computer work at all," he laughs. But to him the flaws hardly mattered. "To us the 701 was a giant. We thought it was wonderful to get time on a fast machine to try out our programs."

And they had no shortage of programs to try. Those first, primitive computers arrived amid a ferment of new ideas, such as information theory, cybernetics, and automata theory, none of which had existed a decade earlier. Who knew what the limits were? Almost anything you tried was breaking new ground. What's more, for pioneers of a more philosophical bent like Holland, these big, unwieldy machines, chock-full of wires and vacuum tubes, opened up entirely new ways of thinking. Computers might not be the "giant brains" that the newspapers' Sunday supplements luridly described; in the details of their structure and operation they had almost nothing in common with the human brain. But in a deeper and more important sense, computers were very much like brains. The tempting speculation was that both are information-processing devices. For if that is true, then thinking itself can be understood as a form of information processing.
Of course, nobody called this sort of thing "artificial intelligence" or "cognitive science" back then. But even so, computer programming, as a brand-new enterprise, was forcing people to think more carefully than ever before about what solving a problem really means. The computer was the ultimate alien: you had to tell it everything. What are the data? How are they transformed? How do you get from this step to that one? And those questions, in turn, led quickly to questions that had vexed philosophers for centuries. What is knowledge? How is it acquired from sensory impressions? How is it represented in the mind? How is it refined by learning from experience? How is it used in reasoning and judgment? And how are decisions translated into action? The answers were far from clear then (in fact, they are still unclear now), but the questions were being asked with unprecedented clarity and precision.

IBM's development group at Poughkeepsie, one of the greatest concentrations of computer talent in the United States, was suddenly at the forefront. Holland likes to recall that a group of regulars would get together one night every two weeks or so for poker or Go and discussion. One of the participants was a summer intern named John McCarthy, then a young graduate student, who went on to become one of the founders of artificial intelligence. (It was McCarthy, in fact, who coined the term "artificial intelligence" in promoting the 1956 summer workshop on the subject at Dartmouth College.) Another was Arthur Samuel, a soft-spoken electrical engineer of about forty. He had been recruited by IBM from the University of Illinois to help the company build reliable vacuum tubes, and he was Holland's most frequent companion in the all-night programming marathons. (He also had a daughter at nearby Vassar, whom Holland dated a few times.)

Samuel, it was clear, had lost interest in vacuum tubes. For five years he had been trying to write a program that could play checkers, and not just play it, but get better at it with experience. In retrospect, Samuel's checkers player is considered a milestone in artificial-intelligence research; by 1967, after he had finished refining the program, it could play at the level of a master. But even back in the 701 days it was already remarkably good. Holland remembers being deeply impressed, especially by the program's ability to adjust its tactics to its opponent's moves. Roughly speaking, it did this by building a simple model of the "opponent" and using that model to predict the best move. Although Holland couldn't have articulated it at the time, he sensed that this feature of the checkers player captured something essential about learning and adaptation.

But Holland had other things to think about, and he set those thoughts aside. He was busy with a research project of his own: a simulation of the inner workings of the brain. The project began, he remembers, in the spring of 1952, when he heard a lecture by the MIT psychologist J. C. R. Licklider. Licklider was visiting the Poughkeepsie laboratory and had agreed to talk about the hottest topic in the field at the time: the new theory of learning and memory proposed by Donald O. Hebb, a psychologist at McGill University in Montreal.
The problem, Licklider explained, is that under the microscope much of the brain looks like a mess, each nerve cell sending out thousands of fibers that connect, apparently at random, with thousands of other nerve cells. And yet these densely interconnected networks are obviously not random. A healthy brain produces perception, thought, and action in a coherent way. What's more, the brain is clearly not static: it improves and adjusts its behavior in the light of experience. It learns. The question is, how?

Three years earlier, in 1949, Hebb had offered an answer in his book The Organization of Behavior. His basic idea was to posit that the brain is constantly making subtle changes in its "synapses," the junctions where a nerve impulse jumps from one cell to the next. This was a bold assumption, since Hebb had no direct evidence for it at the time. But he argued that these synaptic changes are the basis of all learning and memory. A sensory impulse entering through the eye, for example, leaves its trace on the neural network by strengthening all the synapses along its path. Much the same happens with signals entering from the ear, or with activity passing between regions within the brain. The result is that networks which start out random quickly organize themselves. Experience accumulates through a kind of positive feedback: strong, frequently used synapses grow stronger, while weak, rarely used synapses atrophy. Once the frequently used synapses become strong enough, a memory is locked in. These memories, in turn, are distributed across the brain, each one corresponding to a complex pattern of synapses involving tens of thousands of neurons. (Hebb was one of the first to describe this kind of distributed memory, the kind of account that later came to be called "connectionist.")

But Hebb's thinking went further than that. Licklider's talk also explained Hebb's second hypothesis: that selective synaptic strengthening leads the brain to organize itself into "cell assemblies," subsets of a few thousand neurons in which circulating nerve impulses reinforce themselves and keep the loop going. Hebb saw these cell assemblies as the brain's basic building blocks of information. Each assembly would correspond to a tone, a flash of light, or a fragment of a thought. Yet nothing about a cell assembly is physiologically distinct. Indeed, the assemblies overlap, with any one neuron belonging to several of them. And because of that, activating one assembly inevitably tends to activate others, so that these basic building blocks can rapidly organize themselves into larger concepts and more complex behaviors. The cell assembly, in short, is the basic quantum of thought.

Holland sat in the audience transfixed. This was not the dry stimulus-response psychology preached by the Harvard behaviorist B. F. Skinner. Hebb was talking about what goes on inside the mind. The richness and endless surprise of Hebb's theory struck a deep chord in Holland. The theory felt right, and he couldn't wait to do something with it. Hebb's theory was like a window onto the nature of thought. He wanted to look through that window, to watch cell assemblies organize and grow out of random chaos, to see how they interacted with one another, and to see how thinking itself might emerge. He wanted to watch all of this happen naturally, without outside direction.
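Hebb stated his rule only in words, but it has a standard modern formalization (the notation below is the textbook version, not Hebb's own): the synapse between two neurons strengthens in proportion to their coincident activity, and decays slowly when unused.

```latex
% x_i, x_j : activity levels of neurons i and j
% w_{ij}   : strength of the synapse between them
% \eta     : small learning rate;  \lambda : slow decay (atrophy)
w_{ij} \;\leftarrow\; (1 - \lambda)\, w_{ij} \;+\; \eta\, x_i \, x_j
```

The product term is exactly the positive feedback Licklider described: cells that fire together strengthen their connection, while the decay term makes idle synapses wither.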
As soon as Licklider finished his lecture on Hebb's theory, Holland went to Nathaniel Rochester, head of the 701 design group, and said, in effect: we have a prototype machine right here, so let's write a simulation of a neural network on it. And that is exactly what they did. "He wrote a program and I wrote a program. The two were quite different in form. We called them 'conceptors,' and that name wasn't arrogance!" In fact, even four decades later, now that neural-network simulation has long since become a standard tool of artificial intelligence, the achievement of IBM's conceptors remains remarkable, and the basic ideas are still recognizable today. In their programs, Holland and Rochester treated their simulated neurons as "nodes," little computers that keep track of something about their own internal state. They modeled synapses as abstract connections between nodes, each connection carrying a "weight" corresponding to the strength of the synapse. And they modeled Hebb's learning rule by having the network adjust those weights as it learned. But Holland, Rochester, and their colleagues also built in far more detailed neurophysiology than most neural-network simulations use today, including such factors as how quickly a simulated neuron could respond and how fatigued it became if it fired too often.

Not surprisingly, the work was a struggle, not only because these programs were the first neural-network simulations ever attempted, but also because they were among the first uses of a computer for simulation of any kind (as opposed to crunching numbers and analyzing data). Holland gives IBM high marks for its cooperation and patience. He and his colleagues spent countless hours running their networks on the machine, and even made an IBM-funded trip to Montreal to consult Hebb himself. And in the end the simulations worked. "There were a lot of emergent phenomena," Holland says, still excited by them today. "You could start with a uniform matrix of neurons and then watch cell assemblies form." In 1956, after most of this work was finished, Holland, Rochester, and their colleagues finally published their results. It was Holland's first published paper.
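Waldrop gives no code for the conceptors, but the ingredients he lists (nodes with internal state, weighted connections, a Hebbian rule, and neuron fatigue) can be sketched in a few lines of modern Python. Every name and constant below is an illustrative assumption, not a reconstruction of the 1950s programs:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                     # number of simulated neurons ("nodes")
ETA, DECAY = 0.05, 0.01     # Hebbian learning rate, synaptic decay
THRESHOLD, FATIGUE = 0.5, 0.3

# Random initial "synapse" weights; no self-connections.
w = rng.uniform(0.0, 0.1, size=(N, N))
np.fill_diagonal(w, 0.0)

firing = (rng.random(N) < 0.1).astype(float)   # sparse initial activity
fatigue = np.zeros(N)                          # per-node tiredness

for step in range(500):
    # A node fires when its weighted input (plus a little background
    # noise) exceeds its threshold plus its accumulated fatigue.
    drive = w @ firing + 0.2 * rng.random(N)
    firing = (drive > THRESHOLD + fatigue).astype(float)

    # Hebbian update: co-firing nodes strengthen their connection;
    # all synapses decay slightly, so unused ones atrophy.
    w = (1.0 - DECAY) * w + ETA * np.outer(firing, firing)
    np.fill_diagonal(w, 0.0)

    # Fatigue: nodes that just fired become harder to fire; idle
    # nodes gradually recover.
    fatigue = np.where(firing > 0, fatigue + FATIGUE, 0.9 * fatigue)

# Clusters of mutually strong weights in w are crude "cell assemblies".
```

Watching blocks of strong weights emerge from an initially uniform matrix is a toy version of the emergence Holland describes above.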
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book