
Chapter 16: The Cognitive Psychologists (2)

More than ten years ago I put some questions about thinking to the famous memory researcher Gordon Bower and was taken aback by his testy answer: "I don't work on 'thinking' at all. I don't even know what 'thinking' is." How could the chairman of Stanford's psychology department do no work on thinking, or not even know what it is? Then Bower relented: "I think you may mean what we call the study of reasoning." Thinking was traditionally a central topic in psychology, but the explosion of knowledge in cognitive psychology since the 1970s has made the term seem too loose, since it lumps together processes as different as momentary short-term memory and prolonged problem solving. Psychologists prefer to speak of specific thought processes: "chunking," "retrieval," "categorization," "formal operations," and a dozen others. "Thinking" has gradually taken on a narrower and more precise meaning than before: the manipulation of knowledge to achieve a goal. To avoid any misunderstanding, however, many psychologists, Bower among them, prefer the word "reasoning."

Although human beings have always regarded the ability to reason as the essence of their nature, research on reasoning was long a backwater. From the 1930s through the 1950s, apart from the problem-solving experiments of Karl Duncker and other Gestaltists, and the studies by Piaget and his followers of how children's thinking changes at different stages of development, few people studied reasoning at all. With the coming of the cognitive revolution, however, it became an active field. Information-processing models let psychologists frame hypotheses, often in flowchart form, about what happens during different kinds of reasoning, and the computer provided a machine on which some of those hypotheses could be tested.

Information-processing theory and the computer work hand in hand. A hypothesis about any form of reasoning can be stated in information-processing terms, as a specific series of processing steps, and the computer can then be programmed to carry out an analogous sequence of steps. If the hypothesis is correct, the machine should reach the same results as the reasoning human mind. Conversely, if a reasoning program written for a computer reaches the same conclusions that people reach about the same problems, one can suppose that the program operates in the same way the brain reasons, or at least in a similar way.

How does a computer carry out such reasoning? Its program consists of a routine, or set of instructions, plus a series of subroutines, each of which is used or not depending on the results of earlier steps and on the information in the program's memory. A common procedure is a series of if-then steps: "If the input meets condition 1, take action 1; if not, take action 2. Compare the result with condition 2; if it is greater, less, or whatever, take action 3; otherwise take action 4. Store the resulting conditions 2, 3, and so on; then, depending on further results, use these stored items in one way or another."
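To make that description concrete, here is a minimal sketch, in Python, of such an if-then routine. The condition names, thresholds, and actions are hypothetical placeholders, not taken from any program discussed in the text; the point is only the shape of the control flow and the way stored results feed later steps.

```python
# A minimal sketch of the kind of if-then routine described above. The
# condition names, thresholds, and actions are hypothetical placeholders.

memory = {}   # stands in for the program's working memory

def take_action(name, value):
    memory[name] = value                      # store the result for later steps
    print(f"{name}: stored {value}")
    return value

def routine(input_value, condition_1=10, condition_2=50):
    # "If the input meets condition 1, take action 1; if not, take action 2."
    if input_value >= condition_1:
        result = take_action("result_1", input_value)
    else:
        result = take_action("result_2", 0)

    # "Compare the result with condition 2; if it is greater, less, or
    #  whatever, take action 3; otherwise take action 4."
    if result > condition_2:
        take_action("result_3", result - condition_2)
    else:
        take_action("result_4", condition_2 - result)

    # "Then, depending on further results, use these stored items one way or another."
    return memory

routine(72)   # stores result_1 = 72, then result_3 = 22
```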

But when computers execute such programs, whether in mathematical calculation or in problem solving, are they really reasoning? Or are they mindlessly carrying out predetermined courses of action, like automatons? The question is best left to philosophers. If a computer can prove a theorem, pilot an aircraft, or determine whether a poem was written by Shakespeare, as a knowledgeable human being can, who is to say that the machine is a mindless automaton, or that the human being is not one? In 1950, when only a few very primitive computers existed and the theory of computation was being vigorously debated by mathematicians, information theorists, and others, Alan Turing, a gifted British mathematician, proposed a test, more philosophical than scientific, of whether a computer can think. In the test, a computer programmed to solve a certain kind of problem is placed in one room, a person skilled at such problems is in another, and in a third room sits a judge who can converse with both by teletype. If the judge cannot tell from the exchanges which is the computer and which the human being, the computer passes the test: it can think. The validity of the Turing test is debatable, but at the least it implies that if a computer seems to be thinking, it is doing something very close to it.

(To date, no computer has passed the Turing test. In November 1991 a Turing test was staged at the Computer Museum in Boston. Ten judges, chosen off the street to represent the average person, typed questions and comments at their keyboards and read the replies on their screens, which came either from one of several computer programs or from an unseen human being. The program that won was a commercial one called PC Therapist III. Here is an excerpt of its exchanges with one judge, as reported in The New York Times of November 9, 1991:

PC Therapist III: Perhaps you don't get enough caring from the other person in the relationship.

Judge: What would be the most important factors in such relationships, in order to prevent conflicts or problems from arising?

PC Therapist III: I don't think you're thinking about this the way I am.

Judge: How is that an answer to the question? Next question: do you think romance between men and women is unrealistic, doomed from the start?

PC Therapist III: I'd rather go through life with my eyes closed.

Judge: Have you ever had problems in a romance?

PC Therapist III: I always try to be likable.

Any judge who took such incongruous answers to be human must have had a strange notion of the human capacity for conversation.)

By the 1960s most cognitive psychologists, whether or not they believed that computers really think, regarded computation theory as a conceptual breakthrough; it enabled them for the first time to describe any aspect of cognition, and reasoning in particular, in detailed and precise information-processing terms. Moreover, having framed a hypothesis about the steps of any such process, they could translate it from words into computer language and test it on a machine. A successful run would suggest that the mind might indeed reason in some fashion like the program. It is not surprising, then, that Herbert Simon said the computer was to psychology what the microscope was to biology, or that other enthusiasts called the human mind and the computer two species of the genus "information-processing system."

Problem solving is the most important use of human reasoning. Most animals find food, escape predators, and build nests by means of innate or partly innate behavior patterns; humans solve, or try to solve, most of their problems through learned or creative reasoning. In the mid-1950s, when Simon and Newell set out to create the first thought-simulating program, the Logic Theorist, they asked themselves how human beings solve problems. The Logic Theorist took them a year; this question took them fifteen more. Their answer, published in 1972, has been the foundation of work in the field ever since.

Their main working method, according to Simon's autobiography, was two-man discussion, which drew on inductive and deductive reasoning, analogy and metaphor, and flights of imagination, in short, on every kind of reasoning, rational or irrational:

From 1955 through the early 1960s, when we met almost daily... [we] worked mostly by conversing together. Al probably did more of the talking than I did; that is certainly true now, and I think it has always been so. But we had a rule in our conversations: one could talk nonsense, without reasons, or vaguely, but no criticism was allowed unless you were prepared to be more precise and better reasoned yourself. Some of what we said made sense, some had a grain of truth, and some was plain nonsense, and we talked that way, and listened, and went over it again and again.

They also carried out a series of laboratory studies. Working alone or together, they would record and analyze the steps that they or others took in solving a problem and then write those steps up as a program. A favorite puzzle, which they used for years, is a child's toy called the Tower of Hanoi. In its simplest form it consists of three discs of different sizes (with holes in their centers) and three vertical rods mounted on a flat base; the discs are stacked on one of the rods, the largest on the bottom, the medium-sized one in the middle, and the smallest on top. The problem is to move the discs one at a time, in as few moves as possible and never placing a disc on top of a smaller one, until they are stacked in the same order on another rod. The perfect solution takes only seven moves, but a wrong move leads to a dead end and forces you to back up, taking many more. Larger versions call for elaborate strategies and far more moves: a five-disc game requires thirty-one, a seven-disc game a hundred and twenty-seven, and so on. Simon once said, quite seriously, "The Tower of Hanoi has been as important to cognitive science as the fruit fly is to modern genetics: a standard research setting of inestimable value." (On occasion, though, he awarded that honor to chess.)

Another of the team's experimental tools was cryptarithmetic, in which the digits of a simple addition problem are replaced by letters; the goal is to figure out which digit each letter stands for. Here is one of the easier examples used by Simon and Newell:

  SEND
+ MORE
------
 MONEY

The first step is obvious: M must be 1, because the sum of the leftmost column, S plus M plus any carry, can never reach 20, so the digit carried into the extra column of the answer can only be 1. Simon and Newell had volunteers think aloud while working on the problem, recorded everything they said, and then wove the steps of their thinking into a diagram showing a step-by-step search: the decision points where more than one move was possible, the wrong choices leading to dead ends, the retreats to an earlier decision point to try a different tack, and so on.

Simon and Newell made special use of chess, a complex problem far harder than the Tower of Hanoi or cryptarithmetic. In a typical sixty-move game, each move offers an average of about thirty possibilities; "looking ahead" a mere three moves means seeing 27,000 possibilities. What Simon and Newell wanted to understand was how chess players cope with such numbers. The answer: an experienced player does not consider every move that he or his opponent might make next, only the handful that make sense in light of sound principles such as "protect the king" and "don't give up a piece for one of much lower value." In short, chess players conduct a heuristic search, one guided by broad strategic principles appropriate to chess, rather than an exhaustive but unorganized one.
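Since the Tower of Hanoi figures so prominently here, a brief sketch may help. The recursive solution below is a standard textbook approach, not Newell and Simon's program; it simply reproduces the move counts mentioned above (7 for three discs, 31 for five, 127 for seven, always 2^n - 1).

```python
# A minimal recursive sketch of the Tower of Hanoi, the puzzle Simon called
# the fruit fly of cognitive science.

def hanoi(n, source="A", spare="B", target="C", moves=None):
    """Move n discs from source to target, never putting a larger disc on a smaller one."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, target, spare, moves)   # park n-1 discs on the spare rod
        moves.append((source, target))               # move the largest disc
        hanoi(n - 1, spare, source, target, moves)   # restack the n-1 discs on top of it
    return moves

for discs in (3, 5, 7):
    print(discs, "discs:", len(hanoi(discs)), "moves")   # 7, 31, 127
```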
Newell and Simon's theory of problem solving took them another fifteen years (Newell's name came first in their joint publications simply because of alphabetical order). Their account is that problem solving is the search for a path from a starting state to a goal. To reach the goal, the solver must move through a problem space made up of all the possible states he can reach, and find a path whose steps satisfy the path constraints (the conditions imposed by the rules or the domain).

In such a search the possibilities usually multiply geometrically, since each decision point offers two or more options, and each of those leads to further decision points with options of their own. In the sixty moves of an ordinary chess game, as already noted, each move offers an average of thirty possibilities; the total number of paths in a game is therefore on the order of 30 to the 60th power, a number utterly beyond human comprehension. Accordingly, as Simon and Newell's work showed, problem solvers making their way through a problem space do not examine every possible path.

In their massive book, published in 1972 and plainly titled Human Problem Solving, Newell and Simon set out what they take to be the general features of such searches. Among them:

— Because of the limits of short-term memory, we search a problem space serially, dealing with one thing at a time.

— We do not, however, search serially through every possibility, one by one; we do that only when the possibilities are few. (If you don't know which key on a small ring opens a friend's door, you simply try them one at a time.)

— In most problem-solving situations trial and error is not feasible, so we are forced to search heuristically, and knowledge makes the search efficient. Unscrambling an eight-letter anagram such as SPLOMBER could take some fifty-six hours if you wrote out all 40,320 permutations at one every five seconds, yet most people solve it in seconds or minutes, because they rule out invalid combinations (PB or PM, for instance) and consider only workable ones (SL, PR, and the like).

— One common and important heuristic shortcut is what Newell and Simon call the best-first method: at any fork, or decision point, in the search path, try first the branch that seems likely to bring you closest to the goal. Trying to get nearer the goal with every step is highly effective (although sometimes we have to move away from it in order to get around an obstacle).

— Another, complementary and even more important heuristic is means-end analysis, which Simon has called "the workhorse of GPS" (the General Problem Solver). Means-end analysis mixes working forward with working backward. Unlike chess, where one looks only ahead, in many problems the solver knows he cannot go straight to the goal; he must work back from the goal to a subgoal, perhaps from that subgoal to a still earlier one, and so on.
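As a rough illustration of means-end analysis, here is a minimal sketch under stated assumptions: a tiny set of made-up operators, each with preconditions and effects, loosely based on the room-painting errand discussed next. It is not Newell and Simon's GPS, only the bare pattern of picking an operator that reduces the difference between the current state and the goal, and turning its unmet preconditions into subgoals.

```python
# A minimal sketch of means-end analysis. The domain (repainting a room) and
# the operator names are illustrative assumptions, not Newell and Simon's code.

OPERATORS = {
    "paint room":     {"needs": {"have paint", "have brush"}, "adds": {"room painted"}},
    "buy supplies":   {"needs": {"at hardware store"},        "adds": {"have paint", "have brush"}},
    "drive to store": {"needs": {"have car keys"},            "adds": {"at hardware store"}},
}

def achieve(goal, state, plan):
    if goal in state:
        return True
    # Find an operator whose effects include the goal (reduce the difference).
    for name, op in OPERATORS.items():
        if goal in op["adds"]:
            # Unmet preconditions become subgoals and are pursued first.
            for pre in op["needs"]:
                if not achieve(pre, state, plan):
                    return False
            state |= op["adds"]
            plan.append(name)
            return True
    return False   # no operator helps and the goal is not already true

state = {"have car keys"}      # what is true at the start
plan = []
achieve("room painted", state, plan)
print(plan)   # ['drive to store', 'buy supplies', 'paint room']
```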
In a recent review of problem-solving theory, Keith Holyoak offers a simple example of means-end analysis. Your goal is to have your living room repainted. The nearest subgoal is being in a position to do the painting, but that requires having paint and brushes, so you must first reach the subgoal of buying those supplies; to do that, you must first reach the subgoal of getting to the hardware store; and so on, until you have planned the whole route from your present state to a freshly painted living room.

Great as it is, an achievement like Newell and Simon's theory of problem solving involves only deductive reasoning. Moreover, it accounts only for "knowledge-poor" problem solving, the kind called for by mazes, games, and abstract puzzles. How well it describes problem solving in knowledge-rich domains such as science, business, or law is far less clear.

Over the past two decades, therefore, a range of researchers has broadened the investigation of reasoning. Some have studied the mental tendencies on which deductive and inductive reasoning rest; some have studied those two forms of reasoning themselves; others have examined how we reason in everyday life; still others have compared the reasoning of experts and novices in knowledge-rich domains. These investigations have borne fruit, throwing light on the hitherto invisible workings of human reasoning. Some representative examples:

Deductive reasoning: The traditional view, going back to Aristotle, holds that there are two forms of reasoning, deduction and induction. Deduction draws further beliefs from beliefs already given: if the premises are true, the conclusion must also be true, since it is necessarily contained in the premises. From the premises of Aristotle's classic syllogism,

All men are mortal.
Socrates is a man.

we must conclude that Socrates is mortal. Such reasoning is rigorous, compelling, and easy to follow; it is the kind used in the proofs of logic and geometry. But many syllogisms of only two premises and three terms are far less transparent; some are so hard to follow that most people cannot draw a valid conclusion from them. Philip Johnson-Laird, who has studied the psychology of deduction, cites an example he has used in his laboratory. Imagine a house in which there are some archaeologists, biologists, and chess players, and take the following two statements to be true:

None of the archaeologists is a biologist.
All the biologists are chess players.

What follows from these premises? Johnson-Laird found that very few people can give the right answer. (The only valid deduction is that some of the chess players are not archaeologists.) Why? He believes that the ease of drawing a valid conclusion from the Socrates syllogism and the difficulty of drawing one from the archaeologists syllogism lie in the way these inferences are represented in the mind, in the "mental models" we construct of them.
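Readers who want to verify that conclusion can brute-force it. The sketch below is not Johnson-Laird's procedure; it simply enumerates every way of assigning the three properties to a few imaginary people, keeps only the assignments consistent with the two premises (and with the setup that at least one person of each kind is present), and checks which candidate conclusions hold in all of them.

```python
# A small brute-force check of the archaeologists syllogism above.

from itertools import product

PROPS = ("archaeologist", "biologist", "chess player")

def models(n_people=4):
    for combo in product(product([False, True], repeat=3), repeat=n_people):
        people = [dict(zip(PROPS, flags)) for flags in combo]
        if not all(any(p[prop] for p in people) for prop in PROPS):
            continue                                   # someone of each kind is present
        if any(p["archaeologist"] and p["biologist"] for p in people):
            continue                                   # "None of the archaeologists is a biologist"
        if any(p["biologist"] and not p["chess player"] for p in people):
            continue                                   # "All the biologists are chess players"
        yield people

candidates = {
    "All chess players are archaeologists":
        lambda ps: all(not p["chess player"] or p["archaeologist"] for p in ps),
    "No chess player is an archaeologist":
        lambda ps: not any(p["chess player"] and p["archaeologist"] for p in ps),
    "Some chess players are not archaeologists":
        lambda ps: any(p["chess player"] and not p["archaeologist"] for p in ps),
}

for conclusion, holds in candidates.items():
    valid = all(holds(ps) for ps in models())
    print(f"{conclusion}: {'follows' if valid else 'does not follow'}")
```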
People with formal training in logic usually picture such a problem as a geometric figure in which the premises are represented by circles, one inside another, overlapping, or entirely separate. Johnson-Laird's theory, based on his research and verified by computer simulation, is that people without such training use a simpler kind of model. In the Socrates syllogism they unconsciously imagine a group of people, all of them mortal, with Socrates attached to the group, and then look for any exception (someone outside the group who might be Socrates). Finding none, they correctly conclude that Socrates is mortal. In the archaeologists syllogism, however, they must construct and test first one model, then a second, and finally a third, each harder to build than the last (we omit the details here). Some people settle for the first, failing to see that the second invalidates it; others settle for the second, failing to see that the third and hardest model rules it out and leads to the only valid conclusion.

Mental models are not the only source of faulty deduction. Experiments have shown that even when a syllogism is simple in form and its mental model easy to construct, people can be misled by their own beliefs and knowledge. One research team asked a group of subjects whether the following syllogisms were logically valid:

Everything that has a motor needs oil.
Automobiles need oil.
Therefore, automobiles have motors.

Everything that has a motor needs oil.
Opprobines need oil.
Therefore, opprobines have motors.

More people judged the first inference valid than the second, even though the two are identical in structure except that the meaningless word "opprobine" has replaced "automobile." They were misled by their knowledge of automobiles: knowing the first conclusion to be true, they took the inference to be logically valid. But it is not valid, as they could see in the case of opprobines, about which they knew nothing; needing oil does not necessarily mean having a motor.

Inductive reasoning: Induction, by contrast, is looser and less rigorous. It moves from the particular to the general, from limited cases to generalizations. From "Socrates is mortal," "Aristotle is mortal," and other instances, we conclude, with varying degrees of confidence, that "all men are mortal," even though a single exception would invalidate the conclusion.

Much important human reasoning is of this kind. Categorization and concept formation, which are central to thinking, are products of inductive reasoning, as we saw in examining how children acquire categories and concepts. All of humanity's higher knowledge of the world, from the inevitability of death to the laws of planetary motion and the formation of galaxies, is the product of generalization from multitudes of particular cases.

The inductive reasoning involved in pattern recognition is also a key to problem solving. A simple example: what is the next number in this series?
2  3  5  6  9  10  14  15  __

A ten-year-old can answer after studying it for a while; most adults see the pattern and answer within a minute or so. (The answer is 20: the differences run 1, 2, 1, 3, 1, 4, 1, so the next is 5.) This is the kind of reasoning drawn on by economists, public health officials, telephone-system designers, and the many others whose pattern-finding work is vital to the functioning of modern society.

Disturbingly, however, researchers have found that many people do not draw their inductive conclusions from incoming information in an evenhanded way. We tend to notice and remember whatever supports our existing beliefs and to ignore whatever contradicts them, a phenomenon psychologists call "confirmation bias." Dan Russell and Warren Jones had subjects read material about ESP, some of it supporting its existence and some refuting it, and afterward tested their recall. Believers in ESP remembered the supporting material 100 percent of the time but the refuting material only 39 percent of the time; skeptics remembered both kinds about 90 percent of the time. Many similar studies have found that people with strong ideological or racial prejudices readily take in negative information about whatever they dislike or disbelieve, and forget or distort whatever speaks in its favor.

Probabilistic reasoning: The powers of human thought are the product of evolutionary selection, but we have lived in complex civilized societies for too short a time to have evolved an innate capacity for reasoning rigorously about statistical probabilities, a capacity such societies require. Daniel Kahneman and Amos Tversky, who have done extensive research in this area, asked a group of subjects which they would prefer: a sure $80, or an 85 percent chance of winning $100 with a 15 percent chance of winning nothing. Most chose the sure $80, even though the expected value of the gamble is $85 (0.85 x $100). Kahneman and Tversky concluded that people are generally "risk-averse": they prefer a sure gain even when the gamble is the better bet.

Now reverse the situation. Kahneman and Tversky asked another group whether they would prefer a sure loss of $80, or an 85 percent chance of losing $100 with a 15 percent chance of losing nothing. This time most people preferred the gamble, even though on average it costs more. Kahneman and Tversky concluded that when choosing among gains people avoid risk, and when choosing among losses they seek it; in either case their judgment is prone to error.

A later finding was even more striking. They gave groups of college students one of two versions of a public health problem, mathematically equivalent but differently worded. The first version:

Imagine that the United States is preparing for the outbreak of an unusual Asian disease that is expected to kill 600 people. Two plans to combat the disease have been proposed. Assume that the exact scientific estimates of their consequences are as follows:

If Plan A is adopted, 200 people will be saved.

If Plan B is adopted, there is a one-third probability that all 600 people will be saved and a two-thirds probability that no one will be saved.

Which plan do you favor?

The second version is identical except for the wording of the options:

If Plan C is adopted, 400 people will die.
If Plan D is adopted, there is a one-third probability that no one will die and a two-thirds probability that all 600 people will die.

Responses to the two versions differed dramatically: 72 percent of the first group chose Plan A over Plan B, but 78 percent of the second group chose Plan D over Plan C. Kahneman and Tversky's interpretation: in the first version the outcomes are framed as gains (lives saved), in the second as losses (lives lost), and the same bias that distorted judgments about money at stake distorts judgments about lives at stake.

We judge badly in such situations because the factors involved are counterintuitive; our minds resist grasping the realities of probability. The failing affects society as well as individuals. Voters and their representatives often make costly decisions on the basis of poor probabilistic reasoning. As Richard Nisbett and Lee Ross argue in their book Human Inference, many governmental actions and policies adopted in times of crisis are judged to have been beneficial because of what happened afterward, even though the policies were often useless or harmful. The misjudgment comes from the human tendency to credit an outcome to the action that preceded it, when the outcome is often merely the natural course of events, the drift from the abnormal back toward the normal.

Reasoning by analogy: By the late 1970s, cognitive psychologists had begun to recognize that much of what logicians count as fallacious reasoning is really "natural" or "practical" reasoning: loose, intuitive, and technically invalid, yet often adequate and effective.

One such mode of thought is analogy. Whenever we see that a problem is analogous to another one that we are familiar with and know the answer to, we leap to a conclusion. When assembling a knocked-down piece of furniture or machinery, for instance, many people never look at the instructions; they work by "feel," studying the relations among the parts and seeing their resemblance to other things they have put together before.

Analogical reasoning appears relatively late in children's mental development. The cognitive psychologist Dedre Gentner, who has been studying analogical thinking, asked five-year-olds and adults in what ways a cloud and a sponge are alike. The children answered in terms of shared attributes ("They're both round and fluffy"); the adults answered in terms of shared relations ("They both hold water and give it back to you"). Gentner sees reasoning by analogy as the mapping of high-level relations from one domain onto another.

She has said: "In my opinion, none of these programs matches the complexity of human thought processes. AI programs, unlike people, tend to be single-minded; they cannot be distracted, and they have no emotions. Moreover, they are generally equipped from the outset with all the cognitive material needed to solve the problem at hand." Yet no less an authority than Herbert Simon has asserted categorically that minds and machines are alike. In 1969, in a series of lectures collected as The Sciences of the Artificial, he proposed that computers and human minds are both "symbol systems," physical entities that process, transform, elaborate, and otherwise manipulate symbols of various kinds.
Throughout the 1970s a handful of dedicated psychologists and computer scientists at MIT, Carnegie Mellon, Stanford, and other universities believed fervently that they were on the verge of a colossal breakthrough: the development of programs that would not only explain how thinking works but would be machine replicas of human thought. By the early 1980s the work had spread to laboratories at a number of universities and several large corporations. The programs could do things as diverse as playing chess, parsing sentences, translating simple sentences from one language to another, and inferring molecular structure from masses of spectral data. Enthusiasts believed there was no limit to the ability of information-processing theory to explain how the mind works, or to the ability of artificial intelligence to test those explanations by carrying out the same processes, and they believed such programs would eventually outperform human beings. In 1981 Robert Jastrow of the Goddard Institute for Space Studies predicted that "by 1995, according to current trends, we will see the silicon brain as an emergent form of life, one competitive with man."

But some psychologists, Neisser among them, felt that computers were merely mechanical simulations of certain aspects of mind and that computational models of mental processes captured only a limited part of it. Neisser himself had become "very disillusioned" with information-processing models by 1976, when he published his second book, Cognition and Reality. Strongly influenced by James Gibson and his "ecological" psychology, he argued that information-processing models were too narrow, too far removed from real-life perception, cognition, and purposeful activity, and took no account of the experience and information we continually absorb from the world around us.

Other psychologists, while not professing deep disappointment, have found ways to broaden the information-processing view to include the mind's use of schemas, shortcuts, and intuition, and its ability to process in parallel at both conscious and unconscious levels (a key topic we will come to shortly).

Still others have argued that computers programmed to think like human beings are not really thinking at all. Artificial intelligence, they say, is nothing like human intelligence; though it may far outstrip human thought at computation, it will never perform easily, or at all, the everyday tasks that human minds do effortlessly. The crucial difference is that a computer does not understand what it is thinking about. John Searle and Hubert Dreyfus, both philosophers at Berkeley, and the MIT computer scientist Joseph Weizenbaum, among others, have argued that a computer carrying out a reasoning program merely manipulates symbols without any understanding of what they mean or imply. The General Problem Solver, for example, can work out how a father and two small children get across a river, but only in algebraic terms; it has no idea what a father or a river is, what would happen to the people if they ended up in the water, or anything else about the real world.
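That kind of purely symbolic competence is easy to demonstrate. The minimal sketch below assumes the classic form of the puzzle, in which a small boat can carry either the father alone or one or two children and must be rowed by someone on every trip; a blind breadth-first search over symbolic states finds the five-crossing plan without, of course, knowing what a father, a child, or a river is.

```python
# A minimal sketch of the father-and-two-children river crossing solved as
# pure symbol manipulation. The rules assumed here are the classic ones.

from collections import deque

PEOPLE = ("father", "child1", "child2")
LEGAL_LOADS = [("father",), ("child1",), ("child2",), ("child1", "child2")]

def solve():
    # A state is (sides, boat), where sides gives each person's bank, "L" or "R".
    start = (("L", "L", "L"), "L")
    goal = (("R", "R", "R"), "R")
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (sides, boat), path = frontier.popleft()
        if (sides, boat) == goal:
            return path
        for load in LEGAL_LOADS:
            # Everyone in the load must be on the same bank as the boat.
            if any(sides[PEOPLE.index(p)] != boat for p in load):
                continue
            other = "R" if boat == "L" else "L"
            new_sides = tuple(other if p in load else s for p, s in zip(PEOPLE, sides))
            state = (new_sides, other)
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [f"{' and '.join(load)} cross to bank {other}"]))
    return None

for step in solve():
    print(step)   # five crossings: both children over, one back, father over, ...
```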
These programs, known as expert systems, are designed to assist people in their problem-solving work. They typically question the user in English and use the answers, together with their own stored knowledge, to work through a decision-tree of inferences, backing out of dead ends and narrowing the search until they reach a conclusion, to which they attach a confidence rating ("Diagnosis: lupus; confidence: 0.8"). By the mid-1980s dozens of such programs were in daily use in scientific laboratories, government agencies, and factories; by the end of the decade the number had reached the hundreds.

Yet although expert systems have a kind of cleverness that the computers in banks, airline reservation offices, and similar settings lack, in reality they do not know the meaning of the real-world information they handle, not in the sense that we know it. CADUCEUS, an internal-medicine consultation system, can diagnose five hundred diseases nearly as well as highly qualified clinicians, yet an authoritative textbook, Building Expert Systems, says that it "has no knowledge of the underlying pathophysiological processes involved" and cannot reason about medical questions outside, or even alongside, its specialty, not even those requiring only the most ordinary common sense. One medical diagnostic program raised no objection when a user asked whether amniocentesis might be useful; the patient was a man, but the system could not "realize" that the question was absurd. As John Anderson has put it: "One of the hard problems that human experts handle well is knowing the context in which their knowledge can be applied. A logic engine produces appropriate results only after the context has been carefully specified." But to specify context as broadly and richly as human beings do would require an unimaginable amount of data and programming.

Besides these, many psychologists and other scientists have raised further objections to the claim that artificial intelligence can think:

— AI programs, whether expert systems or programs with broader reasoning powers, have no sense of self and no sense of their own place in the world, which severely limits their ability to think about the real world.

— They cannot, at least so far, reason intuitively or approximately, nor can they think creatively. Some programs do generate new solutions to technical problems, but these are recombinations of existing data. Others have written poetry, composed music, and painted pictures, but their products have made no mark on the art world; in Dr. Johnson's classic phrase, they are like a dog's walking on his hind legs: it is not done well, but you are surprised to find it done at all.

— Finally, they have no emotions and no bodily sensations, although in human beings these profoundly influence, guide, and often mislead thinking and decision making.

Even so, the information-processing metaphor and the computer have played vital parts in the investigation of human reasoning. The information-processing model has generated a wealth of experiments, discoveries, and insights about cognitive processes that take place serially, and the computer, on which information-processing theories can be built and then confirmed or refuted, has become an invaluable laboratory tool.

In the past decade, however, the shortcomings of the information-processing model and the limits of AI simulation have brought on a second stage of the cognitive revolution: the emergence of a greatly revised information-processing paradigm. Its central idea is that although the serial model fits some aspects of cognition, most mental processes, especially the more complex ones, are the product of a very different mode of operation, parallel processing.

By a happy coincidence, or perhaps by cross-fertilization of ideas, this fits recent brain research very well. The newest findings show that in mental activity nerve impulses do not travel a single one-way path from neuron to neuron; they arise from the simultaneous firing of many intercommunicating circuits. The brain is not a serial processor but a massively parallel one.

Matching these developments, computer scientists have been creating a new computer architecture in which linked, intercommunicating processors work in parallel, influencing one another's operations in extremely complex ways and coming closer to the workings of brain and mind than serial computers can. The new architecture is not modeled on the brain's neural networks, most of which remain unmapped and are far too complex to copy, but it does carry out parallel processing in its own fashion.

The technical details of these three developments lie outside the scope of this book; their meaning and importance do not. Let us see what can be made of them.

In 1908 the French mathematician Henri Poincaré spent fifteen days trying to work out a theory of Fuchsian functions, without success. He then set the work aside and went off on a geological excursion. Just as he was boarding the bus, chatting with a fellow traveler, the answer suddenly appeared in his mind, so clear and certain that he did not even break off the conversation to verify it. When he later checked it, it proved correct.

The annals of creativity are full of such stories, and they suggest that the mind can carry on two (or more) lines of thought at once, one conscious and the other unconscious. Anecdote is not scientific evidence, but in the early years of the cognitive revolution a number of experiments on attention showed that the mind is not a single serial computer.

The most famous of these experiments was conducted in 1973. The experimenters, James Lackner and Merrill Garrett, had subjects wear headphones and attend only to what they heard in the left ear, ignoring the right. In the left ear they heard ambiguous sentences such as "The officer put out the lantern to signal the attack"; at the same time, some of them could hear in the right ear, if they were listening, a sentence that clearly disambiguated it ("He extinguished the lantern"), while others heard an unrelated sentence ("The Redskins play two games tonight").

Afterward, neither group could say what they had heard in the right ear. But when asked what the ambiguous sentences meant, those who had heard the unrelated sentence split into two camps, some saying the officer had put the flame out and others that he had lit it, while most of those who had heard the disambiguating sentence said the flame had been put out. Clearly, the disambiguating sentence had been processed simultaneously and unconsciously along with the ambiguous one.

This is one of many reasons why, in the 1970s, some psychologists began to hypothesize that the mind does not process serially. Another is that serial processing cannot account for most human cognition: neurons are too slow. A neuron operates in milliseconds, so a cognitive process that takes place in about a second has time for fewer than a hundred serial steps. Very few processes are that simple; many, including perception, recall, speech production, sentence comprehension, and the "matching" involved in recognizing faces, require far larger numbers.

By about 1980 a number of psychologists, information theorists, physicists, and others were developing detailed theories of how parallel processing systems might work. The theories are highly technical, involving advanced mathematics, symbolic logic, computer science, schema theory, and other arcana, but David Rumelhart, one of the movement's leaders, recently summed up in simple terms the reasoning that led him and fifteen colleagues to develop their own theory of "parallel distributed processing" (PDP):

Although the brain's components are slow, they are enormously numerous; the human brain contains billions of such processing elements. Rather than organizing computation as many serial steps, as we do in systems whose steps are fast, the brain must carry out its activities in the form of huge numbers of units working cooperatively and in parallel. Among other things, I believe, these design characteristics lead to an overall organization of computation very different from the one we are used to.

PDP also differs greatly from the then-current computer metaphor in its account of how information is stored. In a computer, information is held in the states of its transistors: each transistor is either on or off (representing 1 or 0), and strings of 1s and 0s stand for the digits that symbolically encode information of every kind. While the machine is running, electric current maintains these states and the information; switch it off and everything is lost. (Permanent storage on disk is another matter; the disk is outside the processing system, just as a written notebook is outside the brain.) The brain cannot store information in this way. For one thing, a neuron is not simply on or off; it accumulates inputs from thousands of other neurons and, when the excitation reaches a certain level, passes a pulse along to other neurons. But it stays excited for no more than a fraction of a second, so only very short-term memories can be stored as states of neurons. Moreover, since memories are not lost when the brain becomes unconscious in sleep or under anesthesia, long-term storage in the brain must be achieved in some other way.
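The parallel, connection-based style of computation being described here can be sketched in miniature. The three units, the weights, and the inputs below are made-up numbers chosen only to show the mechanism: every unit updates at the same time from the same previous state, and what the little network "knows" resides in its weights rather than in any stored symbol.

```python
# A minimal sketch of parallel, connection-based updating. All numbers are
# hypothetical; they only illustrate the mechanism described in the text.

import math

# weights[i][j] is the strength of the connection from unit j to unit i
# (positive = excitatory, negative = inhibitory).
weights = [
    [0.0,  0.8, -0.4],
    [0.5,  0.0,  0.6],
    [-0.7, 0.3,  0.0],
]
external_input = [1.0, 0.0, 0.5]
activations = [0.0, 0.0, 0.0]

def step(acts):
    """Update every unit simultaneously from the same previous state."""
    new = []
    for i, row in enumerate(weights):
        net = external_input[i] + sum(w * a for w, a in zip(row, acts))
        new.append(1.0 / (1.0 + math.exp(-net)))   # squash into the 0..1 range
    return new

for _ in range(10):
    activations = step(activations)
print([round(a, 2) for a in activations])   # the settled pattern of activation
```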
The new view, prompted by brain research, is that knowledge is stored not in the states of neurons but in the connections among them that experience has formed, or, in a machine, among the "units" of a parallel distributed processor. As Rumelhart puts it:

Almost all knowledge is implicit in the structure of the device that carries out the task... It is built into the processor itself and directly determines the course of processing. It is acquired through the tuning of connections, because these are what is used in processing, rather than being formed and stored as declarative facts.

The new theory is accordingly called "connectionism," currently the number-one buzzword in cognitive theory. The late Allen Newell remarked not long ago that connectionists regard their theory as the new paradigm of cognitive psychology and their movement as a second cognitive revolution.

A diagram drawn by Rumelhart and two colleagues makes PDP theory somewhat clearer, if you are willing to spend a few minutes on it. It is not a detailed picture of any piece of brain tissue but part of a network as connectionist theorists conceive one:

A hypothetical connectionist network (figure).

Units 1 through 4 receive inputs from the outside world (or from other parts of the network) plus feedback from the outputs of units 5 through 8. The connections among the units are indicated symbolically by the unnumbered circles: the larger an open circle, the stronger the excitatory connection; the larger a filled circle, the stronger the inhibition and the greater the interference passed along. Thus unit 1 does not affect unit 8 but does affect units 5, 6, and 7, each to a different degree. Units 2, 3, and 4 all affect unit 8, to very different degrees, and unit 8 in turn sends feedback to the input units, affecting unit 1 hardly at all, units 3 and 4 slightly, and unit 2 strongly. All of this goes on simultaneously, yielding a pattern of output rather than the single stream of a serial process.

Although Rumelhart and his colleagues say that "the appeal of PDP models is definitely enhanced by their physiological plausibility and neural inspiration," the units in the diagram are not neurons and the connections are not synapses. The diagram represents not a physiological structure but what goes on within one; the brain's synapses and the model's connections operate in their different ways to inhibit some connections while strengthening others. In both cases, the connections are what the system knows and determine how it responds to any input.

Here is a simple illustration: in the figure, what word is partly covered by the ink blots?

You probably said at once that the covered word is RED. But how did you know? Each partly covered letter could have been some letter other than the one you took it for.

Rumelhart and Jay McClelland explain your guess this way. The vertical line visible in the first letter is an input to your cognitive system that is strongly connected to the units registering R, K, and other such letters; the diagonal line is connected to R, K, and X. On the other hand, seeing each of these lines has no connection to, one might even say it inhibits, the units representing rounded letters such as C or O. Meanwhile, what you can see of the second letter is strongly connected to the units registering E and F, and experience has established RE, but not RF, as the beginning of an English word. And so on. Many connections operating simultaneously and in parallel enable you to see, instantly, the word RED and no other.

On a larger scale, the connectionist model of information processing fits well with other pioneering findings of cognitive psychology. Consider, for instance, what is known about the semantic memory network shown in Figure 39. Each node in that network, "bird," "canary," "sings," and so on, would correspond to a connectionist module something like the whole array in the last diagram, though made up of thousands of units rather than eight. Imagine enough such modules to register all the knowledge stored in a brain, each with millions of connections to related modules, and... but the task is simply too vast for the imagination. The connectionist architecture of the mind can no more be pictured whole than the structure of the universe.

The connectionist model is a strong analogy to the actual structure and functioning of the brain. Francis Crick, who shared a Nobel Prize for the co-discovery of the structure of DNA and who now works at the frontiers of neuroscience at the Salk Institute, says that the conception of the brain as a complex hierarchy of large parallel processors "is almost certainly along the right lines." Paul Churchland and Patricia Churchland, both philosophers of cognitive science, sum up current knowledge of brain architecture by saying that the brain really is a parallel machine, one in which "signals are processed simultaneously in millions of different pathways." Each assembly of neurons sends millions of signals to other assemblies and receives return signals from them that modify its output in one way or another. It is this recurrent pattern of connection that "makes the brain a genuine dynamical system whose continuing behavior is both highly complex and to some degree independent of its peripheral stimuli." That is how Descartes could lie abed all morning, lost in thought, as many a psychologist has done since.

Perhaps the most remarkable development of all is the change in the relationship between the computer and the mind. A generation ago, the computer seemed to be the model through which the reasoning mind might be understood. Now the order is reversed: the reasoning mind is the model on which more intelligent computers may be built. In recent years computer engineers have been designing and building parallel computers whose wiring allows as many as 64,000 processing units to operate simultaneously and influence one another, while AI researchers have been writing programs that simulate the parallel processing of small neural networks, on the order of a thousand neurons. Their purposes are several: to create intelligent programs that come closer to intelligence than serial ones do, and to write programs that simulate hypothesized mental processes so that the hypotheses can be tested on a computer.

It is a fine irony: the brain that makes thought possible has ended up as the model for a machine long supposed to be smarter than the brain, and the model is so complex and intricate that at present only a computer can handle even a miniature simulation of it.

As David, greatest of the psalmists, exclaimed twenty-five centuries ago, long before the cognitive revolution and the age of the computer: "I will praise thee; for I am fearfully and wonderfully made."
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book