
Chapter 11 The Role of Chance in Psychology

In the last chapter we discussed the importance of probabilistic trends, probabilistic thinking, and statistical reasoning. This chapter continues that topic, emphasizing the problems people encounter in understanding the concepts of randomness and chance. We will stress that the contribution of research to clinical practice is often misunderstood because people fail to appreciate how thoroughly chance runs through psychological theory. Our brains have evolved to seek patterns in the world relentlessly. We look for relationships, explanations, and meaning behind the things around us. Psychologists have studied this strong tendency. It is a defining feature of human intelligence and accounts for the astonishing range of human abilities in information processing and knowledge acquisition.

However, this remarkably adaptive feature of human cognition can sometimes backfire. When there is nothing in the environment to be conceptualized, we blindly seek conceptual understanding anyway, and the search becomes maladaptive. What, exactly, causes trouble for this most distinctive aspect of human cognition? What disrupts our search for structure and blocks our understanding of things? You guessed it: probability. More specifically, chance and randomness. Chance and randomness are an integral part of our surroundings. The laws of chance govern the mechanisms of biological evolution and genetic recombination, and physics uses statistical laws of chance to explain the basic structure of matter. Many things that happen in nature result from the interaction of systematic, explainable factors and chance factors. Recall the example discussed earlier: smoking causes lung cancer. The systematic, explainable biological link between smoking and the disease does not imply that every smoker will develop lung cancer; the trend is probabilistic. Perhaps we will eventually be able to explain why some smokers do not develop lung cancer, but at this stage that variability must be attributed to the large number of chance factors that determine whether a given person develops a disease.

This example shows that when something depends on chance, it is not necessarily indeterminate in principle, only indeterminate at present. A coin toss is a chance event, but that does not mean its outcome could not be determined if we measured the angle of the toss, the metal content of the coin, and many other variables. In fact, those variables do determine the outcome of the toss. We call a coin toss random because we have no easy, quick way to measure those variables on each toss. The outcome is not strictly indeterminate; it is just uncertain at the moment.

Many events in the world cannot be fully explained in terms of systematic factors, at least not yet. However, when no systematic explanation for a phenomenon is available, the concept-seeking "device" in our heads often keeps humming along, trying to impose meaningless theories on what is really random data. Psychologists have studied this phenomenon experimentally. In one experimental situation, subjects observed a series of stimuli that differed along multiple dimensions and were told that some of the stimuli belonged to one category and some to another; their task was to decide which of the two categories each stimulus fell into. In fact, the researchers had assigned the stimuli to categories randomly, so there was no regularity to be found. Yet subjects rarely dared to guess randomly. Instead, they usually racked their brains to invent complex taxonomies and to explain how they had categorized the stimuli.

Similarly, "conspiracy theories" of all kinds usually require one set of elaborate rhetoric after another to explain events that were actually produced by the random factors the conspiracy theorists are desperately trying to understand. This phenomenon is typical even of the work of various authorities in their own fields of expertise. The thinking of many financial analysts embodies this fallacy. They routinely concoct elaborate explanations for every small movement in stock market prices, when in reality most of the variation is just random fluctuation (Malkiel, 2004; Taleb, 2001). Yet stock market analysts constantly imply to clients that they can (and perhaps believe they can) "beat the market," even though overwhelming evidence shows that most of them cannot. Over the past few decades, if you had bought all 500 stocks in the S&P index and then left them alone (what we might call the "dumb strategy": buy an index mutual fund), your returns today would be higher than what two-thirds of Wall Street stockbrokers earned for their clients (Egan, 2005; Hulbert, 2006; Malkiel, 2004; Updegrave, 1995), and your performance would beat 80 percent of the financial newsletters whose subscription fees have risen to $500 a year (Kim, 1994).

But what do we make of those brokers who actually do beat the dumb strategy? You might wonder whether that means they have some special talent. We can answer this question by imagining an experiment: take 100 monkeys, each holding 10 darts, and let them all throw their darts at a wall covered with the names of the S&P 500 stocks to pick which stocks to buy. What will their performance look like a year from now? How many monkeys will beat the S&P 500? Congratulations, you got it: about half of them. So, would you pay the half of the monkeys that beat the S&P 500 to pick stocks for you next year?
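A small simulation makes the point concrete. This is only a sketch: the normal return distribution (mean 8 percent, standard deviation 20 percent) and the trial counts are illustrative assumptions of mine, not figures from the text. Because a randomly chosen portfolio's return is drawn from the same distribution as the index, it comes out ahead of the index about half the time by symmetry alone:

```python
import random

random.seed(1)

def monkey_beats_index(n_stocks=500, picks=10):
    """Simulate one market year: each stock's return is a random draw,
    the index is the average of all stocks, and the monkey's portfolio
    is the average of 10 randomly chosen stocks."""
    returns = [random.gauss(0.08, 0.20) for _ in range(n_stocks)]
    index_return = sum(returns) / n_stocks
    portfolio = random.sample(returns, picks)
    return sum(portfolio) / picks > index_return

trials = 2000
wins = sum(monkey_beats_index() for _ in range(trials))
print(wins / trials)  # hovers around 0.5: about half the monkeys "beat the market"
```

No skill is involved anywhere in the simulation, yet roughly half the dart-throwers "win" every year.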

An extension of this financial forecasting example demonstrates the logic by which purely random events can be made to appear to be caused by predictable factors (Fridson, 1993; Paulos, 1988). Suppose you receive a letter telling you about a stock market forecasting newsletter. There is no charge; you are simply invited to follow its recommendations and see whether its predictions work. It tells you that IBM's stock will climb in the next month. You toss the newsletter away, but you do notice that IBM stock goes up that month. If you have ever read a book like this one, you will regard that as unremarkable, a fluke guess. Then you get another letter from the same investment advisory firm saying that IBM stock will go down the next month, and when it does go down, you still take it as a fluke, but this time you might be a little curious. When the company's third newsletter predicts that IBM will fall again next month, you find yourself paying more attention to those few pages of financial content. Then the newsletter turns out to be right again: IBM does drop again that month. When the fourth newsletter says that IBM will go up next month, and it does, you cannot help thinking that this newsletter is really good, and you are sorely tempted to pay $29.95 a year to subscribe to this valuable publication. The temptation is too great to resist, unless you can imagine that at this very moment, in some shabby basement, someone is preparing 1,600 newsletters to be sent out next week to 1,600 addresses from the yellow pages, 800 of which forecast that IBM will rise next month while the other 800 forecast that it will fall. When IBM does go up the next month, the company sends the next newsletter only to the 800 "clients" who received the correct forecast (of these, of course, 400 now receive a forecast of a rise and the other 400 a forecast of a fall). Then, as you can imagine, this "boiler room," possibly with telemarketing hustlers fanning the flames in the background, sends out the third month's forecasts (200 predicting a rise, 200 a fall), and then the fourth. Yes, you are one of the 100 lucky ones who received four correct random predictions in a row! And most of those 100 "lucky ones" will pay their $29.95 to keep the newsletter coming.
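The arithmetic behind the scam is simple halving, and can be checked in a few lines. This sketch just tracks how many recipients have seen nothing but correct forecasts after each mailing:

```python
recipients = 1600  # initial mailing: 800 predict "up", 800 predict "down"

history = []
for mailing in range(4):
    history.append(recipients)
    # whichever way IBM actually moves, exactly half the recipients
    # received the "correct" forecast and get the next mailing
    recipients //= 2

print(history)     # [1600, 800, 400, 200]
print(recipients)  # 100 people have now seen four correct forecasts in a row
```

Notice that the scheme requires no forecasting ability at all; chance does all the work.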

Now this looks like a terrible scam. It is. And when "respected" financial magazines or TV programs recommend "stockbrokers who have beaten more than half of their rivals four years in a row," it is not much better. Think back to the dart-throwing monkeys, and imagine the monkeys as stockbrokers picking stocks year after year. Obviously, 50 percent of them will beat their rivals in the first year. The next year, half of that 50 percent will, at a purely chance level, beat their rivals again, so 25 percent will have beaten their rivals two years running. In the third year, another half will, again by chance, come out ahead, leaving 12.5 percent who have beaten their rivals three years in a row. Finally, in the fourth year, half of these (6.25 percent of the total) will beat their rivals once more. Thus, about 6 of 100 monkeys will achieve, by chance alone, the impressive record that financial programs and newspapers tout as "beating other brokers four years in a row." These six monkeys have beaten their dart-throwing fellows (and, as you have seen, most real-life Wall Street brokers; see Egan, 2005; Malkiel, 2004), so they qualify for an appearance on "Wall Street Week." What do you think of that?
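The four-year-streak figure can be verified both analytically and by simulation. This is a sketch under the chapter's own assumption that beating the market in any year is a 50:50 coin flip; the large monkey count is mine, used only to make the simulated fraction stable:

```python
import random

random.seed(7)

n_monkeys = 100_000  # use many monkeys so the streak fraction is stable
years = 4

# A monkey "beats the market" in a given year with probability 1/2,
# independently each year; a streak means beating it all four years.
streaks = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(n_monkeys)
)

print(0.5 ** years)        # 0.0625: the chance of a four-year streak
print(streaks / n_monkeys) # close to 0.0625, i.e. about 6 monkeys per 100
```

So a population of 100 purely random pickers is expected to produce about 6 "four-year champions" with no skill whatsoever.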

People's tendency to explain chance events is known in psychological research as illusory correlation. When people believe that two classes of events should normally occur together, they think they see frequent co-occurrence, even when the pairing of the two events is random and occurs no more often than any other pairing. In short, even when faced with random events, people tend to see the connections they expect to see (Nisbett & Ross, 1980; Stanovich, 1999, 2004). They see regularity where there is none. Many controlled studies (e.g., King & Koehler, 2000; Stanovich & West, 1998) have shown that when people come in with the preset idea that two variables are related, they will find a link in the data even when the two variables are entirely unrelated. Unfortunately, this finding applies widely in real life, where it has negative effects on people's lives. For example, many people who work in psychotherapy have long believed in the validity of the Rorschach inkblot test. This famous test asks respondents to react to ink blots. Because the blots lack structure, the theory goes, people respond to them with their characteristic ways of handling ambiguous situations, revealing their "hidden" psychological qualities. The test is called a projective test because it assumes that subjects will project their unconscious inner activities and feelings onto the inkblots. The problem, however, is that there is no evidence that the Rorschach, used as a projective test, provides any additional diagnostic validity (Garb, Florio, & Grove, 1998; Lilienfeld, 1999; Lilienfeld et al., 2000; Wood, Nezworski, & Stejskal, 1996; Wood, Nezworski, Lilienfeld, & Garb, 2003). Confidence in the Rorschach stems from illusory correlation. Clinical psychologists see a connection in patients' response patterns because they believe a connection is there, not because they actually observe one.

Psychologist Ray Hyman has discussed people's tendency to look for patterns where there are none. Many of the relationships in our lives involve a great deal of chance: a date between a man and a woman who did not know each other ends in marriage; a canceled appointment costs someone a job; a missed bus leads to a run-in with an old high school classmate. It is a mistake to think that every small chance event in life needs an elaborate explanation. Yet when chance events have important consequences, people cannot help constructing complicated theories to explain them. The tendency to explain chance events may stem from our deep desire to believe that we can control them. Psychologist Ellen Langer studied the phenomenon of the illusion of control: people's tendency to believe that personal skill can influence the outcome of chance events. In one study, employees at two different companies sold lottery tickets to their colleagues; some buyers were simply handed a ticket, while others could pick their own. Of course, in a random lottery draw it makes no difference whether the ticket is picked or handed over; the chance of winning is the same. Yet the next day, when the sellers tried to buy the tickets back, the colleagues who had picked their own tickets demanded four times as much for them as those who had been handed theirs! In several further experiments Langer confirmed this finding, which arises because people cannot accept the fact that personal factors have no effect on chance events. Evidence for how widespread this illusion is comes from the experience of state lotteries in the United States. The states are awash in pseudoscientific books teaching people how to "beat" the lottery, and these books sell well because people do not understand what randomness means. Indeed, ever since New Jersey introduced a new way of selling lottery tickets in the mid-1970s, lottery sales in American states have exploded. The method lets buyers scratch off or pick their own numbers (Clotfelter & Cook, 1989; Thaler, 1992, p. 138). Lotteries run this way are often called "participatory lotteries," and they exploit exactly the illusion of control that Langer studied: people mistakenly believe that their own participation can determine a random event.

Some psychologists have studied a related phenomenon called the just world hypothesis: people's tendency to believe that they live in a fair world in which everyone gets what he or she deserves (Hafer & Begue, 2005). Researchers have found experimental evidence of a belief in "just punishment": people look down on the chance victims of misfortune. The tendency to seek explanations for chance events leads to this phenomenon; it is hard to believe that a person of high moral character could suffer misfortune purely by chance. Although we want to believe that good things happen to good people and bad things to bad people, chance is impartial and works quite differently: good and bad things happen to all kinds of people with equal probability. Pushed to the extreme, belief in a just world can lead to harmful, even inhumane doctrines. Consider the logic of a U.S. Department of Education official who said in the early 1980s that people with disabilities "falsely believe that the accidents of life are punishing them, when nothing has befallen them that they did not bring about themselves... This may sound unfair, but a person's external circumstances do correspond to the development of his inner soul" (Gilovich, 1991, p. 143). As Gilovich points out, "This really should not be the philosophy of an official seeking a top post in the Department of Education, the agency charged with providing equal educational opportunity to people with disabilities" (pp. 143-144). But if we refuse to attribute such outcomes to chance, this inhumane philosophy is where we end up.
The misunderstanding of chance embodied in the just world assumption also feeds other false folk beliefs that make spurious correlations easy to see. For example, we noted in Chapter 6 that "blind people have unusually acute hearing" is a false belief, one that may persist because the association reflects the idea that "heaven is fair," which is exactly what people want to see.

In psychology, too, researchers are tempted to explain everything, hoping that their theories will account not only for the systematic, nonrandom components of behavior but for every subtle variation as well. This tendency has produced a proliferation of unfalsifiable psychological theories, both those proposed by individuals and those that merely look scientific. Practitioners of "psychohistory" often make mistakes of this kind: every small change and turning point in a famous person's life is interpreted through psychoanalytic theory. The problem with most psychohistories is not that they explain too little but that they explain too much. Practitioners of this method rarely admit that a person's life is shaped by many chance factors.

Understanding the role of chance is important for the layperson who wants to apply psychological knowledge. Formally trained psychologists accept that their theories account for some, but not all, of the variation in human behavior; they are comfortable with chance. The guest on Oprah (see the beginning of Chapter 4) who can explain every case and every detail of human behavior should arouse not admiration but suspicion. True scientists are not afraid to admit their ignorance. In short, another practical rule for evaluating psychological claims is this: before accepting a complex explanation of an event, consider the part chance may have played in it.
This tendency to seek explanations for purely chance events also leads us to misunderstand the nature of many coincidences. Many people think coincidences require special explanation; they do not understand that coincidences occur through nothing other than chance and need no special explanation. Webster's New World Dictionary defines a coincidence as "an accidental and remarkable concurrence of related or identical events," and defines accidental as "happening by chance," so there is nothing wrong with the definition: a coincidence is simply the co-occurrence of related events by chance. Unfortunately, many people do not interpret coincidence that way. The tendency to seek patterns and meaning in events, combined with the "magical" feel of coincidences, leads many people to forget that coincidences can be explained by chance and to seek special explanations for them instead. You have heard the story a thousand times: "I was sitting there thinking how long it had been since I called old Uncle Bill in Texas, and the next moment the phone rang. Guess what? It was my old Uncle Bill calling. There must be some telepathy behind this!" This is a classic example of concocting an explanation for a coincidence. Most of us probably think of many people, near and far, every day. How many of those people actually call just as we are thinking of them? Almost none. In a year we may think of hundreds of people who never call. Eventually, after hundreds of such unnoticed "misses," someone will call at just the moment we are thinking of him or her. Such events are rare, but rare things happen, by sheer chance. Any other explanation is superfluous.
If people truly understood what a coincidence is (a remarkable event that happens by chance), they would not fall into the trap of seeking systematic, nonchance explanations for them. But the opposite is true: for many people, coincidences demand explanations that go beyond chance. Many have heard someone say, "My God, what a coincidence! I'd really like to know why!" For this reason, Marks (2001) suggested that we use the relatively neutral term rare match to describe the joint occurrence of two events that astonishes us. The tendency to seek explanations for coincidences is fueled by the false belief that rare things do not happen, so that a rare match cannot be accidental. One reason such false beliefs are so strong is the way we talk about probability in terms of odds. Consider one way of putting it: "My God, this could hardly ever happen! The odds are only 1 in 100!" Put that way, it sounds as if the event will never happen. Of course, we can say the same thing another way, and it feels entirely different: "In 100 similar situations, this outcome will occur about once." This phrasing emphasizes that although the event is rare, in the long run rare things eventually happen. In short, rare matches occur by chance.
In fact, the laws of probability ensure that as the number of events increases, some rare matches become ever more likely. The laws do not merely permit rare matches; in the long run they all but guarantee them. Take Marks's (2001) example: if you toss 5 coins at once and all come up heads, you would call it a rare match, an unlikely event. True, its probability is 1/32, about 0.03. But if you toss those 5 coins 100 times, what is the probability that they all come up heads at least once? The answer is 0.96; over 100 tries, this rare match is very likely to occur.

Some years ago, Ayn Rand circulated a widely repeated list of "creepy" coincidences linking Presidents Abraham Lincoln and John F. Kennedy:

1. Lincoln was elected president in 1860; Kennedy was elected in 1960.
2. Both Lincoln and Kennedy were deeply involved in civil rights.
3. Both the names Lincoln and Kennedy have 7 letters.
4. Lincoln had a secretary named Kennedy; Kennedy had a secretary named Lincoln.
5. Both were succeeded by Southerners named Johnson.
6. Both were assassinated by men with three names (John Wilkes Booth and Lee Harvey Oswald).
7. Both Booth and Oswald held unpopular political views.
8. Booth shot Lincoln in a theater and hid in a warehouse; Oswald shot Kennedy from a warehouse and hid in a theater.
Of course, as coincidences go, the connections among these events are not creepy at all. John Leavy (1992), a computer programmer at the University of Texas, once ran a "spine-chilling presidential coincidences contest" to show how easy it is to compile such a list for virtually any two presidents (see Dudley, 1998). Leavy's article, for example, drew parallels between William Henry Harrison and Zachary Taylor, Polk and Carter, Garfield and McKinley, Lincoln and Jackson, Nixon and Jefferson, Washington and Eisenhower, Grant and Nixon, and Madison and Wilson. Here are the striking similarities between Garfield and McKinley:

1. Both McKinley and Garfield grew up in Ohio.
2. Both McKinley and Garfield were veterans of the American Civil War.
3. Both McKinley and Garfield served in the House of Representatives.
4. Both McKinley and Garfield supported the gold standard and protective tariffs to shield American industry.
5. Both the names McKinley and Garfield have 8 letters.
6. Both McKinley and Garfield were succeeded by vice presidents from New York City: Theodore Roosevelt and Chester Alan Arthur.
7. Both Roosevelt and Arthur had 17 letters in their names.
8. Both vice presidents wore beards.
9. Both McKinley and Garfield were shot in September of their first year in office.
10. Neither of their assassins, Charles Guiteau and Leon Czolgosz, had a name that sounded American.

Many such lists of presidential parallels read the same way. In short, given the complexity of the interactions and events in the decades of a person's life, with a sample space containing thousands of events, it would be astonishing if there were no similarities between any two people (Martin, 1998).
Knowing when not to concoct complex explanations for events produced by purely random factors has practical implications. The writer Atul Gawande has described cognitive psychologist Daniel Kahneman's dealings with the Israeli Air Force during the 1973 Yom Kippur War. Two squadrons went out, and one returned having lost four planes while the other lost none. The military wanted Kahneman to investigate whether special factors accounted for the difference. Instead of investigating, Kahneman used exactly the ideas in this chapter and told the Air Force not to waste its time: "Kahneman knew that if Air Force officials did investigate, they would inevitably find some measurable difference between the two squadrons and would feel compelled to act on it" (Gawande, 1999, p. 37). But Kahneman knew that any factor they found would very likely be spurious, the product of pure chance fluctuation.

Rare matches in our personal lives often carry special meaning for us, and we are especially reluctant to attribute them to chance. There are many reasons for this tendency; some are motivational and emotional, others are failures of probabilistic reasoning. We often fail to realize that the rare matches we notice are only a tiny fraction of a very large pool of "possible events." To some of us rare matches may seem to happen regularly, but do they really?
Consider what happens if we analyze the rare matches in your own life. Suppose you are involved in 100 distinct events on a given day. Given the complexity of life in a modern industrial society, this figure is not an overestimate; it may well be an underestimate. You watch television, make phone calls, meet people, discuss directions to work or to the mall, run tedious errands, read for information, complete complex tasks at work, and so on, and each of these contains many individually memorable components. So 100 events is really not many, but let us count it as 100. A rare match is a pair of events that seem improbably linked. How many different pairwise combinations are there among those 100 events in a typical day? A simple formula gives the answer: 4,950 different pairings per day, with 365 days in a year. We know that rare matches are memorable; the day Uncle Bill called could stay with you for years. If you counted all the rare matches you can remember over 10 years, you might come up with 6 or 7 (more or fewer; people have different standards for "rare"). How large is the sample of possible events these 6 or 7 came from? At 4,950 pairings per day, times 365 days a year, times 10 years, the total is 18,067,500 pairings. In short, for every 6 pairings that struck you as rare matches over 10 years, there were 18,067,494 other pairings that could also have been rare matches. The probability of a rare match occurring in your life is thus about 0.00000033. Six rare matches out of 18 million pairings is rare indeed, but not astonishing. Rare events do happen; they are rare, but chance guarantees that they will occur (recall the five-coin example above). In our example, 6 strange things happened to you over a decade that qualify as coincidences: two related events co-occurring in an uncanny way.
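Every number in this calculation, along with the earlier five-coin example, can be reproduced directly. This is just the arithmetic from the text restated as code:

```python
from math import comb

# Distinct pairs among 100 events in a day: "100 choose 2"
pairs_per_day = comb(100, 2)
print(pairs_per_day)                 # 4950

# Pairings over ten years of 365-day years
total_pairs = pairs_per_day * 365 * 10
print(total_pairs)                   # 18067500

# Chance that a given pairing is one of your 6 remembered rare matches
print(6 / total_pairs)               # ≈ 0.00000033

# The earlier coin example: five heads at once, and at least
# one all-heads outcome somewhere in 100 repetitions
p_all_heads = 0.5 ** 5
print(p_all_heads)                   # 0.03125, about 0.03
print(1 - (1 - p_all_heads) ** 100)  # ≈ 0.96
```

The point of the code is the same as the chapter's: a handful of "impossible" matches is exactly what an 18-million-pairing sample space should produce.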
Psychologists, statisticians, and other scientists have pointed out that many rare matches are actually far less "rare" than people believe. The famous "birthday problem" is the best example. In a class of 23, what is the probability that two people share a birthday? Most people would say it is very low. In fact, in a class of 23 the probability that two people have the same birthday is greater than 50 percent, and in a class of 35 it is greater still (more than 0.80; see Martin, 1998). So, given the 43 presidents in American history, it should be no surprise that James Polk and Warren Harding were born on the same day (November 2). Likewise, with 38 presidents having died, it should not be surprising that Millard Fillmore and William Taft died on the same day (March 8), or even that 3 presidents, John Adams, Thomas Jefferson, and James Monroe, all died on the same date, and that the date turned out to be the Fourth of July, Independence Day! Is the last fact magical? No, it is just a matter of probability.

Trying to explain everything that happens in the world while refusing to acknowledge the role of chance actually reduces our ability to predict reality. Acknowledging the role of chance in a domain means accepting that our predictions cannot be 100 percent accurate, that some predictive error is inevitable. The interesting thing is that admitting our forecasts will be less than perfect actually helps us improve overall predictive accuracy. It sounds like a contradiction, but it is true: to reduce error, we must accept error (Dawes, 1991; Einhorn, 1986).
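Before moving on, the birthday figures quoted above are easy to verify. This sketch uses the standard simplifying assumptions (365 equally likely birthdays, no leap years), which are mine rather than the text's:

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday
    (assuming 365 equally likely birthdays and ignoring leap years)."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(p_shared_birthday(23))  # ≈ 0.507: better than even in a class of 23
print(p_shared_birthday(35))  # ≈ 0.814: over 0.80 in a class of 35
```

The trick is to compute the probability that all n birthdays are distinct and subtract it from 1; intuition fails here because the number of pairs grows much faster than the number of people.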
The notion that we must accept error in order to reduce error can be demonstrated with a very simple experimental task that has been studied for decades in cognitive psychology laboratories (Fantino & Esfandiari, 2002; Gal & Baron, 1996). The task works like this: the subject sits in front of two lights, one red and one blue, and is asked to predict, on each trial, which light will come on; over many trials, rewards depend on accuracy. In fact, the red light comes on 70 percent of the time and the blue light 30 percent of the time, in random order. Subjects quickly sense that the red light comes on more often, and so they predict red on more trials; indeed, they come to predict red on about 70 percent of the trials. However, as discussed earlier, subjects come to believe they have detected patterns in the sequence; they never treat it as random. Trying to be right on every single trial, they alternate, predicting red 70 percent of the time and blue 30 percent of the time. Subjects rarely realize that their predictions would be better if they gave up trying to hit every trial! Why is this so?
Let us think through the logic. If a subject predicts red on 70 percent of the trials and blue on 30 percent, with the lights themselves coming on in a random 70:30 ratio, what will the accuracy be? Take 100 trials from the middle part of the experiment, after the subject has noticed that red comes on more often and has started predicting red on 70 percent of the trials. The red light comes on in 70 of the 100 trials, and on those trials the subject is correct 70 percent of the time (because red is predicted on 70 percent of trials): 49 correct predictions out of 70. The blue light comes on in 30 of the 100 trials, and on those the subject is correct 30 percent of the time (because blue is predicted on 30 percent of trials): 9 correct out of 30. So over 100 trials the subject is correct 58 times. But notice what a poor score this is! A subject who simply noticed which light came on more often and always predicted that light (here, a "100 percent red" strategy) would be correct on 70 of 100 trials. True, on the 30 blue-light trials that subject would not make a single correct prediction, yet overall accuracy would still be 70 percent: 12 percentage points higher than the 58 percent earned by switching back and forth between red and blue in pursuit of being "perfectly" right. The higher accuracy of the 100 percent red strategy, however, comes at a price: the hope of being right on every trial must be given up.
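The accuracy figures for the two strategies can be checked directly. The 70:30 rates come from the experiment as described; the simulation itself is only a sketch of mine:

```python
import random

random.seed(3)

P_RED = 0.7
trials = 100_000

# "Probability matching": on each trial the light is red with
# probability 0.7, and independently the prediction is red with
# probability 0.7; a hit occurs when the two draws agree.
matching_hits = sum(
    (random.random() < P_RED) == (random.random() < P_RED)
    for _ in range(trials)
)

# "Maximizing": always predict red, so a hit occurs whenever
# the light actually is red.
maximizing_hits = sum(random.random() < P_RED for _ in range(trials))

print(round(0.7 * 0.7 + 0.3 * 0.3, 2))  # 0.58: expected accuracy of matching
print(matching_hits / trials)           # close to 0.58
print(maximizing_hits / trials)         # close to 0.70
```

The expected accuracy of matching is 0.7 × 0.7 + 0.3 × 0.3 = 0.58, exactly the 58-out-of-100 figure worked out above, while the maximizing strategy earns 0.70.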
(Obviously, because the participant always predicts red, he gives up any chance of being correct on the occasional trials when the blue light comes on.) This is what accepting mistakes in order to make fewer mistakes means: giving up the goal of never being wrong produces higher overall accuracy. In the same way, when predicting human behavior, it is sometimes necessary to accept error in order to reduce error; that is, while relying on general principles to make more accurate predictions, we must also admit that we cannot be right about every individual case.

But "accepting error to reduce error" is hard to do. In psychology, this is borne out by 40 years of research on clinical versus statistical prediction. Statistical prediction is prediction based on group trends derived from statistical data; the group (that is, aggregate) predictions discussed at the beginning of this chapter are of this kind. A simple statistical prediction makes the same prediction for every individual who shares a given characteristic. For example, predicting a life expectancy of 77.5 years for nonsmokers and 64.3 years for smokers is a statistical prediction. Taking more than one group characteristic into account (using the complex correlational techniques discussed in Chapter 5, especially multiple regression) makes our predictions more accurate still. For example, predicting a life expectancy of 58.2 years for people who smoke, are obese, and do not exercise is a statistical prediction based on multiple variables (smoking behavior, weight, and amount of exercise), and such multivariate predictions are invariably more accurate than predictions based on a single variable. Statistical prediction is common in economics, human resources, criminology, business and marketing, and medicine. In many subfields of psychology, such as cognitive, developmental, organizational, personality, and social psychology, knowledge is expressed in terms of statistical prediction. In contrast, some clinical practitioners claim that they can go beyond group predictions and make completely accurate predictions about particular individuals. This is called clinical, or case, prediction, and it stands opposed to statistical prediction.

Clinical prediction might seem a useful supplement to statistical prediction; the problem is that clinical prediction is not accurate. If clinical prediction were valid, a clinician's experience with patients and skillful use of the information patients provide should allow him to make better predictions than those obtained by coding the patient's information and feeding it into a statistical procedure that processes quantified data. In short, the claim is that clinical practitioners' experience lets them capitalize on relationships not yet revealed by research. The claim that "clinical prediction is valid" is easy to test; unfortunately, when tested, it has proved false.

Studies comparing clinical and statistical prediction have produced remarkably consistent results. Since the publication in 1954 of Paul Meehl's classic book Clinical
Versus Statistical Prediction, more than 100 studies spanning four decades have shown that statistical prediction outperforms clinical prediction in virtually every domain ever examined: the outcome of psychotherapy, parole behavior, college graduation rates, response to electroconvulsive therapy, criminal recidivism, length of psychiatric hospitalization, and so on (Dawes, Faust, & Meehl, 1989; Faust, Hart, Guilmette, & Arkes, 1988; Goldberg, 1959, 1968, 1991; Ruscio, 2002; Swets et al., 2000; Tetlock, 2005).

In a variety of clinical domains, researchers have given clinicians a set of information about a patient and asked them to predict the patient's behavior. At the same time, the same information was quantified and analyzed with a statistical equation built from relationships uncovered in prior research. The statistical equation won every time, demonstrating that statistical prediction is more accurate than clinical prediction. In fact, even when the clinicians had access to more information than the statistical method did, the latter still predicted more accurately. That is, clinicians who had the same quantified data as the statistical procedure, plus information gained from personal contact and interviews with the patient, still could not match the accuracy of the statistical prediction: "Even when given an information edge, the clinical judge still fails to surpass the actuarial method; in fact, access to additional information often does nothing to close the gap between the two methods" (Dawes et al., 1989, p. 1670). The reason, of course, is that the statistical equation integrates the various pieces of information according to an optimal criterion, and does so accurately and consistently. Optimality and consistency are precisely what wipe out any advantage conferred by the information clinicians collect through informal means.

The clinical-versus-statistical literature also includes studies in which clinicians were given the prediction generated by the statistical equation and allowed to adjust it on the basis of their experience with the patient. The result: after clinicians adjusted the statistical predictions, accuracy did not increase but decreased (see Dawes, 1994). Here again is a perfect example of failing to "accept error in order to reduce error," closely analogous to the red-and-blue-light experiment described above. When participants should have used the statistical information about which light came on more often and adopted the strategy of predicting red every time (achieving 70% accuracy), they instead switched back and forth between red and blue in pursuit of being right every time, and their accuracy dropped by 12 percentage points (to 58%). Likewise, in the studies just described, clinicians believed their experience gave them "insights" that would allow them to predict better than the quantitative data alone. In fact, these "insights" do not exist; their predictions were worse than those based on the public statistical information. Finally, it should be noted that the superiority of statistical prediction is not limited to psychology; it extends to many other clinical sciences, for example, the interpretation of electrocardiograms in medicine (Gawande, 1998).

Regarding the research showing the advantage of statistical over clinical prediction, Meehl (1986) remarked: "There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction" (pp. 373-374). Embarrassingly, however, psychology has failed to apply this knowledge. For example, the discipline continues to rely on personal interviews in procedures such as graduate admissions and the selection of mental health trainees, despite abundant evidence that interviews lack validity. Clinicians likewise continue to use specious evidence to justify their reliance on "clinical intuition" rather than on more valid aggregate prediction, as Dawes et al. (1989) have pointed out.

One analogy here is to ask yourself how you react to the following scientific finding: surgeons who have performed many operations of a given type are more likely to succeed in the next such operation (Christensen, 1999). Now suppose surgeon A performs a certain operation frequently and rarely fails, while surgeon B has never performed it and is likely to have a high failure rate. Which of the two would you want to operate on you? If you believe that "probability does not apply to the individual case," you should not mind letting surgeon B do your surgery.

Acknowledging that statistical prediction outperforms clinical prediction on questions such as the effectiveness of psychotherapy costs psychology nothing in prestige, because the same rule holds in medicine, business, criminology, accounting, and even livestock judging (see Dawes, 1994; Dawes et al., 1989; Dowie & Elstein, 1988). Although psychology as a whole loses nothing from these findings, they do of course threaten the reputation, and the income, of those clinical practitioners who present themselves as "experts" and persuade patients that they possess unique clinical knowledge of the individual case. Yet, as McFall and
Treat (1999) warned in an article on the value of clinical assessment: "The things we try to assess and predict are inherently probabilistic. This means we cannot expect nature to be so obliging that we can predict single events with complete certainty. The most we can hope for is to identify a range of possible outcomes and estimate the relative likelihood of each. From this probabilistic perspective, the idealized goal of traditional clinical assessment, precise prediction of unique future events, is naive, reflecting our ignorance or arrogance, or both" (p. 217).

In fact, both psychology and society at large would benefit if "accepting error to reduce error" became a habit. In trying to construct a unique explanation for every unusual event (unique explanations that, given the current state of our knowledge, may simply be impossible), we often forfeit our ability to predict the far more numerous ordinary events. Recall the red-light/blue-light experiment once more. Admittedly, the "100% red strategy" mispredicts every occurrence of the rarer, unusual event (the blue light coming on). But what happens if we focus on the rarer event and adopt the "70% red, 30% blue strategy"? We correctly predict 9 of the 30 unusual events (30 x 0.3), at the cost of losing correct predictions on 21 of the common events: instead of 70 correct red predictions, we get only 49 (70 x 0.70). Behavioral prediction in clinical settings follows the same logic. Concocting an elaborate explanation for each individual case may indeed capture a small number of unusual events, but only at the cost of mispredicting the majority of events, where simple statistical prediction is more effective. Gawande (1998) notes that medicine, too, needs to learn to "accept error to reduce error." In his view, the emphasis in medicine on intuitive, individualized treatment "is flawed: in attempting to acknowledge and accommodate human complexity, it invites more errors rather than fewer" (p. 80).

Wagenaar and Keren (1986) showed how overconfidence in personal knowledge and neglect of statistical information can undermine traffic-safety campaigns promoting seat-belt use, because people reason, "I am different from everyone else; I drive safely." The problem is that 85% of people believe they are "more skilled than the average driver" (Svenson, 1981), which is obviously absurd.

The same fallacy, that "statistics do not apply to the single case," is an important factor in sustaining compulsive gambling. In his research on gambling behavior, Wagenaar (1988) found that compulsive gamblers show a strong resistance to "accepting error to reduce error." For example, blackjack players commonly refuse to use a basic strategy (see Wagenaar, 1988, Chapter 2) that is guaranteed to reduce the house's edge from 6% or 8% to under 1%. Basic strategy is a long-run statistical strategy, and compulsive gamblers reject it because they insist that "an effective strategy ought to work on every single hand" (p. 110). The gamblers in Wagenaar's research "invariably said that general strategies of this kind would not work, because they ignore the unique features of each particular situation" (p. 110). These gamblers discard a statistical strategy guaranteed to save them thousands of dollars in order to pursue, futilely, a "clinical prediction" built on the uniqueness of each situation.

Of course, this discussion of the clinical-versus-statistical prediction literature does not mean that case studies are worthless in psychology. Remember that this chapter concerns only the specific context of predicting behavior. Recall from Chapter 4 that case information is extremely useful for drawing attention to important variables in need of further study. What this chapter says is that once the relevant variables have been identified and we begin using them to predict behavior, measuring those variables and applying a statistical formula is always the optimal procedure. First, the statistical method yields more accurate predictions. Second, statistical prediction has a further advantage over clinical prediction: the predictions produced by a statistical procedure are public knowledge that anyone can use, modify, criticize, or dispute. Clinical prediction, by contrast, amounts to relying on the assessment of an individual authority, and because such judgments are private and idiosyncratic, they are closed to public scrutiny, as Dawes (1994) has argued.

The role of chance in psychology is often misunderstood by laypeople and clinical practitioners alike. People find it hard to accept that part of the variability in behavioral outcomes is due to chance factors. That is, some variation in behavior results from random factors, and psychologists therefore should not claim to be able to predict behavior in every individual case. Psychological predictions should be probabilistic: probabilistic predictions of aggregate trends.

Claiming to make psychological predictions at the level of the individual is a common error among clinical psychologists. They sometimes wrongly imply that clinical training confers an "intuitive" ability to make accurate predictions about individual cases. On the contrary, decades of sound research have consistently shown that in accounting for human behavior, statistical prediction (prediction based on statistical trends in groups) is far superior to clinical prediction. There is no evidence that clinical intuition can predict whether a statistical trend will hold in a particular individual case. Therefore, never ignore statistical information when predicting behavior. Statistical prediction also teaches us that when predicting human behavior, error and uncertainty will always be with us.
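The red-light/blue-light trade-off that runs through this chapter can also be checked by simulation rather than by the arithmetic above. The sketch below (function and variable names are mine) runs many random 70:30 trials and compares the two strategies:

```python
import random

def simulate(n_trials: int = 100_000, p_red: float = 0.70, seed: int = 1):
    """Compare probability matching against the '100% red' strategy
    over n_trials randomly generated light sequences."""
    rng = random.Random(seed)
    match_correct = 0  # predict red 70% of the time, blue 30%
    max_correct = 0    # always predict red
    for _ in range(n_trials):
        light = 'red' if rng.random() < p_red else 'blue'
        guess = 'red' if rng.random() < p_red else 'blue'
        match_correct += (guess == light)
        max_correct += (light == 'red')
    return match_correct / n_trials, max_correct / n_trials

matching, maximizing = simulate()
print(f"probability matching: {matching:.3f}")  # ~0.58
print(f"always predict red:   {maximizing:.3f}")  # ~0.70
```

The simulated accuracies converge on the 58% and 70% figures derived earlier, illustrating in miniature why a fixed statistical rule beats case-by-case attempts to be right every time.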