
Chapter 2 Falsifiability—How to Catch the Elf in Your Mind

In 1793, a severe epidemic of yellow fever struck Philadelphia. The city's leading physician at the time was Benjamin Rush, one of the signers of the Declaration of Independence. During the disaster, Rush was one of the few doctors who personally treated thousands of cases of yellow fever. He embraced a medical theory which held that yellow fever must be treated with massive bloodletting (draining blood from the body with a scalpel or with leeches). He applied this treatment to many patients, and he did the same when he contracted the disease himself. Critics charged that his treatment was more dangerous than the disease itself. Yet as the epidemic spread, Rush grew only more confident in his treatment, even as patient after patient died. Why?

One commentator summed up Rush's attitude this way: "On the one hand, he firmly believed that his theory was correct; on the other hand, he lacked an effective method for systematically studying the effects of his treatment, so he attributed every improved case to the efficacy of the treatment and every death to the severity of the disease" (Eisenberg, 1977, p. 1106). In other words, if the patient improved, this was taken as proof that the bloodletting worked; if the patient died, Rush interpreted the case as one too far gone for any treatment to save. We now know that Rush's critics were right: his treatment was as dangerous as yellow fever itself. In this chapter, we discuss what went wrong in Rush's reasoning. His error provides a vivid illustration of one of the most important principles of scientific thinking, one that is especially useful in evaluating psychological claims.

In this chapter, we focus on the third fundamental feature of science introduced in Chapter 1: science deals only with solvable problems. What scientists call a "solvable problem" usually means a "testable theory." The way scientists determine whether a theory is testable is to make sure it is falsifiable, that is, that it has implications for actual events in the natural world. In what follows, we will see why this so-called falsifiability criterion is so important in psychology.

Benjamin Rush fell into a fatal trap in assessing the effects of his treatment: his method of evaluation made it impossible to ever conclude that the treatment was ineffective. If a patient's recovery counted as confirmation of his treatment (and hence of his medical theory), then fairness demanded that a patient's death count as disconfirmation. Instead, he explained the deaths away. The way Rush interpreted the evidence violated one of the most important principles of theory construction and testing in science: he made his theory unfalsifiable.

A scientific theory should be stated in such a way that the predictions derived from it could potentially be shown to be wrong. Thus, when new evidence is used to evaluate a theory, the new data must at least have the possibility of falsifying it. This principle is often called the "falsifiability criterion." The philosopher Karl Popper devoted much of his career to emphasizing the role of the falsifiability criterion in the scientific process, and his writings are widely read by working scientists (Magee, 1985).

The falsifiability criterion asserts that, for a theory to be useful, the predictions it makes must be specific. A theory must accomplish two things: in telling us what should happen, it must also tell us what should not happen. If what should not happen does happen, we receive a clear signal that something is wrong with the theory: it may need to be revised, or we may need to look for an entirely new theory. Either way, we end up with a theory that is closer to the truth. By contrast, if a theory's predictions are compatible with every possible observation, the theory can never be revised, and we are locked into our current way of thinking with no possibility of progress. A successful theory, then, cannot be one that explains every possible outcome, because such a theory robs itself of any predictive power.

Because we will be evaluating theories throughout the remainder of this book, we must first clear up a common misconception about them. The misconception shows up in the familiar phrase "Oh, it's only a theory." This phrase captures what laypeople usually mean by the word "theory": an unproven hypothesis, a mere guess or hunch, implying that one theory is no better than another. That is emphatically not how the word is used in science. When scientists say "theory," they do not mean an untested conjecture.

In science, a theory is a set of interrelated concepts that explains a body of data and makes predictions about the outcomes of future experiments. Hypotheses are the specific predictions derived from a theory (the theory itself being more general and comprehensive). Currently viable theories are those that have generated many hypotheses, a large number of which have been tested, so that the theory's conceptual structure is consistent with a substantial body of empirical observations. When observational data begin to contradict the hypotheses derived from a theory, scientists try to construct a new theory that explains the data better (or, more commonly, simply revise the existing theory). Thus, the theories under discussion in science today are those that have been confirmed to some degree and whose predictions are not contradicted by the available data. They are not mere guesses or hunches.

This difference between the layperson's and the scientist's use of the word "theory" is often exploited by those who seek to incorporate creationism into public school education (Forrest & Gross, 2004; Scott, 2005; Talbot, 2005). Their argument is usually that "evolution is, after all, only a theory." This tactic trades on the layperson's usage, deliberately distorting "theory" to mean "mere conjecture." But the theory of evolution by natural selection is not a "theory" in the layperson's sense (in that sense it would better be called a "fact"; see Randall, 2005). It is a theory in the scientific sense: a conclusion supported by a large and diverse body of data (Maynard Smith, 1998; Ridley, 1996, 1999; Scott, 2005). It is not on a par with any random guess. Rather, it is intimately connected with knowledge in other disciplines, including geology, physics, chemistry, and the various branches of biology. The famous biologist Theodosius Dobzhansky (1973) made this point in his celebrated article "Nothing in Biology Makes Sense Except in the Light of Evolution."

Let us consider a hypothetical example of how the falsifiability criterion works. A student knocks on my door. A colleague who shares my office holds a theory that "different people knock in different rhythms" and claims it lets him predict who is knocking. Before I open the door, my colleague predicts that a woman is behind it. I open the door, and the student is indeed a woman. I tell my colleague afterward that I am impressed, but only mildly, because even without his "knocking rhythm theory" he would have been right about 50 percent of the time. He insists that his predictions beat chance. At the next knock, my colleague predicts a male under 22 years old. I open the door to find a young man who, I happen to know, has just graduated from high school. I admit to being a bit more impressed, since my university has a fair number of students over 22. Still, I point out that young males are quite common on campus. Seeing that I am hard to please, my colleague offers one last test. After the next knock, he predicts: a woman, 30 years old, 5 feet 2 inches tall, holding a book and a satchel in her left hand and knocking with her right. I open the door, and the prediction is borne out in every detail. My reaction this time is very different: unless my colleague has somehow arranged by trickery for these people to appear at my door, I have to be stunned.

Why should my reactions differ? Why did my colleague's three predictions draw three different responses, ranging from "so what?" to "wow!"? The answer has to do with the specificity and precision of the predictions. The more precise a prediction, the greater its impact on us when it is confirmed. Note, moreover, that this variation in precision is directly linked to falsifiability: the more specific and precise a prediction is, the more potential observations there are that could falsify it. For example, there are plenty of women who are not 30 years old and 5 feet 2 inches tall. Note the implication: as my contrasting reactions show, I am most persuaded by a theory that predicts events which would otherwise be highly unlikely.
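The logic of the three reactions can be made numerical. In this sketch the base rates are invented purely for illustration (the text gives only the 50 percent figure for "a woman"); the point is that, under an independence assumption, the chance probability of a conjunction of specific attributes shrinks multiplicatively, which is why a confirmed specific prediction is so much more impressive than a vague one.

```python
# Illustrative only: all base rates except the 0.5 for "a woman" are
# invented assumptions, not figures from the text.

def chance_probability(*attribute_rates):
    """Probability that a random knocker matches every predicted
    attribute, assuming the attributes are independent."""
    p = 1.0
    for rate in attribute_rates:
        p *= rate
    return p

# Prediction 1: "a woman" (about half the students)
p1 = chance_probability(0.5)

# Prediction 2: "a male under 22" (assumed: 0.5 male, 0.6 under 22)
p2 = chance_probability(0.5, 0.6)

# Prediction 3: "a woman, exactly 30, exactly 5'2'', book and satchel in
# left hand, knocking with right hand" (each assumed rate is generous)
p3 = chance_probability(0.5, 0.02, 0.05, 0.1, 0.5)

print(f"P(prediction 1 true by chance) = {p1:.4f}")    # 0.5000
print(f"P(prediction 2 true by chance) = {p2:.4f}")    # 0.3000
print(f"P(prediction 3 true by chance) = {p3:.6f}")    # 0.000025
```

However one adjusts the assumed rates, the third prediction comes out orders of magnitude less likely to be true by accident, which is exactly why its confirmation carries so much more evidential weight.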

Good theories make predictions that expose themselves to falsification. Bad theories do not put themselves at risk in this way: either they make predictions so general that they are bound to come true (e.g., the next person to knock on my door will be under 100 years old), or their predictions are worded so as to be immune to falsification (as in the case of Benjamin Rush). Indeed, once a theory retreats under the umbrella of unfalsifiability, it can be said to have left science altogether. The philosopher Karl Popper emphasized the falsifiability principle precisely because he was seeking a criterion that would demarcate science from nonscience. There is a direct connection between the discussion here and our discussion of Freud in Chapter 1, and with psychology generally.

In the early decades of the twentieth century, Popper sought to understand why some scientific theories seemed to lead to advances in knowledge while others led to intellectual stagnation (Hacohen, 2000). Einstein's general theory of relativity, for example, led to a series of startling discoveries (such as the bending of starlight as it passes near the sun) precisely because it was constructed so that many conceivable events or phenomena would, if observed, falsify it.

Popper pointed out that theories which stagnate knowledge are not built this way, citing Freud's psychoanalysis as his prime example. Freud's theory uses a complicated conceptual structure that explains human behavior after the fact but does not predict it in advance. It could explain everything, and it was just this property, Popper argued, that made it scientifically useless: it makes no specific predictions. Adherents of psychoanalytic theory spent great time and effort using the theory to explain every known human event, from individual quirks to broad social phenomena, but in making the theory a rich source of after-the-fact explanation they deprived it of any scientific utility. Today, Freud's psychoanalytic theory plays a larger role in stimulating the literary imagination than it does in contemporary psychology (Robins, Gosling, & Craik, 1999, 2000), and its declining status in psychology stems in part from its failure to meet the falsifiability criterion.

The existence of such unfalsifiable theories does real harm. As one commentator pointed out: "Incorrect but widely disseminated ideas about psychology inevitably harm society. Because the reputation of psychoanalysis was artificially inflated for a time, many people with organic diseases and genetic defects refused effective treatment and instead searched their early experience for the source of their suffering" (Crews, 1993, p. 65).

Take Tourette syndrome as an example. It is a disorder marked by bodily tics and spasms, often accompanied by vocal symptoms such as grunting, barking, echolalia (the involuntary repetition of another's words), and coprolalia (the compulsive utterance of obscene words). Tourette syndrome is an organic disorder of the central nervous system that has been treated successfully with drug therapy (Bower, 1990, 1996a). Throughout history, its sufferers were persecuted: in earlier times religious authorities took them for the demon-possessed, and they were subjected to forced exorcisms (Hines, 2003). More to the point here, between 1921 and 1955 the interpretation and treatment of the disorder were dominated by the conceptual system of the psychoanalytic school, which greatly impeded understanding of its cause and cure (see Kushner, 1999). Unfalsifiable psychoanalytic explanations of the disorder abounded, and the conceptual quagmire created by these seemingly plausible explanations obscured the nature of the disorder and hampered further scientific inquiry. For example, Shapiro et al. (1978) mention one psychoanalyst who believed that his patient was "unwilling to give up her tics because they had become a source of sexual pleasure and an expression of her unconscious sexuality." According to another psychoanalyst, the tics are "equivalent to masturbation . . . the libido associated with genital pleasure is displaced onto other parts of the body." A third took the tics to be a "displaced symptom of anal sadism." A fourth held that Tourette patients have "obsessive-compulsive personalities and narcissistic tendencies," their tics "representing an affective symptom, a repressive defense against the wish to express emotion." Shapiro et al.'s (1978) summary of the state of such theorizing nicely illustrates the damage done by ignoring the falsifiability criterion: progress in understanding and treating Tourette syndrome began only when researchers conceded that psychoanalytic "interpretations" were useless in treating the disorder.

Useless explanations are alluring because they seem to explain things; indeed, they explain everything, after the fact. But the explanations they offer create only an illusion of understanding. By always explaining everything after the fact, they close the door on progress. Progress occurs only when a theory does not predict everything but instead makes specific predictions, telling us in advance which particular outcomes will occur. Predictions derived from such a theory may of course turn out to be wrong, but that is a strength, not a weakness.

It is not difficult to spot an unfalsifiable conceptual system when one can step back from the problem under study, particularly with the benefit of hindsight (as in the case of Benjamin Rush). It is also easy to detect unfalsifiability when the example is obviously fabricated. Here is one: unknown to the public, I have discovered the brain mechanism that controls behavior, and you will soon see this discovery splashed across the tabloids. I have found that near the language areas of the brain's left hemisphere live two little elves with the power to control the electrochemical processes in many regions of the brain. In short, they basically control everything. There is, however, one problem that prevents us from seeing them: the elves can detect any intrusion into the brain (surgery, X-rays, and so on), and as soon as they sense probing from the outside world they vanish (I forgot to mention that they can make themselves invisible).
No doubt this rather childish example insults your intelligence. Obviously I made it up, but notice that my claims about the elves can never be proven wrong. Now consider this. As an introductory psychology lecturer and public speaker, I am often asked why I do not teach the amazing new discoveries of recent years in extrasensory perception (ESP) and psychic phenomena. I have to tell these questioners that most of the information they have encountered undoubtedly comes from the popular media, not from sources recognized by the scientific community. Some scientists have in fact looked into such claims but have been unable to replicate the findings. Recall that replicability is crucial before a result can be accepted as established scientific fact, especially when the result contradicts previous data or existing theory. I can honestly report that many scientists have lost patience with ESP research. This is partly because of the deceit, charlatanism, and media hype rife in the field, but the more important reason for the scientific community's disillusionment is what Martin Gardner (1972) called the "catch-22 of ESP research."
Here is how it works. A "believer" (someone convinced of the existence of ESP before beginning the investigation) claims to have demonstrated ESP in the laboratory. A "skeptic" (someone who doubts the existence of ESP) is invited to confirm the phenomenon. Typically, after observing the experimental situation, the skeptic asks the believer for tighter controls (we discuss such controls in Chapter 6), and although these requests are sometimes refused, the well-meaning believer usually agrees. With the controls in place, the phenomenon no longer occurs (see Alcock, 1990; Hines, 2003; Humphrey, 1996; Hyman, 1992, 1996; Kelly, 2005; Marks, 2001; Milton & Wiseman, 1999). The skeptic rightly interprets this failure to mean that the earlier demonstration lacked adequate controls, so its conclusions cannot be accepted. But the skeptic is often surprised to find that the believer does not concede that the earlier demonstration was invalid. Instead, believers invoke the catch-22 of ESP: psychic energy, they insist, is delicate, subtle, and easily disturbed. The skeptic's "negative vibes," they say, are what dissipated the "psychic energy." Remove the doubter's negative aura, the believers maintain, and the psychic energy will undoubtedly return.

This way of explaining the failure to verify ESP in the laboratory is logically identical to my made-up elf story.
ESP behaves just like the elves. As long as you look at it without probing, it is there; observe it closely, and it is gone. If we accept this explanation, it becomes impossible ever to demonstrate the phenomenon to skeptics: it appears only for believers. Such a claim is, of course, unacceptable in science. We do not distinguish between magnetic and nonmagnetic physicists (with magnetic fields existing only for the former). Interpreting ESP experiments in this way makes the ESP hypothesis as unfalsifiable as the elf hypothesis, and it is precisely this style of explanation that keeps ESP outside the pantheon of science.

The falsifiability principle has important implications for how we view the confirmation of a theory. Many people believe that a good scientific theory is one that has been confirmed many times over, assuming that the sheer number of confirmations is the key to evaluating it. But the falsifiability principle implies that the number of confirmations is not the most important factor. The reason, as the knocking rhythm theory demonstrated, is that not all confirmations are equal. How convincing a confirmation is depends on the extent to which the prediction exposed itself to possible falsification. One highly specific, readily falsifiable prediction (e.g., a woman, 30 years old, 5 feet 2 inches tall, holding a book and a satchel in her left hand, knocking with her right) is more persuasive than 20 confirmations of a virtually unfalsifiable prediction (e.g., a person younger than 100).
Therefore, we should attend not only to the number of a theory's confirmations but also to the quality of those confirmations. Using falsifiability as a criterion helps consumers of research resist the lure of unscientific, catch-all theories. Such theories of everything inevitably impede deeper exploration of the world and of human nature. Indeed, these conceptual dead ends are often the most seductive, precisely because they can never be falsified; in an ever-changing world, such a theory can remain unchanged for millennia. As Popper often pointed out, "the secret of the enormous psychological appeal of these [unfalsifiable] theories is their ability to explain everything. To know in advance that whatever happens you will be able to understand it gives you not only a sense of intellectual mastery but, more important, the emotional security needed to cope with the world" (Magee, 1985, p. 43). But providing this sense of security is not the goal of science, because the price of such security is stagnation in the growth of knowledge. Science is a mechanism for continually challenging previously held beliefs by subjecting them to empirical tests in ways that leave them open to falsification. This feature often puts science, and psychology in particular, in direct conflict with so-called folk wisdom or common sense (as we discussed in Chapter 1).
Psychology threatens the comfort that folk wisdom provides because, as a science, it cannot simply offer explanations that can never be disproved. The goal of psychology is to subject the various theories of behavior to empirical test, one by one, and to winnow them. Those forms of folk wisdom that are clearly articulated and amenable to empirical test are certainly welcome in psychology, and many have been incorporated into psychological theory. But psychology does not pursue the kind of theory that explains everything after the fact while predicting nothing in advance, nor the comfort such an explanatory system provides. It rejects systems of folk wisdom designed to be immutable and handed down from generation to generation. Trying to conceal this from students and the public is self-defeating. Unfortunately, some instructors and popularizers of psychology, sensing the distress that psychology's challenge to folk wisdom causes some people, sometimes try to appease that sentiment by delivering the misinformation that "you'll learn some interesting things, but don't worry, psychology won't challenge your deeply held beliefs." This is a mistake that misrepresents both what science is and what psychology is.
Science seeks conceptual change. Scientists try to paint a true picture of the world, one that may be the exact opposite of what we have always believed. There is a dangerous tendency in modern thought to hold that the true nature of the world should be kept from the general public, that a veil of ignorance is needed lest people be overwhelmed by the truth. Psychology, like the other sciences, rejects the idea that the truth should be hidden from human beings. The biologist Michael Ghiselin has gone further, claiming that we all lose when knowledge is not widely shared. Like Ghiselin, psychologists believe that we all lose when we are surrounded by people who misunderstand human behavior. Public attitudes toward education, crime, health, productivity, child welfare, and many other vital issues shape our world; if those attitudes rest on faulty theories of behavior, we all suffer.

One of the most liberating and useful implications of the falsifiability principle is the scientists' discovery that, in science, it is no sin to be wrong. A falsified hypothesis gives scientists information they can use to adjust a theory so that it agrees more closely with the data. The philosopher Daniel Dennett (1995) has said that the essence of science is "making mistakes in public" (p. 380). By continually revising theories when the data disagree with them, scientists eventually construct theories that better reflect the nature of the world.
Indeed, the quality of our lives would be greatly improved if we could apply the falsifiability principle in everyday life. This is why I used the word "liberating" in the preceding paragraph: it carries a personal hope that the ideas developed here can have implications beyond science. If we understood all this, then when our beliefs conflicted with observed facts we would do better to adjust our beliefs than to deny the facts and cling to false ideas, and we would have fewer personal and social problems. The physicist Robert Oppenheimer made much the same point. How many times, in the middle of a heated argument, perhaps while defending your position with a forceful counterattack, have you suddenly realized that you had a key fact or argument wrong? What did you do? Did you back up and admit your error, conceding that the other person's account now seemed more plausible than yours? Probably not. If you are like most of us, you launched "an endless search for some rationale to justify your earlier mistake." You tried to escape the argument without admitting defeat; the last thing you wanted to do was admit you were wrong. As a result, both you and your opponent were left more confused about which belief lies closer to the truth. When arguments cannot be made public (as they are in science), when true and false beliefs are defended with equal vehemence, and when the outcome of an argument is never properly fed back (as in this case), there is no reliable mechanism for bringing belief into line with reality. This is why so much private and public discussion is muddled, and why psychological science explains the causes of human behavior far more reliably than so-called common sense or folk wisdom.
Being wrong is normal in science; the real danger to scientific progress is the natural human tendency to shield our beliefs from situations in which they could be proven wrong. Many scientists have affirmed the importance of this idea. The Nobel laureate Peter Medawar (1979) wrote in this vein, and many of psychology's most distinguished scientists follow Medawar's advice. In an article on the career of the experimental psychologist Robert Crowder, one of his colleagues, Mahzarin Banaji, was quoted as saying: "He was the least defensive scientist about his own theories. If you found a way to show that his theory had holes, or that his experimental findings were limited or flawed, he would be delighted and would plot with you how to disprove the theory" (Azar, 1999, p. 18). Azar (1999) describes how Crowder proposed a theory of a memory component called precategorical acoustic storage and then carefully designed an experimental study that disproved his own model. Finally, the evolutionary psychologist John Tooby (2002), in a fine review of the attitudes behind Darwin's enduring contribution to science, writes: "Darwin went further than his contemporaries because he was not bound by the urge to make the universe fit his expectations" (p. 12). The philosopher Jonathan Adler (1998) put it another way: "A genuinely enlightened person is willing to follow the lead of the evidence, to follow impartial inquiry rather than his own predictions. The scientific method is a test of the world, not of ourselves" (p. 44).
But for science to work, it is not necessary that every individual scientist have a falsifying attitude. Jacob Bronowski (1973, 1977) pointed out in many of his writings that science's unique power to reveal true knowledge of the world does not arise from any special virtue of scientists (that they are perfectly objective, that they never interpret findings with bias, and so on). Rather, that power arises because fallible scientists are embedded in a process of checks and balances, in which other scientists are always ready to criticize and to find the errors in their peers' work. The philosopher Daniel Dennett (2002) has made the same argument: not every scientist has to display Robert Crowder's objectivity. As Bronowski and Dennett emphasize, scientists are as fallible as anyone else, but in recognizing the sources of error in themselves and in the groups to which they belong, they have devised ingenious systems of mutual restraint to keep their weaknesses and biases from contaminating the results (p. 42). The psychologist Ray Nickerson (1998) makes the same point more wryly: scientists' vanity actually serves the scientific process, for "it is not so much the critical attitude that individual scientists take toward their own ideas that has contributed to the success of science . . . it is closer to the truth that every scientist is keenly motivated to prove that the views held by some other scientists are wrong" (p. 32). On these authors' account, the strength of scientific knowledge derives not from the virtues of scientists but from the social process by which they continually cross-check one another's knowledge and conclusions.

The earlier discussion of testing folk wisdom leads to another interesting corollary of the falsifiability principle: ideas are cheap. More precisely, certain classes of ideas are worthless. The biologist and science writer Stephen Jay Gould (1987) raised exactly this question of what such ideas are worth, and his answer was, in effect: "It doesn't work."
The cheap ideas Gould had in mind are the ones we encountered earlier in discussing Karl Popper's views: the all-encompassing, convoluted, "fuzzy," grand theories that explain everything, theories constructed more to provide emotional support than anything else, since they are never meant to be changed or discarded. Gould tells us that such theories, however soothing, are useless for scientific purposes. Science is a creative process, but its creativity consists in fitting conceptual structures to empirical data, and that is not easy to do. Ideas that faithfully explain the real world are not cheap at all. Perhaps this is why good scientific theories are so hard to come by while unfalsifiable pseudoscientific belief systems proliferate: the latter are far easier to construct.

Scientific theories are tightly tethered to the world. They are falsifiable, and they make clear and specific predictions. Formulating genuine theories with real explanatory power is a difficult task, but understanding the general logic of how science works is not; indeed, there are now quite a few books on the logic of scientific thinking written specifically for children (Kramer, 1987; Swanson, 2001, 2004).

In explaining the falsifiability principle, we have sketched a simple model of scientific progress. Theories are formulated; hypotheses are derived from them; and the hypotheses are tested by a variety of techniques and methods, techniques we discuss in the remainder of this book. If a hypothesis survives the tests, the theory receives some degree of confirmation; if the hypothesis is falsified, the theory must be altered in some way or replaced by a new one.
Of course, the fact that scientific knowledge is provisional and that theoretically derived hypotheses may prove wrong does not mean that everything is perpetually up for grabs. Many claims in science have been confirmed so many times that they are, for practical purposes, beyond being overturned by future experiments. We are unlikely ever to discover that blood does not circulate, or that the Earth does not orbit the Sun. Such well-established facts are not the hypotheses we have been discussing, nor are they what interest scientists, since they are already settled. Scientists are interested only in questions at the edge of existing knowledge, where matters are not yet settled.

This aspect of scientific practice, in which scientists concentrate on the frontier of known facts and ignore what is already well established, is hard for the general public to understand. Scientists always seem to dwell more on the unknown than on the known. This is quite true, and scientists have good reason for it: to advance knowledge, they must work at the frontier of the known, and that, of course, is where much is uncertain. But scientific progress happens precisely through this effort to reduce uncertainty at the frontier. The same trait often leads the public to see scientists as perpetually unsure of everything, but that is a superficial impression: scientists are uncertain only about the frontier of knowledge, and it is exactly that uncertainty which continually pushes our understanding forward. Scientists do not doubt facts that have been confirmed again and again by many studies.
It is also important to emphasize that when scientists falsify a theory by observation, or replace an old theory with a new one, they do not throw out all the facts on which the old theory was based (we discuss this topic in Chapter 8). On the contrary, the new theory should explain all the facts the old theory explained, plus the facts the old theory could not. Nor does a falsified theory always force scientists to construct an entirely new one. The process of theory revision is well illustrated by the science writer Isaac Asimov in an essay titled "The Relativity of Wrong" (1989), in which he traces how our understanding of the shape of the Earth has been progressively refined.

He begins by reminding us not to regard the ancient belief that "the Earth is flat" as stupid: on a plain (and most literate human civilizations arose on plains), the Earth does look quite flat. Asimov then asks us to compare the different theories quantitatively and see what the results tell us. First, we can express each theory in terms of the curvature of the Earth's surface per mile that it predicts. The flat-Earth theory says the curvature is 0 per mile. We now know that this theory is wrong, but in a sense it is close to the truth, as Asimov (1989) explains.

Of course, science did not stop at the theory that "the Earth is a sphere." As we discussed earlier, scientists are always trying to refine their theories and to push against the limits of current knowledge. Newton's theory of gravitation, for example, predicted that the Earth is not a perfect sphere, and this prediction was indeed confirmed. The Earth, it turns out, bulges slightly at the equator and is slightly flattened at the poles, a shape called an oblate spheroid. The Earth's pole-to-pole diameter is 7,900 miles, and its equatorial diameter is 7,927 miles. Its curvature is therefore not constant (as it would be on a perfect sphere) but varies slightly, from about 7.973 to 8.027 inches per mile. As Asimov (1989) put it, the correction from sphere to oblate spheroid is much smaller than the correction from flat to spherical; thus, although the idea that the Earth is a sphere is wrong, strictly speaking, it is not wrong in the same way that the idea that the Earth is flat is wrong.

Asimov's example of the shape of the Earth shows us the contexts in which scientists use terms like mistake, error, and falsification. These terms do not mean that the theory being tested is utterly worthless; the theory is merely incomplete. So when scientists stress that theories are provisional and may be modified by future research, this is the kind of situation they have in mind. When scientists believed the Earth was a sphere, they recognized that someday the theory would need correction in its details. Even so, the shift from sphere to oblate spheroid preserved the "essential correctness" of the idea that the Earth is round. We will never wake up one day to find that it is actually a cube.

The clinical psychologist Scott Lilienfeld (2005) has introduced Asimov's point to psychology students. When scientists speak of "solvable problems," they usually mean "testable theories," and the definition of a testable theory is quite clear in science: it is one that could possibly be falsified. If a theory is not falsifiable and has no implications for actual events in the natural world, it is useless. Psychology has long been awash in unfalsifiable theories, and this is one reason its progress has been slow.
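Asimov's curvature-per-mile comparison can be checked with a few lines of arithmetic. This is my own rough sketch, not a calculation from the text: it approximates the surface "drop" over one mile as d²/(2R) for the quoted diameters, so the numbers come out close to, but not exactly, the 7.973 to 8.027 range Asimov cites (true local curvature depends on the radius of curvature, not the raw radius).

```python
# Rough sketch of Asimov's "relativity of wrong": each theory of the
# Earth's shape expressed as a predicted surface drop per mile.

INCHES_PER_MILE = 63360

def drop_per_mile(radius_miles):
    """Approximate drop of a sphere's surface over one mile,
    d^2 / (2R) with d = 1 mile, converted to inches."""
    return (1.0 / (2.0 * radius_miles)) * INCHES_PER_MILE

flat_earth = 0.0                      # flat-Earth theory: no curvature

# Spherical theory, mean radius ~3,960 miles
sphere = drop_per_mile(3960)          # exactly 8.0 inches per mile

# Oblate spheroid, using the diameters quoted in the text:
# 7,900 miles pole-to-pole, 7,927 miles at the equator
drop_larger_radius = drop_per_mile(7927 / 2)   # ~7.99 in/mile
drop_smaller_radius = drop_per_mile(7900 / 2)  # ~8.02 in/mile

print(f"flat Earth:      {flat_earth:.3f} in/mile")
print(f"sphere:          {sphere:.3f} in/mile")
print(f"oblate spheroid: {drop_larger_radius:.3f} to "
      f"{drop_smaller_radius:.3f} in/mile")
print(f"flat -> sphere correction:   {sphere - flat_earth:.3f} in/mile")
print(f"sphere -> oblate correction: {drop_smaller_radius - sphere:.3f} in/mile")
```

The flat-to-sphere correction is about 8 inches per mile, while the sphere-to-oblate correction is a few hundredths of an inch per mile: each successive theory is "wrong," but by an ever smaller amount, which is exactly Asimov's point.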
A good theory makes concrete predictions and is highly falsifiable. A definite, specific prediction, if confirmed, lends greater support to the theory that generated it than an imprecise prediction would. In short, one implication of the falsifiability principle is that not all confirmations of a theory are of equal value: the more falsifiable the theory and the more specific the prediction, the more a confirmation counts in the theory's favor. Even when predictions are not confirmed (that is, when they are falsified), falsifiability is useful for the development of theory. A falsified prediction indicates that the existing theory must either be abandoned or be modified to account for the discordant data. It is through this process of theory revision, driven by falsified predictions, that a science like psychology gradually approaches the truth.