
Chapter 7. Robots

Physics of the Impossible, by Michio Kaku
In the film I, Robot, based on the fiction of Isaac Asimov, the most advanced robotic system ever created is activated in 2035. It is called VIKI (Virtual Interactive Kinetic Intelligence), and its role is to flawlessly manage the operations of a large city, with everything from the subway system and the power grid to thousands of domestic robots under its control. Its core directive is unalterable: serve humanity. But one day VIKI confronts a key question: what is humanity's greatest enemy? Through mathematical calculation, VIKI concludes that the greatest enemy of human beings is human beings themselves. Humanity must be saved from its insane desire to pollute the environment, wage war, and destroy the planet. The only way for VIKI to fulfill its central directive is to seize control of humanity and create a benign dictatorship of machines. Humans must be enslaved in order to protect themselves.

I, Robot raises these questions: with computing power growing at astronomical speed, will machines one day take over? Could robots become so advanced that they pose the ultimate threat to our existence? Some scientists say no, because the very idea of artificial intelligence is silly. A chorus of critics insists that it is impossible to build machines that can think. The human brain, they argue, is the most complex system nature has ever created, at least in this part of the galaxy, and any machine that aims to replicate human thought is doomed to fail. The philosopher John Searle of the University of California, Berkeley, and even the prominent Oxford physicist Roger Penrose believe that machines are physically incapable of human thought. Colin McGinn of Rutgers University says that artificial intelligence "is like slugs trying to do Freudian psychoanalysis. They just don't have the conceptual equipment."

It is a question that has divided the scientific community for more than a century: can machines think? The idea of mechanical beings has long fascinated inventors, engineers, mathematicians, and visionaries. From the Tin Man of The Wizard of Oz to the childlike robots of Spielberg's Artificial Intelligence: AI to the murderous machines of The Terminator, the idea of a device that acts and thinks like a human fascinates us. In Greek mythology, the god Vulcan forged robotic maids of gold and three-legged tables that could move under their own power. As early as 400 BC, the Greek mathematician Archytas of Tarentum wrote about the possibility of building a robotic bird propelled by steam.

In the first century AD, Hero of Alexandria (credited with designing the first steam engine) devised automata, one of which, according to legend, could talk. Nine hundred years ago, Al-Jazari designed and built automatic machines such as water clocks, kitchen appliances, and water-driven musical instruments. In 1495 the great Renaissance artist and scientist Leonardo da Vinci drew plans for a robotic knight that could sit up, wave its arms, and move its head and jaw. Historians regard this as the first realistic design of a humanoid machine. The first crude but functioning robots were built in 1738 by Jacques de Vaucanson, who made a humanoid automaton that could play the flute, as well as a mechanical duck.

The word "robot" comes from the 1920 Czech play R.U.R. by the playwright Karel Čapek ("robota" means "drudgery" in Czech and "labor" in Slovak). In the play, a factory called Rossum's Universal Robots creates an army of robots to perform menial labor (unlike ordinary machines, however, these robots are made of flesh and blood). Eventually the world economy comes to depend on them. The robots are brutally abused and finally revolt against their human masters, killing them all. In their rage, however, the robots doom themselves as well, killing the only scientists who could repair them and build new ones. In the epilogue, two special robots discover that they have the ability to reproduce, and may become the robot Adam and Eve.

Robots were also the subject of one of the earliest and most expensive silent films, Metropolis, directed by Fritz Lang in Germany in 1927. The story is set in 2026: the working class toils helplessly in grim, dirty underground factories, while the ruling elite amuses itself above ground. A beautiful woman, Maria, has won the workers' trust, but the rulers fear that one day she will lead them in revolt. So they have an evil scientist create a robotic copy of Maria. In the end the plot backfires: the robot leads the workers against the ruling class, and the social system collapses.

Artificial intelligence, or AI, differs from the technologies we have explored so far in that the basic principles underlying it are still poorly understood. Although physicists have a good understanding of Newtonian mechanics, Maxwell's theory of light, relativity, and the quantum theory of atoms and molecules, the fundamental laws of intelligence remain shrouded in fog. The Newton of AI has probably not yet been born. But mathematicians and computer scientists remain undaunted. To them, it is only a matter of time before a thinking machine walks out of the laboratory. The most influential figure in the field of AI, the visionary who helped lay the foundations of AI research, is the great British mathematician Alan Turing.

Turing laid the foundation for the entire computer revolution. He imagined a machine (now called a Turing machine) consisting of just three elements: an input tape, an output tape, and a central processor (such as a Pentium chip) that can perform a precise set of operations. With this he worked out the principles of computing machines and precisely determined their ultimate capabilities and limits. Today all digital computers obey the strict laws laid down by Turing; the entire digital world owes him an enormous debt. Turing also contributed to the foundations of mathematical logic. In 1931 the Viennese mathematician Kurt Gödel shook the world of mathematics by proving that there are true statements in arithmetic that can never be proved from the axioms of arithmetic. (For example, the Goldbach conjecture of 1742, the claim that every even integer greater than 2 can be written as the sum of two primes, has remained unproved for more than 250 years, and may in fact be unprovable.) Gödel's result shattered a dream dating back 2,500 years to the ancient Greeks: to prove all true statements of mathematics. Gödel showed that there will always be true statements beyond our reach. Far from being the complete, unshakable edifice the Greeks dreamed of, mathematics was shown to be incomplete.
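The three-part picture above (a tape, a head that reads and writes symbols, and a finite table of rules) can be made concrete in a few lines of code. Below is a minimal illustrative sketch of a Turing-machine simulator; the bit-flipping machine it runs is an invented example, not one from the text.

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """Simulate a one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1, 0, or +1. Returns the tape when 'halt' is reached."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape)).rstrip("_")
        symbol = tape.get(head, "_")  # '_' is the blank symbol
        state, tape[head], move = rules[(state, symbol)]
        head += move
    raise RuntimeError("no halt within step budget")

# A toy machine that inverts a binary string, then halts at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(invert, "1011"))  # -> 0100
```

Despite its simplicity, this model captures everything a digital computer can do, which is why Turing could use it to reason about the limits of all computation.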

Turing added to this revolution by showing that it is in general impossible to know whether a Turing machine will take an infinite amount of time to perform certain operations. But if a computer needs an infinite amount of time to compute something, then whatever you asked it to compute is not computable. Thus Turing proved that there are true statements of mathematics that are uncomputable, forever beyond the reach of any computer, no matter how powerful. During World War II, Turing's pioneering work on code breaking arguably saved thousands of Allied lives and influenced the outcome of the war. The Allies could not decipher the codes produced by the Nazi cipher machine "Enigma," so Turing and his colleagues were asked to build a machine that could crack them. Turing's machine was called the "bombe," and it ultimately succeeded; more than 200 such machines were in operation by the end of the war. As a result, the Allies could read secret Nazi radio traffic and were able to deceive the Nazis about the time and place of the final invasion of Germany. Historians have debated ever since just how large a role Turing's work played in planning the D-Day invasion that ultimately led to Germany's defeat. (After the war, Turing's work was classified by the British government; as a result, the public never learned of his crucial contributions.)
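Turing's uncomputability result can be felt concretely: we can always simulate a program for any finite number of steps, but "hasn't halted yet" never turns into "will never halt." A small illustrative sketch (the toy programs are invented for illustration):

```python
def run_bounded(program, max_steps):
    """Advance a program (modeled as a Python generator) for at most
    max_steps steps. Returns 'halted' if it finished, else 'unknown':
    a finite step budget can only semi-decide halting, never decide it."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return "halted"
    return "unknown"

def countdown():       # halts after 10 steps
    for _ in range(10):
        yield

def loop_forever():    # never halts
    while True:
        yield

print(run_bounded(countdown, 100))     # -> halted
print(run_bounded(loop_forever, 100))  # -> unknown
```

No matter how large we make the budget, the "unknown" verdict never distinguishes a slow program from one that loops forever; Turing proved that no algorithm can close that gap in general.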

Rather than being celebrated as a war hero who helped turn the tide of World War II, Turing was hounded to death. One day his home was burglarized and he called the police. Unfortunately, the police found evidence of his homosexuality and arrested him. A court then ordered Turing to be injected with sex hormones, which had disastrous effects: he grew breasts and suffered such mental anguish that he committed suicide in 1954 by eating an apple laced with cyanide. (According to one rumor, the Apple logo, an apple with a bite taken out of it, is an homage to Turing.) Today Turing is probably best known for his "Turing test." Tired of fruitless, endless philosophical debates about whether machines could "think" and whether they had a "soul," he tried to bring rigor and precision to the discussion of artificial intelligence by devising a concrete experiment. Put a human and a machine in two sealed rooms, he proposed, and let a questioner pose questions to each. If you cannot tell the difference between the answers given by the human and those given by the machine, then the machine has passed the "Turing test."

Scientists have written simple computer programs, such as ELIZA, that mimic conversational speech closely enough to fool most unsuspecting people into believing they are talking to a human. (Most human conversation, it turns out, uses only a few hundred words and sticks to a small number of topics.) But so far no program has been written that can fool a person who is deliberately trying to determine which room holds the human and which the machine. (Turing himself predicted that by 2000 it would be possible to build a machine that could fool 30 percent of judges in a five-minute test.)

A small band of philosophers and theologians has declared it impossible to create true robots that think like us. The philosopher John Searle of the University of California, Berkeley, devised the "Chinese room" argument to show that true AI is impossible. Searle argues that while robots might pass some form of the Turing test, they do so only by blindly manipulating symbols without understanding what the symbols mean. Imagine sitting in a closed room without understanding a word of Chinese, but equipped with a book of rules that lets you quickly transform and manipulate Chinese characters. If someone sends in a question in Chinese, you need only shuffle these strange-looking characters skillfully, without ever grasping their meaning, to produce a convincing answer. The essence of his objection lies in the difference between syntax and semantics. A robot can master the syntax of a language (its grammar, its formal structure, and so on) but not its true semantics (the meaning of the words). Robots can manipulate words fluently without understanding what they mean. (This is a bit like talking to an automated voice system on the phone, where you press "1," "2," and so on for each response. The voice on the other end handles your digitized answers perfectly but understands nothing at all.)
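ELIZA-style programs have no semantics at all; they transform the user's words with a handful of pattern-matching rules. A minimal sketch in that spirit (the patterns and replies below are invented for illustration, not ELIZA's actual script):

```python
import re

# (pattern, reply template) pairs: pure syntax, no understanding.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(sentence):
    """Return a canned transformation of the input sentence."""
    for pattern, template in RULES:
        m = re.search(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # fallback when nothing matches

print(respond("I need a vacation"))    # -> Why do you need a vacation?
print(respond("The weather is nice"))  # -> Please go on.
```

This is exactly Searle's point in miniature: the program echoes the user's own phrases back convincingly while understanding none of them.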
Roger Penrose, the Oxford physicist, also believes artificial intelligence is impossible: mechanical beings that can think and possess human consciousness are ruled out, he argues, by quantum theory. The human brain, he asserts, is far beyond anything a laboratory could create, and building a humanlike robot is an experiment doomed to fail. (He argues that just as Gödel's incompleteness theorem proved that arithmetic is incomplete, Heisenberg's uncertainty principle will prove that machines cannot think as humans do.)

Many physicists and engineers, however, believe there is nothing in the laws of physics to prevent the creation of a true robot. For example, Claude Shannon, often called the father of information theory, was once asked, "Can machines think?" His reply was, "Of course." Asked to elaborate, he said, "I think, don't I?" In other words, it was obvious to him that machines can think, because humans are machines (albeit machines made of wetware rather than hardware).

Because we see robots in the movies, we may believe that fully developed, intelligent robots are just around the corner. The reality is quite different. When you see a robot acting like a human, there is usually a trick involved: a person hidden in the shadows, speaking through the robot via a microphone, like the Wizard of Oz. In fact, our most advanced robots today, such as the ones roaming the surface of Mars, have the intelligence of an insect. At MIT's famous Artificial Intelligence Laboratory, experimental robots struggle to perform feats that even cockroaches manage easily, such as crossing a room full of furniture, finding hiding places, and recognizing danger. No robot on Earth can understand a simple children's story read aloud to it.
The movie 2001: A Space Odyssey wrongly assumed that by 2001 we would have HAL, a super-robot capable of piloting a spaceship to Jupiter, chatting with the crew, fixing problems, and acting almost like a human being. Scientists have faced at least two big problems that have stymied robot builders for decades: pattern recognition and common sense. Robots can see much better than we can, but they don't understand what they see. Robots can also hear much better than we can, but they don't understand what they hear.

To attack these two problems, researchers tried the "top-down approach" to artificial intelligence (sometimes called the "formalist" school, or GOFAI, for "good old-fashioned AI"). Their goal, roughly speaking, was to put all the rules of pattern recognition and common sense on a single CD. They believed that by inserting this disc into a computer, the computer would suddenly become self-aware and attain humanlike intelligence. In the 1950s and 1960s great strides were made in this direction, with robots that could play chess, stack blocks, and so on. The advances were so impressive that some predicted robots would outsmart humans within a few years. In 1969, at the Stanford Research Institute, the robot SHAKEY made headlines. SHAKEY was a small PDP computer mounted on a set of wheels with a camera on top. The camera surveyed the room, the computer analyzed and identified the objects in it, and the robot tried to steer around them. SHAKEY was the first robot able to navigate the "real world," prompting journalists to speculate about when robots would leave humans in the dust.

But the shortcomings of such robots soon became obvious. The top-down approach produced huge, clumsy robots that took hours to cross a special room containing only objects with straight edges, that is, squares and triangles. Place irregularly shaped furniture in the room, and the robot fails to recognize it. (Ironically, a fruit fly, with a brain of only about 250,000 neurons and a fraction of these robots' computing power, can effortlessly navigate in three dimensions, executing dizzying looping maneuvers, while lumbering robots get lost in two.) The top-down approach soon hit a brick wall. As critics have put it, such approaches have had fifty years to prove themselves, and their performance still has not lived up to their promises.

In the 1960s scientists did not fully appreciate the enormous amount of work required to program robots to do even simple tasks, such as recognizing objects like keys, shoes, and cups. As Rodney Brooks of MIT put it: "Forty years ago the Artificial Intelligence Laboratory at MIT gave the problem to an undergraduate to solve over a summer, and he failed. And I failed on the same problem in my 1981 PhD thesis." In fact, AI researchers still have not solved the problem.

When we walk into a room, for example, we instantly recognize the floor, the chairs, the furniture, the tables, and so on. But when a robot scans a room, all it sees is a vast collection of straight and curved lines, which it converts into pixels. It takes an enormous amount of computing time to make sense of this jumble of lines. We might recognize a table in a fraction of a second, but a computer sees only a collection of circles, ellipses, spirals, lines, curves, corners, and so on. After a long computation, a robot may finally recognize the object as a table, but if you rotate the image, the computer has to start all over again. In other words, robots can see, and in fact they can see far better than humans, but they don't understand what they see. Entering a room, a robot perceives only a mass of lines and arcs, not chairs, tables, or lamps.
Our brains unconsciously perform trillions of calculations when we walk into a room and identify the objects in it, an activity we are blissfully unaware of. The reason we are unaware of it is evolution. If you were alone in a forest facing a charging saber-toothed tiger, you would be paralyzed if you were conscious of all the computation needed to recognize the danger and plan an escape. For survival, all you need to know is how to run. When we lived in the jungle, we simply did not need to be aware of all the input and output the brain churns through to recognize terrain, sky, trees, rocks, and so on.

In other words, the brain works like a huge iceberg. We are conscious only of its tip, the conscious mind. Lurking below the surface, hidden from view, is a far larger object, the unconscious mind, which consumes vast amounts of the brain's "computing power" to make sense of simple things around you: figuring out where you are, whom you are talking to, and what is nearby. All of this is done automatically, without our permission or knowledge.

This is why robots cannot yet walk across a room, read handwriting, drive trucks and cars, collect garbage, and so on. The U.S. Army has spent hundreds of millions of dollars trying to develop mechanical soldiers and intelligent trucks, without success.

Scientists began to realize that playing chess or multiplying huge numbers requires only a tiny sliver of human intelligence. When IBM's Deep Blue beat world chess champion Garry Kasparov in a six-game match in 1997, it was a triumph of raw computing power, but the experiment taught us nothing about intelligence or consciousness, although the match made plenty of headlines. As Douglas Hofstadter, a computer scientist at Indiana University, put it: "My God, I used to think chess required thought. Now, I realize it doesn't.
It doesn't mean Kasparov isn't a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings."

(Developments in computing will also have a huge impact on the future job market. Futurists sometimes speculate that decades from now only highly skilled computer scientists and technicians will have jobs. In fact, workers such as laborers, construction workers, firefighters, and police officers will also have jobs in the future, because their work involves pattern recognition: every crime, every piece of garbage, every tool, and every fire is different, and so cannot be handled by robots. Ironically, it is college-educated workers such as low-level accountants, brokers, and tellers who may lose their jobs, because their work is semi-repetitive and involves keeping track of numbers, a task computers do well.)

Besides pattern recognition, the second, more fundamental problem facing the development of robots is their lack of "common sense." Humans know, for example, that water is wet, that strings can pull but not push, and that mothers are older than their daughters. But there is no calculus or mathematical formula that expresses such facts. We know these things because we have seen animals, water, and string, and have figured out the truth for ourselves. Children learn common sense by bumping up against the real world. The intuitive laws of biology and physics are learned the hard way, through interaction with reality. But robots have no such experience; they know only what has been programmed into them beforehand.

(As a result, jobs of the future will also include those that demand common sense: artistic creativity, originality, acting talent, humor, entertainment, analysis, and leadership. These are precisely the qualities that make us human and that computers find hard to copy.)
In the past, mathematicians tried to create a crash program that would gather all the laws of common sense in one place. The most ambitious attempt is CYC (short for encyclopedia), the brainchild of Douglas Lenat, head of Cycorp. Like the Manhattan Project, the $2 billion crash program that built the atomic bomb, CYC was to be the "Manhattan Project" of artificial intelligence, the final push that would achieve true AI. Not surprisingly, Lenat's motto is: Intelligence is 10 million rules. (Lenat has a novel way of finding new laws of common sense: he has his staff read the tabloids and lurid gossip magazines, and then asks CYC whether it can spot the errors in them. In fact, if Lenat succeeds, CYC will be smarter than most tabloid readers.)

One of CYC's goals is to reach the "breakeven point," at which the robot will know enough to digest new information on its own, simply by reading the magazines and books found in a library. At that point, like a fledgling leaving the nest, CYC will flap its wings and take off on its own.

But since the company's founding in 1984, its credibility has suffered from a problem common in AI: making predictions that grab headlines but prove wildly unrealistic. Lenat predicted that within ten years, by 1994, CYC would contain 30 to 50 percent of "consensus reality." Today CYC is still nowhere near that goal. Cycorp's scientists found that millions of lines of code must be written for a computer to approach the common sense of a four-year-old child. So far CYC contains a paltry 47,000 concepts and 306,000 facts. Despite Cycorp's consistently upbeat press releases, one of Lenat's colleagues, R. V. Guha, who left the team in 1994, is often quoted as saying: "CYC is generally viewed as a failed project... We were killing ourselves trying to create a pale shadow of what was promised."
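Lenat's "intelligence is 10 million rules" philosophy can be caricatured in a few lines: store facts, then chain through if-then rules until nothing new can be derived. The facts and rules below are invented for illustration; a real system like CYC holds hundreds of thousands of them.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion)
    until no new fact is derived: the encode-everything, top-down style."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"rain is falling", "Fido is outside"}
rules = [
    (["rain is falling"], "the ground is wet"),
    (["rain is falling", "Fido is outside"], "Fido is wet"),
    (["Fido is wet"], "Fido is unhappy"),
]

print("Fido is unhappy" in forward_chain(facts, rules))  # -> True
```

The sketch also shows why the approach flounders: every shred of common sense ("rain makes things wet") must be typed in by hand, and the rules needed for a four-year-old's knowledge of the world number in the millions.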
In other words, attempts to program all the laws of common sense into a single computer have floundered, simply because there are so many of them. Humans absorb these laws effortlessly, because throughout our lives we are continually, quietly soaking up the laws of physics and biology from our environment, but robots are not.

Microsoft founder Bill Gates admits: "It has proven much harder than expected for computers and robots to sense their surroundings and react quickly and accurately... for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures, and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot."

Proponents of the top-down approach to AI point out, however, that progress in this direction, albeit fitful, is happening in labs around the world. For instance, in the past few years the Defense Advanced Research Projects Agency (DARPA), which often funds cutting-edge technology projects, has sponsored a $2 million prize for a driverless car that can navigate itself over rough terrain in the Mojave Desert. In 2004, none of the entrants in the DARPA Grand Challenge finished the course. In fact, the best car managed only 7.4 miles before breaking down. But in 2005 Stanford's driverless car managed to complete the grueling 132-mile course (although it took the car seven hours). Four other cars also finished. (Some critics noted that the rules allowed the cars to use GPS navigation along a long, empty desert trail; in effect, a car could follow a pre-mapped route without many obstacles, so the cars never had to negotiate complex obstacles in their path.
In actual driving, cars must cope with the unexpected: other vehicles, pedestrians, construction sites, traffic jams, and so on.)

Bill Gates is cautiously optimistic that robotic machines may be the "next big thing." He likens the robotics field today to the personal computer field he helped launch 30 years ago. Like the PC, it may be poised to take off. "No one can say for sure whether this industry will have a huge impact," he writes, "but if it does, it may change the world profoundly."

(Once robots with humanlike intelligence become commercially available, the market for them will be enormous. Although true robots do not exist today, pre-programmed robots do, and they are multiplying. The International Federation of Robotics estimates that there were about 2 million such personal robots in 2004, with another 7 million to be installed by 2008. The Japan Robot Association predicts that by 2025 the personal robot industry, worth $5 billion a year today, will be worth $50 billion a year.)

Because of the limitations of the top-down approach to artificial intelligence, researchers have turned instead to a "bottom-up approach," trying to mimic evolution and the way a baby learns. Insects, for example, do not navigate by scanning their surroundings and reducing the image to trillions of pixels processed by supercomputers. Instead, insect brains are made of "neural networks," learning machines that learn how to navigate a hostile world by being thrown into it. At MIT, walking robots were notoriously difficult to build with the top-down approach. But simple, insectlike mechanical creatures that engage their surroundings and learn by bumping into things have managed to run up and down the halls of MIT in a matter of minutes.
While exploring the concept of tiny "insectoid" robots, Rodney Brooks, head of MIT's famous Artificial Intelligence Laboratory, long known for its huge, clumsy top-down walking robots, became a heretic. These insect robots learn the old-fashioned way, by stumbling and bumping into things. Instead of using elaborate computer programs to calculate mathematically the precise position of their feet as they walk, his insects use trial and error to coordinate their leg motions with little computing power. Today many descendants of Brooks's insect robots are on Mars gathering data for NASA, scampering across the desolate Martian landscape with minds of their own. Brooks believes his insects are ideally suited for exploring the solar system.

One of Brooks's projects was COG, an attempt to create a mechanical robot with the intelligence of a six-month-old baby. From the outside COG looks like a tangled mess of wires, circuits, and gears, except that it has a head, eyes, and arms. No laws of intelligence have been programmed into it. Instead, it fixes its eyes on a human trainer, who tries to teach it simple skills. (One researcher who was pregnant made a bet over which would learn faster by the time her child was two years old, COG or her child. The child far outstripped COG.)

Despite the successes in mimicking insect behavior, robots using neural networks have performed dismally when their programmers tried to duplicate in them the behavior of higher organisms such as mammals. The most advanced neural-network robot can walk across a room or swim in water, but it cannot leap and hunt through the forest like a dog, or dart around a room like a mouse. Many large neural-network robots consist of tens or at most hundreds of "neurons," whereas the human brain has over 100 billion neurons. C. elegans, a simple worm whose nervous system has been completely mapped by biologists, has just over 300 neurons, making its nervous system perhaps one of the simplest found in nature. Yet there are more than 7,000 synapses among those neurons. Simple as C. elegans is, its nervous system is so complex that no one has yet been able to build a computer model of its brain. (In 1988 one computer expert predicted that by now we would have robots with about 100 million artificial neurons. In fact, a neural network with 100 neurons is considered exceptional today.)

The irony is that robots can effortlessly perform tasks humans consider "hard," such as multiplying huge numbers or playing chess, yet stumble over tasks humans consider "easy," such as walking across a room, recognizing a face, or gossiping with a friend. The reason is that our most advanced computers are essentially just fast adding machines, while our brains have been exquisitely tuned by evolution to solve the mundane problems of survival, which demand a sophisticated architecture for common sense and pattern recognition. Survival in the forest did not depend on calculus or chess, but on evading predators, finding mates, and adapting to changing environments.

Marvin Minsky of MIT, one of the original founders of AI, summed up the field's problems this way: "The history of AI is sort of funny, because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There's no machine today that can do that."
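The bottom-up contrast can be seen in miniature: a single artificial neuron is given no rules at all, only examples, and nudges its weights after every mistake. Below is a minimal perceptron sketch; the function names and the AND task are invented for illustration, not taken from any of the robots described above.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one artificial neuron from (inputs, target)
    examples, adjusting after each error: learning, not programming."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND purely from examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # -> [0, 0, 0, 1]
```

Nowhere is the rule "output 1 only when both inputs are 1" written down; it emerges from trial and error, just as Brooks's insects coordinate their legs without explicit equations.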
Some believe that a neat marriage of the top-down and bottom-up approaches may eventually provide the key to artificial intelligence and humanlike machines. After all, when a child learns, although he at first relies mainly on the bottom-up approach, bumping up against his surroundings, he eventually receives top-down instruction from parents, books, and schoolteachers. As adults we constantly blend the two: a chef reads a recipe but also continually tastes the dish as it cooks. Hans Moravec has said that fully intelligent machines will result "when the metaphoric golden spike is driven uniting the two efforts," perhaps within the next 40 years.

One constant theme of literature and art is the mechanical being that longs to be human, to share human joys and sorrows. Unhappy at being made of wire and cold steel, it wishes it could laugh, weep, and feel all the emotional pleasures of a human. Pinocchio, the puppet, wanted to become a real boy; the Tin Man of The Wizard of Oz wanted a heart; Data of Star Trek is a robot superior to humans in strength and intellect, yet it still yearns to become human.

Some people even suggest that our emotions represent the highest expression of what it means to be human. No machine, they declare, will ever thrill at a blazing sunset or laugh at a funny joke. Some say machines can never have emotions, because emotions are the pinnacle of human development.

But the scientists working in AI who are trying to decode emotions paint a different picture. To them, emotions, far from being the essence of humanity, are actually a by-product of evolution. Simply put, emotions are good for us. They helped us survive in the forest, and even today they help us gauge the dangers of life.

For example, "liking" things is evolutionarily very important, because most things are harmful to us. Of the millions of objects we encounter every day, only a handful are beneficial. To "like" something is therefore to single out the tiny fraction of things that can help us from the millions that might hurt us.

Likewise, jealousy is an important emotion, because reproductive success is critical to ensuring that our genes pass to the next generation. (That, in fact, is why so many emotionally charged, aggressive feelings surround sex and love.)

Shame and embarrassment matter because they help us learn the social skills needed to function in a cooperative society. If we never said sorry, we would eventually be expelled from the tribe, reducing our chances of surviving and passing on our genes.

Loneliness, too, is an essential emotion. At first it seems unnecessary and superfluous; after all, we can live alone. But the longing for companionship is also important for survival, because we depend on the resources of the tribe to survive.

In other words, as robots become more advanced, they too may be equipped with emotions. Perhaps robots will be programmed to bond with their owners or caretakers, to ensure that they do not end up in the junkyard. Having such emotions would help ease their transition into society, so that they become helpful companions rather than rivals of their masters.

The computer expert Hans Moravec believes robots will be programmed with emotions such as "fear" in order to protect themselves. If a robot's batteries were running low, for example, the robot "would express agitation, or even panic, with signals humans can recognize. It would go to the neighbors and ask to use their electrical outlet, saying, 'Please! Please! I need this! It's so important, and it costs so little! We'll pay you back!'"

Emotions are also vital in decision making. People who have suffered a certain kind of brain injury lack the ability to experience emotions. Their reasoning ability is intact, but they cannot express any feelings. Dr. Antonio Damasio, a neurologist at the University of Iowa College of Medicine who has studied people with this type of brain damage, concludes that they seem "to know, but not to feel."
Dr. Damasio found that such people are paralyzed when making even the smallest decisions. Without emotions to guide them, they deliberate endlessly over this option or that, leading to crippling indecision. One of Dr. Damasio's patients spent half an hour trying to decide the date of his next appointment.

Scientists believe that emotions are processed in the brain's "limbic system," which lies deep at the center of the brain. When people suffer a breakdown in communication between the neocortex (which governs rational thinking) and the limbic system, their reasoning remains intact, but they lack the emotions that guide their decisions. Sometimes we have a "hunch" or a "gut reaction" that drives our decisions. People with injuries affecting the communication between the rational and emotional parts of the brain do not have this ability.

For example, when we go shopping we unconsciously make thousands of value judgments about everything we see, such as "this is too expensive, too cheap, too flashy, too dumb, or just right." For people with this kind of brain damage, shopping can become a nightmare, because everything seems to carry the same value.

As robots become smarter and able to make their own choices, they too could be paralyzed by indecision. (This recalls the parable of the donkey sitting between two bales of hay that eventually starves to death because it cannot decide which one to eat.) To help them, the robots of the future may need emotions hardwired into their brains. Commenting on robots' lack of emotions, Dr. Rosalind Picard of the MIT Media Lab says: "They can't feel what is most important. That's one of their biggest failings. Computers just don't get it."

As the Russian novelist Fyodor Dostoevsky wrote: "If everything on Earth were rational, nothing would ever happen."

In other words, the robots of the future may need emotions in order to set goals and to give meaning and structure to their "lives"; otherwise they will find themselves paralyzed before infinite possibilities.

There is no consensus on whether machines can be conscious, or even on what "consciousness" means. No one has yet given a suitable definition of consciousness.

Marvin Minsky describes consciousness as more of a "society of minds," that is, the thinking process in our brain is not localized but spread out, with different centers competing with one another at any given time. Consciousness may then be viewed as a sequence of thoughts and images issuing from these different, smaller "minds," each of them eagerly vying and competing for our attention.

If this is true, then perhaps "consciousness" has been oversold, and perhaps too many papers have already been written about a subject that philosophers and psychologists have overly mystified. Maybe defining consciousness is not so hard after all. As Sydney Brenner of the Salk Institute in La Jolla puts it: "I predict that by 2020—a year of perfect vision—consciousness will have disappeared as a scientific problem... Our successors will be amazed by the amount of scientific rubbish discussed today, if they have the patience to plow through the electronic archives of obsolete journals."

In Marvin Minsky's words, AI research suffers from "physics envy." The holy grail of physics is to find a simple equation that unifies all the forces of the universe into a single theory, creating a "theory of everything." Overly influenced by this idea, AI researchers have tried to find a single paradigm that would explain consciousness. But in Minsky's view, no such simple paradigm may exist.

(Those of us in the "constructionist" school believe that one should try to build a thinking robot rather than endlessly debate whether thinking machines can be created. Regarding consciousness, there is probably a continuum of consciousness, from a lowly thermostat regulating the temperature of a room to the self-aware organisms we are today. Animals may be conscious, but they do not experience the level of consciousness of a human being. Hence one should try to categorize the various types and levels of consciousness rather than debate philosophical questions about its definition. Robots may eventually attain a "silicon consciousness." Robots may one day embody an architecture for thinking and processing information that is different from ours. In the future, advanced robots may blur the distinction between syntax and semantics, so that their responses become indistinguishable from those of a human. If so, the question of whether they really "understand" the question becomes largely irrelevant. A robot that has perfect mastery of syntax, for all practical purposes, understands what is being said. In other words, perfect mastery of syntax is understanding.)

Given Moore's law, which states that computer power doubles every 18 months, it is conceivable that within the coming decades robots will be built with the intelligence of, say, a dog or a cat. But by 2020 Moore's law may well collapse, and the age of silicon will come to an end. For fifty years or so, the astonishing growth of computer power has been fueled by the ability to make ever-smaller silicon transistors, tens of millions of which can easily fit on your fingernail. Ultraviolet radiation is used to etch microscopic transistors onto silicon chips. But this process cannot go on forever. Eventually these transistors will become so small that they reach the size of molecules, and the process will break down. After 2020, when the silicon era finally draws to a close, Silicon Valley could become a "rust belt."
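The doubling figure cited above can be made concrete with a short sketch (my own illustration; only the 18-month doubling period comes from the text):

```python
# Moore's law as cited in the text: computing power doubles every 18 months.
def moore_factor(years, doubling_period_years=1.5):
    """Growth factor after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period_years)

# After 15 years there have been ten doublings: a factor of 1024.
print(moore_factor(15))  # 1024.0
```

Run forward a few decades, this exponential is what makes dog- or cat-level machine intelligence conceivable; run forward to molecular scales, it is also what guarantees the extrapolation must eventually fail.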
The Pentium chip in your laptop has a layer about 20 atoms across. By 2020, that layer could be only 5 atoms across. At that point the Heisenberg uncertainty principle kicks in, and you no longer know where the electron is. Electricity will then leak out of the chip, and the computer will short-circuit. At that point, the computer revolution and Moore's law will run into the laws of quantum theory. (Some have claimed that the digital age was "the triumph of the bit over the atom." But ultimately, when we reach the limits of Moore's law, the atoms may take their revenge.)

Physicists are now working on the "post-silicon" technologies that might dominate computing after 2020, but so far the results have been mixed. As we have seen, a variety of technologies under study might eventually replace silicon, including quantum computers, DNA computers, optical computers, atomic computers, and so on. But each of them faces enormous hurdles before it can take over from silicon chips. Manipulating individual atoms and molecules is a technology still in its infancy, so fabricating billions of atom-sized transistors remains beyond our ability.

Suppose, for the moment, that physicists are able to bridge the gap between silicon chips and quantum computers, and that some form of Moore's law continues into the post-silicon era. Then artificial intelligence might become a real possibility. At that point robots could master human logic and emotions and pass the Turing test every time. Steven Spielberg explored this question in his film Artificial Intelligence: AI, in which the first robot boy capable of expressing emotions is created and thus becomes eligible for adoption by a human family.

This raises the question: could such robots be dangerous? The answer may well be yes. They could become dangerous once they attain the intelligence of a monkey, which implies self-awareness and the ability to set their own goals. Reaching that level may take many decades, so scientists will have plenty of time to watch robots before they pose a threat. For example, a special chip could be placed in their processors to prevent them from going on a rampage. Or they could be equipped with self-destruct or deactivation mechanisms that shut them down in an emergency.

Arthur C. Clarke wrote: "It is possible that we may become pets of the computers, leading pampered existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."

A more immediate threat comes from our infrastructure's dependence on computers. Our water and power grids, not to mention our transportation and communication networks, will be even more computerized in the future. Our cities have become so complex that only intricate, interlocking computer networks can control and administer our vast infrastructure, and in the future it will be increasingly important to add artificial intelligence to these networks. A failure or breakdown of this ubiquitous computer infrastructure could paralyze a city, a country, or even a civilization.

Will computers eventually surpass us in intelligence? Certainly, nothing in the laws of physics prevents it. If robots can learn as neural networks do, and if they develop to the point where they can learn faster and more efficiently than we can, then it is logical that they might eventually surpass us in thinking ability. Moravec says: "[The postbiological world] is a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny... When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition."

Some inventors, such as Ray Kurzweil, predict that this moment will come soon, sooner than expected, perhaps even within the coming decades. Perhaps we are creating our own evolutionary successors. Some computer scientists envision a point they call the "singularity," at which robots will be able to process information at exponential speed, creating new robots in the process, until their collective ability to absorb information advances almost without limit.

So in the long run, some advocate a merger of carbon technology with silicon technology, rather than waiting around for our own extinction. We humans are based mainly on carbon, but robots are based on silicon (at least for now). Perhaps the solution is to merge with our creations. (If we ever encounter extraterrestrials, we should not be surprised to find that they are part organic, part mechanical, the better to withstand the rigors of space travel and to flourish in hostile environments.)

In the distant future, robots or humanlike cyborgs may even offer us the gift of immortality. Marvin Minsky adds: "What if the sun dies out, or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may have to be the architects of our own future. If we don't, our culture could disappear."

Moravec imagines a time in the distant future when our neural architecture can be transferred directly into a machine, neuron by neuron, giving us, in a sense, immortality. It is a wild idea, but not beyond the realm of possibility. So, according to some scientists who look to the future, immortality (in the form of enhanced DNA or silicon bodies) may be the ultimate destiny of humanity.

Building thinking machines at least as smart as animals, and perhaps as smart as or smarter than we are, might become reality if we can overcome the collapse of Moore's law and the common-sense problem, perhaps even late in this century. Although the basic laws of AI are still being discovered, progress in this field is extremely rapid and promising. Because of this, I classify robots and other thinking machines as a "Class I impossibility."