
Chapter 10  Seeing and Feeling: Making the Computer See

Being Digital · Nicholas Negroponte
Compared with a modern bathroom or an outdoor floodlight equipped with sensors, the personal computer is remarkably insensitive to human presence. A cheap autofocus camera knows more about what is in front of it than any terminal or computing system, and in that respect is smarter than a computer. When you lift your hands from the keyboard, the keyboard has no idea whether you paused to think, took a natural break, or went out to lunch. It can't tell whether it is talking to you alone or whether six other people are standing in front of it. It doesn't know whether you are in evening dress, in party clothes, or wearing nothing at all. Because of this, you may have your back turned just when it displays important information on the screen, or you may have walked away and be unable to hear it when it speaks to you.

Today our efforts are focused entirely on making computers easier for people to use. Perhaps the time has come to ask the opposite question: how can we make it easier for computers to deal with people? After all, how can you discuss something with someone when you don't even know whether they are there? You can't see them, and you don't know how many of them there are. Are they smiling? Are they paying attention to what you are saying? We talk longingly about human-computer interaction and dialogue systems, yet we deliberately keep one party to the conversation completely in the dark.

Now is the time for computers to see and hear.

Never tired of looking

Research and applications of computer vision have long been geared almost exclusively toward scene analysis, used especially for military purposes such as unmanned vehicles and smart bombs. Applications in outer space have also pushed the technology forward. If you send a robot to roam the Moon, it is not enough for the robot simply to transmit what it sees back to an operator on Earth, because even at the speed of light the signal takes too long. If the robot walks toward the edge of a cliff, by the time the human operator sees the cliff on video and hurriedly sends a message telling it to stop, the robot has already fallen over the edge. This is just one example of scene analysis: in such cases the robot must make its own judgment based on what it sees.
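To make the point concrete, here is a back-of-the-envelope calculation (mine, not the book's) of the signalling delay the passage alludes to, using the average Earth-Moon distance; the exact figures vary with the Moon's position, but the conclusion does not.

```python
# Rough estimate of why light-speed remote control of a lunar robot is too slow.
# Figures are approximate averages, used only to illustrate the argument above.

MOON_DISTANCE_KM = 384_400       # average Earth-Moon distance
SPEED_OF_LIGHT_KM_S = 299_792    # speed of light in vacuum, km per second

one_way_s = MOON_DISTANCE_KM / SPEED_OF_LIGHT_KM_S   # image travels Moon -> Earth
round_trip_s = 2 * one_way_s                          # plus the "stop!" command Earth -> Moon

print(f"one-way delay:    {one_way_s:.2f} s")    # about 1.3 seconds
print(f"round-trip delay: {round_trip_s:.2f} s") # about 2.6 seconds, before any human reaction time
```

Two and a half seconds of unavoidable delay, before the operator has even reacted, is plenty of time for a walking robot to reach the bottom of the cliff.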

Scientists are not only learning more about images; they have also developed techniques that can, for example, infer shape from shading or pick an object out of its background. Only recently, however, have they begun to study how computers might recognize people, with the aim of improving the human-machine interface. In fact, your face is your display device, and the computer should be able to read it; to do so it must recognize your face and your individual expressions. Our expressions are closely tied to what we are trying to express. When we talk on the phone, we don't let our faces go blank just because the person at the other end can't see us. If anything, we work the facial muscles harder and gesture more extravagantly to reinforce the weight and tone of what we are saying. A computer that could sense our expressions would receive a complex, parallel signal that enriches our spoken and written messages.

Enabling computers to recognize human faces and expressions is a daunting technical challenge. Still, in some situations it is perfectly achievable. Sitting one on one with you, the computer need only determine that the person in front of it is you, as opposed to anyone else on Earth, and separating a person from the background is comparatively easy. Chances are that in the not-too-distant future, computers will be able to see you. When the Gulf War broke out in 1990-1991, much business travel was curtailed and teleconferencing flourished; since then, more and more personal computers have come equipped with inexpensive teleconferencing gear.
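As an aside, not from the book: with today's off-the-shelf tools, the simplest form of "knowing someone is there" described above is a few lines of code. The sketch below is a minimal illustration using the OpenCV library and its stock face detector; the library, camera index, and parameters are my assumptions, not anything the text specifies.

```python
# Minimal sketch: let the computer notice whether a face is in front of its camera.
# Requires the opencv-python package and a webcam.
import cv2

# OpenCV ships a pre-trained Haar-cascade frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

camera = cv2.VideoCapture(0)   # the same camera already sold for teleconferencing
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        print(f"Someone is sitting in front of me ({len(faces)} face(s) found).")
    else:
        print("Nobody seems to be here; no point talking to an empty chair.")
else:
    print("No camera available.")
```

Verifying that the person is you in particular, as the paragraph suggests, would further require matching the detected face against a stored reference, a harder but well-studied step.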

Teleconferencing hardware consists of a TV camera mounted at the center of the monitor, together with the hardware and software needed to encode, decode, and display all or part of the image on the computer screen in real time. Personal computers are thus becoming better and better equipped for visual communication, even though the designers of these systems never intended the camera as a way for us to communicate with the computer itself face to face. But so what? Neil Gershenfeld at our Media Lab did an interesting study comparing a $30 mouse, which takes a few minutes to learn, with a cello costing tens of thousands of dollars, whose bow takes a lifetime to master. He compared sixteen bowing techniques with mouse clicks, double clicks, and drags. The bow of the cello is designed for musicians; the mouse is designed for people like you and me.

For graphical input, the mouse is a simple but cumbersome medium. Using it takes four steps: (1) grope for the mouse; (2) jiggle it to find the cursor; (3) move the cursor to where you want it; (4) click or double-click the button. The innovative design of Apple's PowerBook computers cuts these steps to at most three, using a stationary, finger-operated pointing device (a trackball, more recently replaced by a trackpad), so that the interruption to typing is kept to a minimum. For drawing, though, the mouse and the trackball are hopeless; believe it or not, try signing your name with a trackball. For tasks like that it is far better to use a data tablet: a stylus with a ballpoint-pen-like tip, moved over a smooth surface.

Few computers come equipped with graphics tablets, and those that do seem to suffer from a kind of schizophrenia: there is no good way to position both the tablet and the keyboard, because each works best centered just below the monitor. The conflict is usually resolved by giving the keyboard that spot, since most people (myself included) hardly draw at all. As a result, the tablet ends up off to the side next to the mouse, and we have to learn an unnatural kind of hand-eye coordination: while the hand works the tablet or mouse below, the eyes stay on the screen. In other words, we draw by touch. The mouse was invented by Douglas Engelbart in 1964; he designed it for pointing at things, not for drawing. But the invention survived and is everywhere today. Jane Alexander, chair of the National Endowment for the Arts, recently joked that only a man would have thought of calling it a mouse.

A year earlier, Ivan Sutherland had already perfected the idea of drawing directly on the screen with a light pen (air-defense systems had used crude light pens in the 1950s). Sutherland's method tracked a cross-shaped cursor made of five spots of light; to stop drawing, you simply flicked your wrist to break the tracking, a neat but imprecise way to end a line. Today the light pen has virtually disappeared. It is one thing to hold your hand up to a screen at all (a position that becomes exhausting as the blood drains out of your raised hand); holding a two-ounce pen tethered to the computer by a cord tires the palm and arm even faster. Some light pens are half an inch in diameter, which makes drawing feel like writing a postcard with a cigar.

Drawing on a data tablet is exceptionally comfortable, and with a little thoughtful design the stylus tip can reproduce the texture and richness of an artist's brush. Until now, though, tablets have mostly felt like drawing with a ballpoint pen on a smooth, hard board, and you still have to find a place on your desk for the board, close to both you and your monitor. Since our desks are already covered with stuff, the only way data tablets will really catch on is for furniture makers to build them directly into the desktop, so that there is no separate tablet, only the desk itself.

Your Eyes Can Talk

Imagine reading something on a computer screen and asking: What does that mean? Who is she? How do I get there? The "that," "she," and "there" in those questions are determined by where your eyes are looking at that moment; the questions hinge on the point of contact between your eyes and the document. We don't usually think of our eyes as output devices, but we use them to output information all the time.

The human ability to detect the direction of another person's gaze, and to communicate with it, is truly remarkable. Imagine someone standing twenty feet away who sometimes looks straight into your eyes and sometimes looks just past your shoulder into the distance. Even when that gaze is less than a degree off yours, you can feel the difference immediately. How does this happen? You certainly aren't doing trigonometry, computing whether the other person's line of sight intersects your own. No, something subtler is at work. Messages clearly pass between your eyes and theirs, but we don't know how.

Tracking eye movements

After all, we use our eyes to point at things all the time. When someone asks where so-and-so has gone, your answer may simply be a glance at the open door. When you explain what to pack, you may stare at one suitcase rather than another. This pointing with the gaze, combined with head movement, can be a very powerful channel of communication.

Today there are technologies that can track eye movement. One of the first I saw was an eye tracker worn on the head. As you read a document, the tracker changed the text on the screen from English to French: as your center of gaze moved from word to word, every word you looked at was in French, so to you the whole screen appeared to be 100 percent French. To a bystander whose eyes were not being tracked, however, about 99 percent of the screen was in English; only the single word the wearer was looking at was in French.

More modern eye-tracking systems use remote TV cameras, so the user doesn't have to wear anything. Teleconferencing setups with video are especially well suited to eye tracking, because the user tends to sit at a fairly constant distance from the screen and usually looks into the eyes of the person at the other end, whose position on the screen the computer knows. The more the computer knows about your position, your posture, and the characteristics of your eyes, the easier it is to tell where you are looking. Ironically, this unorthodox use of the eyes as an input device may first be applied to a rather prosaic arrangement: a person sitting at a computer desk. And of course, if you use your eyes (looking) together with another input channel, the mouth (speaking), the effect is better still.
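For readers who want a number behind the earlier remark that a gaze "less than a degree" off yours is still noticeable at twenty feet, here is a small illustrative calculation of my own (not the book's); the point, of course, is that we perceive this without doing any such trigonometry.

```python
import math

# How far off-target is a gaze that misses your eyes by one degree,
# seen from twenty feet away? (Illustrative arithmetic only.)
distance_ft = 20.0
angle_deg = 1.0

offset_ft = distance_ft * math.tan(math.radians(angle_deg))
print(f"offset: {offset_ft:.2f} ft  (~{offset_ft * 12:.1f} inches)")
# roughly 0.35 feet, i.e. about four inches -- and yet we notice it instantly
```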
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book