Out of Control: The New Biology of Machines, Society, and the Economy

11.4 Dealing with errors

As bullish as I am on the network economy, I see real problems ahead, problems shared by other large, decentralized, autonomous systems: they are difficult to understand, they are less easily controlled, and they are not optimal. As companies dematerialize into a Barlow-style cyberspace, they take on the character of software: no pollution, no weight, quick, useful, mobile, and fun. But they may also become very complicated, riddled with glitches that no one can pin down. What would it mean if the companies and products of the future were like the software of today? TVs that crash? Cars that stall without warning? Toasters that bomb?

Large software programs are probably the most complex things humans can make today. Microsoft's new operating system runs to 4 million lines of code. Naturally, after beta tests at 70,000 sites, Bill Gates will assure us the software has no bugs. So is it possible to make something supremely complex with no bugs, or at least very few? Can the network economy help us build complex systems without flaws, or can it only build us complex systems riddled with them? Whether or not companies themselves come to resemble software, more and more of the products they make will depend on increasingly complex software, so creating complex systems free of defects is absolutely necessary.

In the field of simulation, verifying a simulation's fidelity is the same kind of problem as testing whether a large, complex piece of software is flawed. The computer scientist David Parnas once raised eight criticisms of Ronald Reagan's Star Wars program, all grounded in the inherent instability of extremely complex software, which is exactly what Star Wars would have been. One of Parnas's most interesting points is that there are two kinds of complex systems: continuous and discontinuous. When General Motors tests a new car's ability to handle tight corners, it runs the car at different speeds, say 50, 60, and 70 miles per hour. Performance varies continuously with speed: if the car passes the test at 50, 60, and 70 mph, we know without testing that it will also pass at intermediate speeds such as 55 or 67 mph.

The engineers don't have to worry that the car will suddenly sprout wings or flip upside down at 55 mph; its behavior at that speed is essentially an interpolation of its behavior at 50 and 60 mph. A car is a continuous system. Computer software, distributed networks, and most living systems are discontinuous systems. In a complex adaptive system you simply cannot rely on interpolation to predict the system's behavior. Your software may run smoothly for years, and then suddenly, at some particular value (say, 63.25 miles per hour), the system blows up with a bang, or mutates into something entirely new.
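To make the distinction concrete, here is a toy sketch in Python (every function and number in it is invented for illustration, not drawn from Parnas): two systems pass the same sampled tests, but only the continuous one honors interpolation between the samples.

```python
def car_cornering(speed_mph: float) -> float:
    """Continuous system: response varies smoothly with speed."""
    return 0.9 - 0.004 * speed_mph   # grip margin shrinks gradually


def software_under_load(speed_mph: float) -> float:
    """Discontinuous system: identical, except for one hidden breakpoint."""
    if abs(speed_mph - 63.25) < 0.01:        # the value nobody sampled
        raise RuntimeError("system explodes with a bang")
    return 0.9 - 0.004 * speed_mph


# Test at 50, 60, and 70 mph: both systems pass, so interpolation looks safe.
for v in (50, 60, 70):
    car_cornering(v)
    software_under_load(v)
print("all sampled speeds pass")

# Only the continuous system keeps its promise between the samples.
try:
    software_under_load(63.25)
except RuntimeError as exc:
    print("hidden breakpoint:", exc)
```

No sampling plan of reasonable size is likely to land on 63.25 exactly, which is the whole problem.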

The breakpoint was there all along. You tested all the neighboring values, just not this particular combination of circumstances. After the failure you can see at a glance why it crashed the system; you can even point out exactly why someone should have caught it. But that is all hindsight. In a system with a vast space of possibilities, testing every one is simply impossible. Worse, because the system is discontinuous, you cannot rely on sampling to test it: for a supremely complex system, the tester has no grounds for confidence that untested values will bear any continuous relation to the sampled ones. Despite all this, there is now a movement toward "zero-defect" software design. Needless to say, the movement arose in Japan.

For small programs, the zero in "zero defects" means 0.000. For very large programs, "zero" means 0.001 or fewer, that is, the allowable defects per thousand lines of code, and even this is only a rough proxy for product quality. These approaches to writing zero-defect software draw heavily on the pioneering work of the Japanese engineer Shigeo Shingo on zero-defect manufacturing. Of course, computer scientists protest that "software is different": software can be copied perfectly, so it is only necessary to get the original copy to zero defects.
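For clarity, here is the arithmetic behind that standard as a minimal sketch (the sample numbers are hypothetical): the "zero" is a density, not an absolute count.

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density: errors per thousand lines of code."""
    return defects / (lines_of_code / 1000)


# A hypothetical 4-million-line system just meeting the 0.001 bar
# could still ship with about 4 latent defects in total.
print(defects_per_kloc(4, 4_000_000))   # 0.001
```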

In a network economy, the cost of developing a new product lies mainly in designing the production process rather than in designing the product. The Japanese excel at designing and improving processes; Americans excel at designing and improving products. The Japanese regard software itself as a process rather than a product. And in the emerging network culture, more and more of what we make, and certainly more of our wealth, is bound up in symbolic processing, in assembling code rather than assembling physical objects. Software-reliability guru C. K. Cho admonished the industry not to think of software as a product but as a portable factory. What you sell, or rather what you give the customer, is a factory (the program code) that manufactures an answer whenever the customer needs one. Your problem is to build a factory that produces zero-defect answers. The same methods that build factories turning out perfectly reliable devices can readily be applied to building factories that turn out perfectly reliable answers.

Typically, software construction proceeds in three centralized stages: design the grand picture, implement the details in code, and then, near the end of the project, test the whole as an interacting system. Zero-defect design disperses these few large stages into thousands of small ones. Software is designed, written, and tested every day in hundreds of little workshops, one person busy in each. The zero-defect evangelists have a catchphrase that encapsulates the network economy: "Every person in the company has a customer." Usually that customer is the coworker you hand your work to, and you must complete your own little loop (design, write, test) and get it right before delivering it to that coworker, just as if you were selling a product.

When you deliver your work to your customer/coworker, he or she tests it immediately, feeds the errors back to you for correction, and lets you know how well the job was done. In a sense, this bottom-up way of growing software is not fundamentally different from Rodney Brooks's subsumption architecture: each small step is a small module of code that guarantees its own correct operation, and on top of it people stack, and test, ever more complex layers. These small steps alone do not produce zero-defect software, though. The goal of "zero defects" rests on a key conceptual distinction: a defect is an error that gets delivered, while an error corrected before delivery is not a defect. As Shigeo Shingo puts it, "We absolutely cannot keep from making errors, but we can keep those errors from becoming defects." The task of zero-defect design, then, is to catch errors as early as possible and correct them as early as possible.
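As a hedged sketch of that small-loop discipline (all functions and values here are invented for illustration): each tiny unit ships with its own test and is "sold" to the next coworker only after passing, and the higher layer is then tested on top of already-proven parts.

```python
def parse_price(text: str) -> float:
    """Small workshop #1: one person, one unit, one test."""
    return float(text.strip().lstrip("$"))


def apply_tax(price: float, rate: float = 0.08) -> float:
    """Small workshop #2."""
    return round(price * (1 + rate), 2)


def test_parse_price():      # the unit proves itself before "delivery"
    assert parse_price(" $10.00 ") == 10.0


def test_apply_tax():
    assert apply_tax(10.0) == 10.80


def test_checkout_layer():   # higher layer, built only on proven units
    assert apply_tax(parse_price("$10.00")) == 10.80


for test in (test_parse_price, test_apply_tax, test_checkout_layer):
    test()                   # errors surface here, before the hand-off
print("all layers delivered without defects")
```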

But that much is obvious. The real advance is to detect the cause of an error early and remove that cause early. If a worker keeps inserting the wrong bolt, install a system that prevents wrong bolts from being inserted. People make errors; systems handle them. The classic Japanese invention in this field is the poka-yoke system: making processes "foolproof" against human error. A few ingenious, simple devices on the assembly line keep mistakes from happening at all. For example, a tray for bolts with one recess per bolt: if a bolt is left on the tray, the operator knows one was missed. In software production, one kind of error-proofing is the "spell checker" that refuses to let the programmer type a misspelled command, or even an illegal (illogical) one. Software developers now have a growing choice of very sophisticated "autocorrectors" that scan the programs they are writing for typical errors.
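A software analogue of poka-yoke, sketched minimally (the command set is invented): rather than catching a bad command after it is typed, make the illegal command impossible to express in the first place.

```python
from enum import Enum


class Command(Enum):
    """The only 'bolts' that fit the holes."""
    START = "start"
    STOP = "stop"
    STATUS = "status"


def run(cmd: Command) -> str:
    # No spell check needed here: a misspelled or illogical command
    # cannot even be constructed as a Command value.
    return f"executing {cmd.value}"


print(run(Command.START))      # fine
try:
    Command("strat")           # the typo is stopped at the source
except ValueError as exc:
    print("blocked:", exc)
```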

Then there are the top-end development tools that analyze and evaluate a program's logic ("Hey! This step makes no sense!"), cleaning up logic errors the moment they appear. A software trade magazine recently listed nearly a hundred error-detection and correction tools for sale. The most elegant of them offer the programmer a logical choice of corrections, much as good spell checkers do.

Another essential error-proofing method is modularizing complex software. A 1982 study in IEEE Transactions on Software Engineering showed how, all else being equal, dividing a program of the same total length into subprograms reduces bugs. A 10,000-line program written as a single block carried 317 bugs; the same 10,000 lines divided into three subprograms carried slightly fewer, 265. The reduction per subdivision follows a roughly linear relation, so while modularization cannot solve the problem completely, it is an effective remedy. Moreover, below a certain size threshold a subprogram can be entirely error-free. IBM's code for its IMS series was written in modules, and three-quarters of them achieved a completely defect-free state: of 425 modules, 300 had no bugs at all, and in the 125 that did, more than half of the errors were concentrated in just 31 modules. In this sense, modularity in programming is reliability.

The hottest frontier in software design today is so-called "object-oriented" software. An object-oriented program (OOP) is a relatively decentralized, modular program: each of its "pieces" is a self-contained unit that maintains its own integrity and can combine with other pieces into a decomposable hierarchy of instructions. The "object" limits the damage a bug can do. Unlike traditional programming, where a fault in one place can crash the whole program, OOP effectively isolates functions into manageable units, so that even if one object breaks, the rest of the program keeps running; the programmer can swap out the broken unit the way we replace a car's brake pads. Software vendors can buy and sell libraries of precompiled objects to other developers, who can then assemble large programs quickly from the objects in those libraries instead of writing new code line by line as before. And when the time comes to upgrade such a program, all you do is upgrade old objects or add new ones.

The objects in OOP are like Lego bricks, except that each brick may carry a tiny bit of intelligence. An object might be like the folder icon on a Macintosh screen, except that it knows it is a folder and can answer a program's request to list everything it contains. An object could be a tax form, or a company's employee database, or an e-mail message. An object knows what it can and cannot do, and it communicates laterally with other objects.
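A toy sketch of that fault isolation (the classes are invented for illustration, loosely echoing the folder-icon example): one object can crash and be swapped out, like a brake pad, while the rest of the program keeps running.

```python
class FolderIcon:
    """An object that knows it is a folder and can answer requests."""
    def __init__(self, contents):
        self.contents = contents

    def list_contents(self):
        return list(self.contents)


class BrokenIcon:
    """A defective unit whose failure stays fenced inside itself."""
    def list_contents(self):
        raise RuntimeError("this unit has crashed")


desktop = [FolderIcon(["a.txt"]), BrokenIcon(), FolderIcon(["b.txt"])]

for i, icon in enumerate(desktop):
    try:
        print(icon.list_contents())
    except RuntimeError:
        # The damage is confined to one object; replace it like a
        # worn brake pad and carry on.
        desktop[i] = FolderIcon([])
        print(desktop[i].list_contents())
```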
Object-oriented programming produces software with a modest degree of distributed intelligence. Like other distributed beings, it is somewhat resistant to error, it repairs quickly (delete the offending object), and it grows by assembling working units.

Those 31 buggy modules in IBM's code illustrate another trait of software: errors come in clusters. Quality managers can exploit this trait in pursuit of sigma-level precision. Zero Defect Software, the bible of the zero-defect movement, puts it this way: "The next error you find is very likely to be in a module where eleven errors have already been found, while the modules that never had errors will probably remain error-free." Bug clustering is so common in software that it is treated as a devil's law of the trade: find one bug, and you can bet a pile of unseen bugs is waiting nearby. The book's remedy follows directly: "Don't throw money after buggy code; ditch it! The cost of rewriting a piece of code is about the same as the cost of patching a badly buggy module. If a software unit exceeds an error threshold, throw it out and have a different developer rewrite the code. If the code you are writing shows an error-prone streak, abandon it, because errors early on mean errors later."

As software complexity races upward, exhaustive last-minute inspection becomes impossible. Because these are discontinuous systems, there will always be some freak combination of circumstances or some fatal response lurking, triggered perhaps only one time in a million, that neither systematic testing nor sampled testing will uncover. And while statistical sampling can tell us whether something is likely to be wrong, it cannot pinpoint where.

The neo-biological solution is to assemble the program from working units, testing and correcting as it grows. We still face the problem that even when every unit is sound, unexpected "emergent behaviors" (that is, bugs) arise from their interactions. But now you only have to test at the higher level (the underlying units having already proven themselves), so there is real hope of "zero defects"; this beats wrestling with surface problems and deeply buried problems at the same time.

Ted Kaehler invents new software languages for a living. A pioneer of object-oriented languages and a developer of Smalltalk and HyperCard, he is now working on a "direct manipulation" language at Apple. When I asked him about zero-defect software at Apple, he said: "I think you can achieve zero defects in productized software, say, yet another database program you're writing. As long as you truly understand what you're doing, you can write it without errors." But Ted would never fit into the Japanese software factories. As he put it, "A good programmer can rewrite any known, routine piece of software and elegantly shrink the code. But in creative programming nothing is fully understood. You are writing things you don't yet understand... Well, yes, you can write zero-defect software, but it will run thousands of lines longer than it needs to."
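A minimal sketch of that triage rule (the threshold and module names are invented; the cutoff merely echoes the book's "eleven errors" remark): track defects per module and scrap, rather than patch, any module past the threshold.

```python
REWRITE_THRESHOLD = 11   # hypothetical cutoff, echoing the quote above

bug_counts = {"parser": 0, "scheduler": 2, "billing": 14}

for module, bugs in bug_counts.items():
    if bugs >= REWRITE_THRESHOLD:
        # Bugs cluster: a badly buggy module predicts more bugs to come.
        print(f"{module}: discard and have it rewritten from scratch")
    else:
        print(f"{module}: keep; error-free modules tend to stay error-free")
```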
Nature works the same way: it trades simplicity away for reliability. The unoptimized character of natural neural circuits has long astonished scientists. Researchers studying the nerve cells in a crayfish's tail have shown how shockingly bloated and inelegant that circuit is; with a little effort they could design a far more compact one. But while the crayfish tail circuit carries more redundancy than it strictly needs, it does not fail. The price of zero-defect software is likewise that it is "overengineered," overbuilt, somewhat bloated, never lingering at the edge of the unknown where Ted and his friends like to roam. It trades efficiency of execution for efficiency of production.

I once asked the Nobel laureate Herbert Simon how the zero-defect philosophy squares with his concept of "satisficing," seeking not the best but the good enough. He laughed and said, "Oh, you can make products with zero defects. The question is, can you do it profitably? If you care about profit, then your zero defects get satisficed." Ah, the complexity trade-off again.

The future of the network economy lies in designing reliable processes rather than reliable products. And the nature of that economy means the process can never be optimized. In a distributed, semi-living world, all our goals can be satisficed only for a fleeting moment; a day later the whole situation may have changed entirely, one act leaving the stage just as the next one comes on.
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book