Technologies Always Have Side Effects
In addition to its intended benefits, every design is likely to have unintended side effects in its production and application. On the one hand, there may be unexpected benefits. For example, working conditions may become safer when materials are molded rather than stamped, and materials designed for space satellites may prove useful in consumer products. On the other hand, substances or processes involved in production or use may harm workers or the public in general; for example, sitting in front of a computer may strain the user's eyes and lead to isolation from other workers. And jobs may be affected—by increasing employment for people involved in the new technology, decreasing employment for others involved in the old technology, and changing the nature of the work people must do in their jobs.
It is not only large technologies—nuclear reactors or agriculture—that are prone to side effects, but also the small, everyday ones. The effects of ordinary technologies may be individually small but collectively significant. Refrigerators, for example, have had a predictably favorable impact on diet and on food distribution systems. Because there are so many refrigerators, however, the tiny leakage of a gas used in their cooling systems may have substantial adverse effects on the earth's atmosphere.
Some side effects are unexpected because of a lack of interest or resources to predict them. But many are not predictable even in principle because of the sheer complexity of technological systems and the inventiveness of people in finding new applications. Some unexpected side effects may turn out to be ethically, aesthetically, or economically unacceptable to a substantial fraction of the population, resulting in conflict between groups in the community. To minimize such side effects, planners are turning to systematic risk analysis. For example, many communities require by law that environmental impact studies be made before they will consider giving approval for the introduction of a new hospital, factory, highway, waste-disposal system, shopping mall, or other structure.
Risk analysis, however, can be complicated. Because the risk associated with a particular course of action can never be reduced to zero, acceptability may have to be determined by comparison to the risks of alternative courses of action, or to other, more familiar risks. People's psychological reactions to risk do not necessarily match straightforward mathematical models of benefits and costs. People tend to perceive a risk as higher if they have no control over it (smog versus smoking) or if the bad events tend to come in dreadful peaks (many deaths at once in an airplane crash versus only a few at a time in car crashes). Personal interpretation of risks can be strongly influenced by how the risk is stated—for example, comparing the probability of dying versus the probability of surviving, the dreaded risks versus the readily acceptable risks, the total costs versus the costs per person per day, or the actual number of people affected versus the proportion of affected people.
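The framing effects described above can be made concrete with a small numerical sketch. The hazard and figures below are invented for illustration only; the point is that one underlying rate supports several very different-sounding statements.

```python
# Invented numbers: a hypothetical hazard causing 300 deaths per year
# in a population of one million.
deaths = 300
population = 1_000_000

p_dying = deaths / population

# The same fact, stated four ways:
print(f"{p_dying:.2%} chance of dying")          # sounds alarming
print(f"{1 - p_dying:.2%} chance of surviving")  # sounds reassuring
print(f"{deaths} people affected per year")      # absolute number
print(f"{p_dying:.4f} as a proportion")          # proportion of the population
```

Each line reports the identical risk, yet readers tend to react differently to "0.03% chance of dying" than to "99.97% chance of surviving," which is the interpretation effect the paragraph describes.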
All Technological Systems Can Fail
Most modern technological systems, from transistor radios to airliners, have been engineered and produced to be remarkably reliable. Failure is rare enough to be surprising. Yet the larger and more complex a system is, the more ways there are in which it can go wrong—and the more widespread the possible effects of failure. A system or device may fail for different reasons: because one part fails, because one part is not well matched to another, or because the design of the system is not adequate for all the conditions under which it is used. One hedge against failure is overdesign—for example, making something stronger or bigger than is likely to be necessary. Another hedge is redundancy—building in one backup system or more to take over in case the primary one fails.
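The value of redundancy can be sketched arithmetically. Under the simplifying (and often optimistic) assumption that backup units fail independently, a system with n identical units fails only when all n fail, so its failure probability is the single-unit probability raised to the nth power. The function name and numbers below are illustrative, not taken from the text.

```python
def system_failure_probability(unit_failure_prob: float, num_units: int) -> float:
    """Probability that the whole system fails, assuming it fails only
    when every unit (primary plus backups) fails, and that units fail
    independently of one another."""
    return unit_failure_prob ** num_units

# A single component that fails 1 time in 100:
print(system_failure_probability(0.01, 1))  # 0.01, or 1 in 100

# The same component with one independent backup:
print(system_failure_probability(0.01, 2))  # about 0.0001, or 1 in 10,000
```

The sketch also hints at why redundancy is not a cure-all: if the backups share a common cause of failure (the same power supply, the same design flaw), the independence assumption breaks down and the real failure probability is higher than this formula suggests.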
If failure of a system would have very costly consequences, the system may be designed so that its most likely way of failing would do the least harm. Examples of such "fail-safe" designs are bombs that cannot explode when the fuse malfunctions; automobile windows that shatter into blunt, connected chunks rather than into sharp, flying fragments; and a legal system in which uncertainty leads to acquittal rather than conviction. Other means of reducing the likelihood of failure include improving the design by collecting more data, accommodating more variables, building more realistic working models, running computer simulations of the design longer, imposing tighter quality control, and building in controls to sense and correct problems as they develop.
All of the means of preventing or minimizing failure are likely to increase cost. But no matter what precautions are taken or resources invested, risk of technological failure can never be reduced to zero. Analysis of risk, therefore, involves estimating a probability of occurrence for every undesirable outcome that can be foreseen—and also estimating a measure of the harm that would be done if it did occur. The expected importance of each risk is then estimated by combining its probability and its measure of harm. The relative risk of different designs can then be compared in terms of the combined probable harm resulting from each.
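The combination of probability and harm described above can be sketched as a simple expected-value calculation. The outcomes and numbers below are invented for illustration; real risk analyses involve far harder problems of estimating both quantities.

```python
def expected_harm(outcomes):
    """Combine each foreseeable undesirable outcome's estimated
    probability with its estimated measure of harm, summed over all
    outcomes, as a single figure for comparing designs.

    outcomes: list of (probability, harm) pairs for one design.
    """
    return sum(p * harm for p, harm in outcomes)

# Hypothetical design A: a rare severe failure plus a common minor one.
design_a = [(0.001, 1000), (0.05, 10)]

# Hypothetical design B: a rarer but worse failure plus a frequent nuisance.
design_b = [(0.0001, 5000), (0.2, 5)]

print(expected_harm(design_a))  # about 1.5
print(expected_harm(design_b))  # about 1.5
```

Here the two designs come out roughly equal on combined probable harm, even though their failure profiles differ sharply. As the earlier discussion of risk perception suggests, people may still strongly prefer one over the other, for instance the design whose harm does not arrive in dreadful peaks.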