Feedback

Benchmark Ethics: Is Technology (ever) Evil?

In the October 19, 1999, issue of InfoWorld, editor Sandy Reed discussed our Institute for Business, Technology, and Ethics. She did a nice job and we appreciate both the encouragement and the publicity.

However, she began her column by saying that Al Erisman comes from the side that says that “technology is good” and that I come from the side that says that “technology is evil.” There is an element of truth to this way of representing Al and me and how we got into this project together; but it is misleading if we leave it at that.

I can’t imagine ever saying simply that “technology is evil.” What I oppose is “technopoly” or “technologism” — the unquestioned dominance and centrality of technology in human life. One way to put it is that technology is great in the tool box of life, but terrible on the throne of life.

Ethics is the study of matters of good and evil (or “bad”) and right and wrong. What qualifies something as ethically or morally “bad” or “evil”? That it is actually or potentially harmful to human life. Even if your ethics and morality are based on faith and religion, the reason something is morally prohibited (by God or religion) is that it is harmful to human life.

Is technology (ever) actually or potentially harmful to human life? That is a question that interests me. Some of you will immediately think, “If you put the question that way, then almost anything is potentially bad or evil!” Exactly! (The flip side is also important: almost anything is potentially beneficial to human life in some way, and thus potentially “good.”) The moral life consists of wrestling with such questions, not of mindless conformity to simplistic rules.

We are used to thinking about the ethics of technology in terms inherited from a simpler era, when technology was a matter of simple “tools.” Technology, we think, is like a hammer. Its moral evaluation depends entirely on how it is used by people. Pounding nails to build a house is good; hitting your opponent over the head is bad. Morality is a matter of the intentions and actions of individuals; the instruments are morally neutral. Many today use that same logic to assess computers, genetic engineering, or any other technology.

This approach was a little naive even in the case of a hammer (someone once said, “to a man with a hammer, everything looks like a nail”; that is, the design of a hammer already embeds intentions and possible beneficial and harmful uses). But it is radically naive in the case of more complex technologies like automobiles or computers and networks.

Automobiles and computers are not merely “neutral” tools whose effects depend on whether their users are good or bad people. They bring both good and bad impacts into people’s lives. Technology is not good or evil; it is good and evil. It is beneficial to human life in certain respects and harmful in others. Technology does not come into empty spaces in human life (there are none); it comes into spaces already occupied by other things. Technologies replace things: sometimes an earlier, less desirable technology, sometimes conversation.

The question is: what is the cost of this new technology? What will it replace? What will it require in the future? What positive and negative uses will its presence incline and empower me to pursue? What are its “side effects”?

The primary values characteristic of technological thinking are “change,” “power,” “speed,” “rationality,” “measurability,” and “efficiency.” Judged only by these internal values, “good” technologies are those that change our world by increasing speed, power, and efficiency in ways that are rational and measurable (quantifiable).

My argument is, first, that all technological developments “bite back” and have hidden costs to which we must attend, and, second, that the core, internal values of technology are inadequate as a general philosophy of life (treating them as such is what I would call “technologism”).

So technology is not simply evil, but it is a terrible mistake if we give all technology a free “pass” on ethics. Sometimes such critical questioning will lead us to create compensations and defenses around certain technologies; other times it will send us back to the drawing board to develop better technologies with fewer negative features; but sometimes, I believe, it may lead us to say “no” to technology and choose a non-technological, inefficient, weak, slow, irrational, immeasurable, unchanged mode of existence and relationship. And that, on occasion, is good.

David W. Gill was co-founder of IBTE and author of Benchmark Ethics, a regular column in the first 32 issues of Ethix. After eight years of writing, speaking, teaching, and consulting in the Bay Area of California, he joined the faculty of Gordon-Conwell Theological Seminary (South Hamilton, Mass.) in 2010, where he is also Director of the Mockler Center for Faith and Ethics in the Workplace.
