This week I’m re-reading Computer Power and Human Reason: From Judgment to Calculation (1976) by the late MIT computer scientist Joseph Weizenbaum. Weizenbaum worked in artificial intelligence and is best known for creating ELIZA, the first widely known chatterbot. ELIZA was a simple piece of artificial conversation software. It was designed to pass Alan Turing’s famous test for distinguishing computers from human beings, and by doing so to demonstrate that the “Turing Test” was a bad idea. The book is intended as a lively contribution to a debate about computing and artificial intelligence that has since passed us by: Weizenbaum seriously debates whether or not computers will “serve as psychotherapists” in the future. This feels quaint and strange.
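ELIZA’s conversational trick was keyword pattern matching plus pronoun “reflection”: it transformed the user’s own words into a question and handed them back. The sketch below is a minimal illustration in that spirit — the rules and responses here are invented for this example, not Weizenbaum’s actual DOCTOR script, which was far larger and ranked keywords by priority.

```python
import re

# Hypothetical, simplified rules in the spirit of ELIZA's DOCTOR script.
# Each rule pairs a regex with a response template; groups captured by
# the regex are "reflected" and substituted into the template.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# First-person words are swapped into second person so the response
# reads as a reply rather than an echo of the input.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # Fallback when no keyword matches, as ELIZA also had.
    return "Please go on."

print(respond("I need a vacation"))  # → Why do you need a vacation?
```

The effect Weizenbaum found so troubling is visible even at this scale: the program understands nothing, yet the reflected phrasing invites the user to project understanding onto it.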
Nonetheless, he was deeply interested in the ideas of “intelligence,” “humanity,” and “the computer,” and some of his writing about the fundamental concepts surrounding computers holds up surprisingly well. He delved into what it means when a computer program is said to have “an error” or to “intend” something, the different avenues for handling complexity in computational systems, and the harmful way that computers are cloaked in scientism. He saw the problem of translating a representation of human action into machine code as fundamentally a philosophical problem of great significance, one that reflects back on what it means to act, or to know.
It is interesting to me that later in his life Weizenbaum sought intellectual refuge in the humanities. He wrote that computer scientists were subject to a “temptation to arrogance” because they worked in a domain that was less ambiguous than many others, and he felt this led computer scientists to disparage ambiguity. But for Weizenbaum, when something was made less ambiguous it became, like a “computer language, less expressive of reality.”
-Christian Sandvig, 2014-15 Steelcase Research Professor and associate professor of information; 5/15/2015