To the limits of thought: human and artificial intelligence

September 18, 2017

An extract from the book Artificial Intelligence: What Everyone Needs to Know by Jerry Kaplan (© Oxford University Press 2016. Reproduced here by kind permission). The Italian translation of the volume will be published in October 2017 by LUISS University Press under the title Intelligenza artificiale. Uomini, macchine e il futuro del lavoro.

Jerry Kaplan will be at LUISS on Tuesday, October 10th, to discuss artificial intelligence, machine learning, the technological singularity, and the future of work.

Can a computer “think”?

The noted English mathematician Alan Turing considered this question in a 1950 essay entitled “Computing Machinery and Intelligence.” In it, he proposes, essentially, to put the issue to a vote. Constructing what he calls the “imitation game,” he imagines an interrogator in a separate room, communicating with a man and a woman only through written communication (preferably typed), attempting to guess which interlocutor is the man and which is the woman. The man tries to fool the interrogator into thinking he is the woman, leaving the woman to proclaim her veracity (in vain, as Turing notes) in an attempt to help the interrogator make the correct identifications. Turing then invites the reader to imagine substituting a machine for the man, and a man for the woman. (The imitation game is now widely called the Turing Test.)

Leaving aside the remarkable psychological irony of this famously homosexual scientist tasking the man with convincing the interrogator that he is a woman, not to mention his placing the man in the role of deceiver and the woman as truth teller, he goes on to ask whether it’s plausible that the machine could ever win this game against a man. (That is, the machine is tasked with fooling the interrogator into thinking it is the man, while the man is telling the truth about who he is.) Contrary to the widely held belief that Turing was proposing an “entrance exam” to determine whether machines had come of age and become intelligent, he was actually speculating that our common use of the term think would eventually stretch sufficiently to be appropriately applied to certain machines or programs of adequate capability. His estimate of when this might occur was the end of the twentieth century, a remarkably accurate guess considering that we now routinely refer to computers as “thinking,” mostly when we are waiting impatiently for them to respond. In his words, “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Is Turing right? Is this question too meaningless to deserve discussion? (And thus, by implication, this discussion is a waste of time?) Obviously, it depends on what we mean by “think” […].

Some critics of AI, most notably John Searle, professor of philosophy at the University of California at Berkeley, rightfully observe that computers, by themselves, can’t “think” in this sense at all, since they don’t actually mean or do anything—at best, they manipulate symbols. We’re the ones associating their computations with the external world. But Searle goes further. He points out that even saying that computers are manipulating symbols is a stretch. Electrons may be floating around in circuits, but we are the ones interpreting this activity as symbol manipulation […].

Despite the ongoing efforts of generations of AI researchers to explain away Searle’s observations, in my opinion his basic point is right. Computer programs, taken by themselves, don’t really square with our commonsense intuition about what it means to think. They are “simply” carrying out logical, deterministic sequences of actions, no matter how complex, changing their internal configurations from one state to another. But here’s where we get into trouble: if you believe that our brains are little more than symbol manipulators composed of biological material, then you are naturally forced to conclude that your brain, by itself, can’t think either. Disconnect it from the outside world, and it would be doing just what a computer does. But that doesn’t square with our commonsense intuition that even if we sit in a dark, quiet room, deprived of all input and output, we can still sit there and think. We can’t have it both ways: if symbol manipulation is the basis of intelligence, either both people and machines can think (in principle, if not in practice today), or neither can.