If you exchanged text messages with both a human and a machine, would you know which is which?
You may think you would. But with computers getting increasingly sophisticated, are we really that far from creating a machine intelligent enough that it can consistently fool you into thinking it is a he or a she?
In 1950, the British mathematician Alan Turing proposed one of his most influential ideas, the “imitation game”, known today as the Turing test. While various versions appeared over the years, the essence remained the same. An “interrogator” (human), who communicates by text with two players, a human and a machine, is challenged to identify which is which. The machine wins the game if the interrogator guesses incorrectly.
The question Turing considered initially was that of “machine thought”: can machines think? And if so, how can we tell that they are thinking, that they have some sort of intelligence? But rather than meandering into philosophical thought experiments, Turing proposed his test as a practical way of evaluating the intelligence of a machine. “If a machine acts as intelligently as a human being, then it is as intelligent as a human being,” he stated.
As an exceptional scientist who made great progress in computing, Turing believed that it wouldn’t take more than a few decades to program a machine to “think”. He predicted that in 50 years, “it will be possible to program computers (…) to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
Brilliant as he was, Turing was wrong. By the year 2000, no interrogator had yet been fooled by a machine after five minutes of instant messaging. And to date, no computer has yet been able to convincingly pass the Turing test. But machines are getting increasingly better at the imitation game.
Every year, “chatterbots” or “chatbots” — computer programs that attempt to simulate an intelligent conversation — are put to the test in the Loebner Prize competition. The contest, designed to implement a modern variant of the Turing test, awards a cash prize and a bronze medal to the “Most Human Computer”. The silver medal, which has never been awarded, is reserved for the machine that passes the test. (The gold medal awardee must win an even more sophisticated version of the imitation game: it “must respond indistinguishably from a human when presented with text, audio and video input,” according to the official rules.)
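To give a flavor of how the simplest chatterbots work, here is a minimal ELIZA-style sketch. The rules and replies below are invented for illustration — they are not taken from any competition entry — but the mechanism (match the input against patterns, echo fragments back) is roughly how early conversation programs sustained a dialogue without any real understanding.

```python
import re

# Hypothetical pattern/response rules for illustration only.
# Each rule pairs a regex with a reply template; captured text
# from the user's message is echoed back into the reply.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r".*\?$"), "What do you think?"),
]
DEFAULT_REPLY = "Tell me more."

def respond(message: str) -> str:
    """Return a canned reply based on the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.match(message.strip())
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I am tired"))      # echoes the captured fragment back
print(respond("Is it raining?"))  # deflects questions with a question
```

A handful of such rules can keep a short exchange going, which is precisely why five-minute conversations set such a low bar for the imitation game.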
In the 2008 edition of the Loebner Prize competition, the variant of the Turing test implemented was similar to the original game. Each interrogator had text-based parallel conversations with two entities for five minutes, then had to guess which of the two was the machine and which was the human. Incredibly, a chatterbot named Elbot, which took home the bronze medal, managed to fool three of the twelve interrogators — one short of Turing’s 30% mark — into thinking it was the human.
The results from the 2010 competition were even more remarkable. As in 2008, the interrogators compared text-based chats of humans and machines; but this time, the conversations lasted five times longer than previously. You would think that no machine would be able to chat like a human for twenty-five minutes. Yet a chatterbot named Suzette was able to do just that — it fooled one out of four interrogators, also one shy of passing the Turing test according to the 2010 rules.
(The rules of the game change from year to year because Turing left many of its key aspects unspecified. After his initial proposal, he went on to present two more versions of the test, distinct from the original. Details such as the duration or sophistication of the interrogation can therefore be specified at will. The ultimate version seems to be the one stated in a bet between two entrepreneurs, Mitchell Kapor, the predictor, and Ray Kurzweil, the challenger, that by 2029 no computer will have passed the Turing test.)
The 2008 and 2010 variants of the test involved parallel, paired conversations, which raises a question. Was it the machine that was intelligent enough to trick the interrogator into thinking it was a person, or was it the human who was so simpleminded as to be unable to show his or her humanness? Perhaps the machines aren’t getting better at chatting like humans; we are getting worse.
Aside from awarding the “Most Human Computer”, the Loebner Prize competition also gives a prize to the “Most Human Human”, reserved for the person who leaves the interrogators in the least doubt that he or she is in fact human. In 2009, a year when no machine managed to fool the interrogators, this award went to author Brian Christian.
In a recent Guardian article, apropos of his new book, Christian states that “Turing proposed his test as a way to measure the progress of technology, but it just as easily presents us with a way to measure our own.” He goes on to cite philosopher John Lucas, who argues that if computers win the imitation game it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.”
Christian did not take the Turing test simply for the sake of it; he set out to win the “Most Human Human” award. The months preceding the competition were ones “of preparation, of interviews and rumination and research” (http://www.guardian.co.uk/technology/2011/apr/30/computers-v-humans-loebner-artificial-intelligence?INTCMP=SRCH). As described in the NY Times review of his book (http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian-christian.html), Christian had “to figure out not just why Elbot won [in 2008], but why humanity lost.”
It might be unsettling to think that, to best show his humanness, he had to seriously research both modern human communication and machine-human dialogues. But Christian draws an optimistic conclusion from his odyssey. We, “the most adaptive, flexible, innovative and quick-learning species on the planet” (http://www.guardian.co.uk/technology/2011/apr/30/computers-v-humans-loebner-artificial-intelligence?INTCMP=SRCH), are able to gain the skills needed to outdo machines if we so wish.
Still, it is ironic that by learning how computers communicate we can, by doing otherwise, communicate better as humans. Intelligent or not, the machines have the means to teach us what humanity is.