Artificial intelligence vs. human stupidity

If you exchanged text messages with both a human and a machine, would you know which is which?

You may think you would. But with computers getting increasingly sophisticated, are we really that far from creating a machine so intelligent that it can consistently fool you into thinking it is a he or a she?

In 1950, the British mathematician Alan Turing proposed one of his most influential ideas, the “imitation game”, known today as the Turing test. While various versions have appeared over the years, the essence remains the same. An “interrogator” (human), who communicates by text with two players, a human and a machine, is challenged to identify which is which. The machine wins the game if the interrogator guesses incorrectly.

Standard Turing test: if interrogator C identifies A as human and B as machine, the computer passes the test. Source: Wikimedia Commons.
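
To make the mechanics concrete, here is a minimal sketch of the game’s structure in Python. Everything in it is a hypothetical stand-in used for illustration; the interrogator and player objects are invented, not any real program:

    import random

    def imitation_game(interrogator, human, machine, n_questions=5):
        # One round of the standard Turing test. The interrogator and the
        # two players are hypothetical callables used only to illustrate
        # the protocol Turing described; nothing here is a real AI.
        labels = ["A", "B"]
        random.shuffle(labels)                         # hide who is who
        players = dict(zip(labels, [human, machine]))

        transcript = {label: [] for label in players}
        for _ in range(n_questions):
            for label in players:
                question = interrogator.ask(label)     # text sent to A or B
                answer = players[label](question)      # text reply
                transcript[label].append((question, answer))

        # The interrogator names the label it believes is the machine;
        # the machine wins the game if that guess is wrong.
        guess = interrogator.guess_machine(transcript)
        return players[guess] is not machine

The machine “wins” exactly when the interrogator’s final guess is wrong, which is all the test asks.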

The question Turing initially considered was that of “machine thought”: can machines think? And if so, how can we tell that they are thinking, that they have some sort of intelligence? But rather than meandering into philosophical thought experiments, Turing proposed his test as a practical way of evaluating the intelligence of a machine. The idea, as it is often summarized: if a machine acts as intelligently as a human being, then it is as intelligent as a human being.

As an exceptional scientist who made great progress in computing, Turing believed that it wouldn’t take more than a few decades to program a machine to “think”. He predicted that in 50 years, “it will be possible to program computers (…) to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”

Brilliant as he was, Turing was wrong. By the year 2000, no machine had yet fooled an interrogator after five minutes of instant messaging. And to date, no computer has been able to convincingly pass the Turing test. But machines are getting steadily better at the imitation game.

Every year, “chatterbots” or “chatbots” — computer programs that attempt to simulate an intelligent conversation — are put to the test in the Loebner Prize competition. The contest, designed to implement a modern variant of the Turing test, awards a cash prize and a bronze medal to the “Most Human Computer”. The silver medal, which has never been awarded, is reserved for the first machine that passes the test. (The gold medal awardee must win an even more sophisticated version of the imitation game: it “must respond indistinguishably from a human when presented with text, audio and video input,” according to the official rules.)
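
Most chatterbots of this era work not by understanding but by pattern matching: scan the input for a familiar template and echo pieces of it back. A toy sketch of the technique, loosely in the spirit of ELIZA (the rules below are invented for illustration):

    import random
    import re

    # Invented rules: (regex to look for, response templates to pick from).
    RULES = [
        (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bI am (.+)",   ["What makes you say you are {0}?"]),
        (r"\byou\b",       ["We were talking about you, not me."]),
    ]

    def reply(message):
        # Answer with a canned response to the first matching pattern.
        for pattern, templates in RULES:
            match = re.search(pattern, message, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        # Generic fallback keeps the conversation moving when nothing matches.
        return random.choice(["Tell me more.", "Why do you say that?"])

    print(reply("I feel lonely"))  # e.g. "Why do you feel lonely?"

Real competitors are far more elaborate, but rule-based reflection of the interrogator’s own words is at the core of many of them.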

In the 2008 version of the Loebner Prize competition, the variant of the Turing test implemented was similar to the original game. Each interrogator had text-based parallel conversations with two entities for five minutes. They then had to guess which of the two was the machine and which was the human. Incredibly, a chatterbot named Elbot, which took home the bronze medal, managed to fool three of the twelve interrogators — one short of Turing’s 30% mark — into thinking it was the human.

The results from the 2010 competition were even more remarkable. As in 2008, the interrogators compared text-based chats of humans and machines; but this time, the conversations lasted five times longer than previously. You would think that no machine would be able to chat like a human for twenty-five minutes. Yet, a chatterbot named Suzette was able to do just that — it fooled one out of four interrogators, also one shy of passing the Turing test according to the 2010 rules.
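
The arithmetic behind “one short” and “one shy” is easy to check (figures taken from the contests above; the 2010 line reflects that year’s smaller panel of interrogators):

    # Interrogators fooled versus Turing's 30% benchmark.
    for name, fooled, total in [("Elbot, 2008", 3, 12), ("Suzette, 2010", 1, 4)]:
        print(f"{name}: {fooled}/{total} = {fooled / total:.0%} fooled; "
              f"one more would give {(fooled + 1) / total:.0%}")
    # Elbot, 2008: 3/12 = 25% fooled; one more would give 33%
    # Suzette, 2010: 1/4 = 25% fooled; one more would give 50%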

(The rules of the game change from year to year because Turing did not specify many of its key aspects. After his initial proposal, he went on to present two more versions of the test, distinct from the original, so details such as the duration or sophistication of the interrogation can be specified at will. The ultimate version may be the one stated in a long bet between two entrepreneurs, Mitchell Kapor, the predictor, and Ray Kurzweil, the challenger: that by 2029 no computer will have passed the Turing test.)

The 2008 and 2010 variants of the test involved parallel-paired conversations, so a fooled interrogator could mean one of two things. Was it the machine that was intelligent enough to trick the interrogator into thinking it was a person, or was it the human who was so simpleminded as to be unable to show his or her humanness? Perhaps the machines aren’t getting better at chatting like humans; perhaps we are getting worse.

Aside from awarding the “Most Human Computer”, the Loebner Prize competition also gives a prize to the “Most Human Human”, reserved for the person who leaves the interrogators in the least doubt that he or she is in fact human. In 2009, a year when no machine managed to fool the interrogators, this award went to author Brian Christian.

In a recent Guardian article, apropos of his new book, he states that “Turing proposed his test as a way to measure the progress of technology, but it just as easily presents us with a way to measure our own.” Christian continues by citing philosopher John Lucas, who argues that if computers win the imitation game it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.”

Christian did not take the Turing test simply for the sake of it; he set out to win the “Most Human Human” award. The months preceding the competition were ones “of preparation, of interviews and rumination and research” (http://www.guardian.co.uk/technology/2011/apr/30/computers-v-humans-loebner-artificial-intelligence?INTCMP=SRCH). As described in the NY Times review of his book (http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian-christian.html), Christian had “to figure out not just why Elbot won [in 2008], but why humanity lost.”

It might be unsettling to think that, to best show his humanness, he had to seriously research both modern human communication and machine-human dialogues. But Christian draws an optimistic conclusion from his odyssey. We, “the most adaptive, flexible, innovative and quick-learning species on the planet” (http://www.guardian.co.uk/technology/2011/apr/30/computers-v-humans-loebner-artificial-intelligence?INTCMP=SRCH), are able to gain the skills needed to outdo machines if we so wish.

Still, it is ironic that by learning how computers communicate we can, by doing otherwise, communicate better as humans. Intelligent or not, the machines have the means to teach us what humanity is.

7 thoughts on “Artificial intelligence vs. human stupidity”

  1. Laura Wheeler

    Hi Barbara,
    Thanks for a thoughtful post. I am actually quite surprised Turing was wrong and that to date no interrogator has been fooled by a machine. Is this the same for social networking messages? Such as tweets? I know that recently an experiment was carried out on Twitter to see if a bot could gain more followers in one day than a real-life person… (must find some links for you.)
    There have also been several bot vs. human experiments in video games, where even skilled bots cannot beat a skilled human player.
    For now I think it is comforting that computers are not (yet) more intelligent than us, although it is only a waiting game…

  2. Barbara Ferreira

    Yes, I read about that Twitter thing too, but can’t remember where.
    As for Turing’s prediction, what people say is that perhaps he would have been right if he hadn’t died (by his own hand) a few years later. He was so brilliant that the field of artificial intelligence would probably be more advanced now if he had kept making contributions to it.
    The other thing about the test is that, like I wrote in the post, it can be altered because Turing didn’t specify some of its key parameters. For example, the 2009 and 2010 versions of the Loebner Prize competition were actually more challenging for the machines than the 2008 one.
    I look forward to seeing who wins the long bet, though.

    Like

  3. Lou Woodley

    Nice post, Barbara. The study on Twitter bots you and Laura are discussing was covered here.
    I think the credibility of bots is always going to be an entertaining/intriguing subject. I remember spending (wasting?) an evening trying to outsmart the IKEA online assistant, only to discover that she had a credible answer to all of my comments, from technical product questions to queries about her uniform!

  4. Laura Wheeler

    Thanks Lou for the link. (Haha to your IKEA game…!!) 
    So it is clear from their experiment that bots were able to heavily shape and change the structure of the network they were infiltrating. Their success, measured by the human responses and new followers they gained on Twitter, suggests that bots can manipulate social networks. If they were able to step this up a level, who knows what kind of influence they could have…
    Like you, Barbara, I know where my bet is being placed…
