Posts Tagged ‘artificial intelligence’
June 17, 2014 | by Brian Christian
Living with the Turing test.
As of last week, the Turing test has—allegedly—been passed. In 1950, Alan Turing famously predicted that in the early twenty-first century, computer programs capable of sending and receiving text messages would be able to fool human judges into mistaking them for humans 30 percent of the time, and that we would come to “speak of machines thinking without expecting to be contradicted.” Two weekends ago, at a Turing test competition held at the Royal Society in London, a piece of so-called “chatbot” software called “Eugene Goostman” crossed that mark, fooling ten of the thirty human judges who spoke with it.
The official press release described this as a “milestone in computing history”—a “historic event.” Was it? We should not, of course, take a press release’s word for it. (Said release describes the winning chatbot program as a “supercomputer,” a head-scratching conflation of hardware with software.)
The release says this is the first time a computer program has scored above 30 percent in an “unrestricted” Turing test. This appears to be plausibly true. We don’t have access to the transcripts of these conversations—the organizers declined my request—but we know that the persona adopted by the winning chatbot (“Eugene Goostman”) was that of a thirteen-year-old, non-native-speaking foreigner. The Turing tests of the 1990s were restricted by topic, with the judge’s questions limited to a single domain. Here, the place of those constraints has been taken by restricted fluency, both linguistic and cultural. From correspondence with the contest organizers, I learned that the human judges were themselves chosen to include children and nonnative speakers. So we might fairly argue about what, for a Turing test, truly counts. These questions are deeper than they seem.
March 28, 2012 | by Lincoln Michel
In September 2006, the World Chess Championship devolved into a debate about bathrooms. One champion, Veselin Topalov, accused the other, Vladimir Kramnik, of excessive urination, hinting that Kramnik was retreating to the unmonitored bathroom to receive smuggled computer assistance. (Kramnik responded that he merely drank a lot of water.) Kramnik was eventually declared the victor, but to many, the episode displayed the sad state into which the grand game had fallen since Garry Kasparov lost to IBM’s Deep Blue in 1997. Back then, Kasparov, bitter about the loss, accused IBM of cheating with covert human intervention, saying he had seen an uncanny human intelligence in the computer’s moves.
Even that incident, though, was not the first time the line between man and machine had been blurred in the game. The first machine to awe humanity with its chess mastery was the eighteenth-century life-size automaton known as the Turk. Constructed in 1770 by Wolfgang von Kempelen to impress Empress Maria Theresa, the Turk appeared as a wooden Oriental sorcerer seated at a large cabinet. Before playing commenced, Kempelen would open the cabinet doors to reveal the clockwork machinery that controlled the Turk. The audience could see that there was nothing else inside. After the doors were closed and a challenger seated, the Turk would come eerily to life. He would move the pieces robotically, but shake his head or tap his hand in human displays of annoyance or pride. He also nearly always won.
The Turk became a spectacular attraction, thrilling, baffling, and terrifying viewers across Europe and America for decades.
March 14, 2011 | by Eric Chinski
Brian Christian, who studied computer science, philosophy, and poetry, has just published his first book, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. Recently, he answered my questions about the Turing test, online romance, and conversation fillers.
Your new book has an odd but intriguing title: The Most Human Human. Can you explain what it means?
The Most Human Human is an award given out each year at the Loebner Prize, the artificial intelligence (AI) community’s most controversial and anticipated annual competition. The event is what’s called a Turing test, in which a panel of judges conducts a series of five-minute chat conversations over a computer, some with real people and some with computer programs pretending to be people by mimicking human responses. The catch, of course, is that the judges don’t know at the start who’s who, and it’s their job in five minutes of conversation to try to find out.
Each year, the program that does the best job of persuading the judges that it’s human wins the Most Human Computer award and a small research grant for its programmers. But there’s also an award, strangely enough, for the human who does the best job of swaying the judges: the Most Human Human award.
British mathematician Alan Turing famously predicted in 1950 that computers would be passing the Turing test—that is, consistently fooling judges at least 30 percent of the time and, as a result, being generally considered intelligent in the human sense—by the year 2000. This prediction didn’t come to pass, but I was riveted to read that, in 2008, the computers came up shy of that mark by just a single vote. I decided to call up the test’s organizers and get involved in the 2009 contest as one of the human “confederates”—which meant I was both a part of the human “defense,” trying to prevent the machines from passing the test, and also vying with my fellow confederates for that intriguing Most Human Human award. The book tells the story of my attempt to prepare, as well as I could, for that role: What exactly does it mean to “act as human as possible” in a Turing test? And what does it mean in life?