Posts Tagged ‘Alan Turing’
September 28, 2016 | by Dan Piepenbring
- Mike Davis was an artist, and the irate company-wide memorandum was his canvas. Few in the history of humankind have recognized the savage beauty of this lowliest of media. But Davis—the erstwhile head of Tiger Oil Company, and now dead at eighty-five—shattered the limits of the form with routine ease, showing us just how big an asshole one man could be to his employees. Consider his memos a spin-off of the Theater of Cruelty: “ ‘There will be no more birthday celebrations, birthday cakes, levity or celebrations of any kind within the office,’ the boss wrote on Feb. 8, 1978. ‘This is a business office. If you have to celebrate, do it after office hours on your own time.’ … ‘Do not speak to me when you see me,’ the man had ordered in a memo the month before. ‘If I want to speak to you, I will do so. I want to save my throat. I don’t want to ruin it by saying hello to all of you.’ ”
- It’s hard enough to get a human being to pay to read your book. Now robots are refusing to pony up, too. Google has just “fed” some eleven thousand books to its artificial intelligence, hoping to teach it how to talk like a real boy. But even though it’s rolling in dough, Google didn’t pay any of the authors of these books, Richard Lea writes: “After feeding these books into a neural network, the system was able to generate fluent, natural-sounding sentences. According to a Google spokesman—who didn’t want to be named—products such as the Google app will be ‘much more useful if they can capture the nuance of language better’ … ‘The research in question uses these novels for the exact purpose intended by their authors—to be read,’ [Authors Guild executive director Mary Rasenberger] argues. ‘It shouldn’t matter whether it’s a machine or a human doing the copying and reading, especially when behind the machine stands a multibillion dollar corporation which has time and again bent over backwards devising ways to monetize creative content without compensating the creators of that content.’ ”
June 17, 2014 | by Brian Christian
Living with the Turing test.
As of last week, the Turing test has—allegedly—been passed. In 1950, Alan Turing famously predicted that in the early twenty-first century, computer programs capable of sending and receiving text messages would be able to fool human judges into mistaking them for humans 30 percent of the time, and that we would come to “speak of machines thinking without expecting to be contradicted.” Two weekends ago, at a Turing test competition held at the Royal Society in London, a piece of so-called “chatbot” software called “Eugene Goostman” crossed that mark, fooling ten of the thirty human judges who spoke with it.
The official press release described this as a “milestone in computing history”—a “historic event.” Was it? We should not, of course, take a press release’s word for it. (Said release describes the winning chatbot program as a “supercomputer,” a head-scratching conflation of hardware with software.)
The release says this is the first time a computer program has scored above 30 percent in an “unrestricted” Turing test. This claim is plausible. We don’t have access to the transcripts of these conversations—the organizers declined my request—but we know that the persona adopted by the winning chatbot (“Eugene Goostman”) was that of a thirteen-year-old, nonnative-speaking foreigner. The Turing tests of the 1990s were restricted by topic, with the judges’ questions limited to a single domain. Here, those constraints have been replaced by restricted fluency, both linguistic and cultural. From correspondence with the contest organizers, I learned that the human judges themselves were chosen to include children and nonnative speakers. So we might fairly argue about what truly counts as a Turing test. These questions are deeper than they seem.