The general layout of the IBM 701. Photo via the Computer History Museum
Sixty-one years ago, on January 7, 1954, a massive, terrifying IBM artificial intelligence—referred to in the press as a “giant brain,” a “robot brain,” and a “polyglot brainchild,” among other wide-eyed terms—translated more than sixty sentences from Russian into English. It was the first public demonstration of machine translation. And yes, the people were pleased.
The computer was an IBM 701, which was, according to its manufacturer, “the most versatile electronic ‘brain’ extant,” used sixteen hours a day for “nuclear physics, rocket trajectories, weather forecasting, and other mathematical wizardry.” But translating was an entirely different pursuit, and substantially more difficult: in fact, the computer knew only six grammatical rules, and its vocabulary comprised just 250 terms.
Working with Georgetown linguists, and with dozens of reporters watching at IBM’s New York headquarters, a woman “who didn’t understand a word of the language of the Soviets punched out the Russian messages on IBM cards.” (They used a Romanized version of Russian.) She began with sentences about chemistry, which can’t have thrilled the newsmen in attendance—how were they supposed to captivate readers with such examples as “The quality of coal is determined by calorie content” and “Starch is produced by mechanical methods from potatoes”?
But then she moved on to more accessible, “general interest” fare, giving the machine what the Christian Science Monitor called “a real workout.” It translated “Vladyimir yavlyayetsya na rabotu pozdno utrom” into “Vladimir appears for work late in the morning.” From “Mi pyeryedayem mislyi posryedstvom ryechi” it derived “We transmit thoughts by means of speech.” You know, normal everyday sentences like you or I might use.
A flowchart of part of the IBM 701’s dictionary-lookup procedures.
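To get a sense of how modest the machinery really was, here is a minimal sketch, in Python, of the dictionary-lookup idea the flowchart depicts: each Romanized Russian word is simply looked up in a table of English glosses. The vocabulary below is invented from the demo sentences for illustration, and the system’s six rules (which handled word reordering and the choice among multiple English equivalents) are omitted; this is not the actual 1954 program.

```python
# A toy illustration of 1954-style dictionary-lookup translation.
# The vocabulary below is invented for demonstration; it is not the
# actual Georgetown-IBM table, and the real system's six rules for
# reordering and word choice are omitted.

# Romanized Russian -> English glosses (an entry may expand to several
# English words, as the real system's dictionary entries could)
DICTIONARY = {
    "mi": "we",
    "pyeryedayem": "transmit",
    "mislyi": "thoughts",
    "posryedstvom": "by means of",
    "ryechi": "speech",
    "vladyimir": "vladimir",
    "yavlyayetsya": "appears",
    "na": "for",
    "rabotu": "work",
    "pozdno": "late",
    "utrom": "in the morning",
}

def translate(sentence: str) -> str:
    """Translate word by word via dictionary lookup."""
    glosses = []
    for word in sentence.lower().split():
        # Unknown words pass through untranslated, flagged in brackets.
        glosses.append(DICTIONARY.get(word, f"[{word}]"))
    return " ".join(glosses).capitalize()

if __name__ == "__main__":
    print(translate("Mi pyeryedayem mislyi posryedstvom ryechi"))
    # -> We transmit thoughts by means of speech
    print(translate("Vladyimir yavlyayetsya na rabotu pozdno utrom"))
    # -> Vladimir appears for work late in the morning
```

Even granting the six rules, the real system was not much more elaborate than this: a 250-entry table, a few rearrangement codes, and sentences chosen in advance to stay safely inside its vocabulary.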
“The ‘brain’,” the Monitor’s correspondent wrote, “didn’t even strain its superlative versatility and flicked out its interpretation with a nonchalant attitude of assumed intellectual achievement.”
There was some subterfuge here—obviously six grammatical rules and 250 words do not a language make—and in retrospect, the experiment created a long-lasting misconception: the public came to believe that machine translation was much more advanced than it really was. The effortlessness of, say, Google Translate was still a glimmer in some engineer’s eye. Nevertheless, you get a sense, from the breathless newspaper coverage, that this was a watershed moment in futurology. Computers were so new at the time that they were almost always described in fanciful analogies to the human brain; those who knew about them at all tended to associate them with numerical work, not with language. To see a machine trafficking in human communication would have been astonishing, if not outright frightening.
If you’re curious, there’s an excellent research paper about the demonstration by John Hutchins, and IBM has archived its original press release online. “Electronic translation will begin with separate dictionaries for each technical area,” it predicted: “As experience with them grows, enough will be learned to permit accurate translation of our common everyday language, in which are such illogical and unpredictable words as ‘charley horse.’ ”
Dan Piepenbring is the web editor of The Paris Review.