Posts Tagged ‘computers’
January 9, 2015 | by Jason Z. Resnikoff
Watching the sixties and seventies through 2001 and Alien.
It was April 1968 and my father was sitting in a theater in Times Square watching 2001: A Space Odyssey, certain that what he was seeing wasn’t just a movie but the future. When it ended, he got up and walked out into Times Square, with its peep-show glitz and sleazy, flashing advertisements; he found the uptown subway beneath the yellow marquees for dirty movies like The Filthy 5; and through all of it, he thought that when humanity hurls itself into the depths of the cosmos, this is how we will do it. In the film’s iconic final shot, the space baby looks down at the planet to which it is no longer bound. Freedom, this shot says, is imminent.
My father was twenty-four then, and perhaps at his most world-historical: he was becoming an expert in computers. He’d worked for IBM in Poughkeepsie, New York, a corporate labyrinth of beige cubicles and epochal breakthroughs; a world of punch cards and reel-to-reel magnetic tape, where at least some of the employees were deadly serious about making sure to wear the company tie clip and then, once they were off duty, to switch to their own personal tie clips.
When 2001 premiered, he was working at Columbia University’s Computer Center, in the academic computing branch. I don’t think it’s unreasonable to say that the movie summed up everything my father was in April 1968. It became something of a talisman for him, a semisacred object invested with all the crazy hopefulness of his youth. For as long as I can remember, my father talked about 2001. He told me often of HAL, of the monolith of evolution, of how glorious the future would be. Of course, when I finally saw the movie, well after the actual year 2001, it bored me out of my mind. Too slow, too bizarre. Ah, my father told me, that’s because evolution is slow, evolution is bizarre. It wasn’t until much later that I started to understand the movie—and, maybe, to understand my father. Read More »
January 7, 2015 | by Dan Piepenbring
Sixty-one years ago, on January 7, 1954, a massive, terrifying IBM artificial intelligence—referred to in the press as a “giant brain,” a “robot brain,” and a “polyglot brainchild,” among other wide-eyed terms—translated more than sixty sentences from Russian into English. It was the first public demonstration of machine translation. And by all accounts, the audience was pleased.
The computer was an IBM 701, which was, according to its manufacturer, “the most versatile electronic ‘brain’ extant,” used sixteen hours a day for “nuclear physics, rocket trajectories, weather forecasting, and other mathematical wizardry.” But translating was an entirely different pursuit, and substantially more difficult: in fact, the computer knew only six grammatical rules, and its vocabulary comprised just 250 terms.
Working with Georgetown linguists, and with dozens from the media watching in IBM’s New York headquarters, a woman “who didn’t understand a word of the language of the Soviets punched out the Russian messages on IBM cards.” (They used a Romanized version of Russian.) She began with sentences about chemistry, which probably unnerved the newsmen in attendance—how were they supposed to captivate readers with such examples as “The quality of coal is determined by calorie content” and “Starch is produced by mechanical methods from potatoes”? Read More »
December 11, 2014 | by Dan Piepenbring
- Tim Parks was dismayed to find that his students were so enthralled by “the printed word and an aura of literariness” that they’d miss obvious absurdities in what they were reading. His advice? “Always read with a pen in your hands, not beside you on the table, but actually in your hand, ready, armed. And always make three or four comments on every page, at least one critical, even aggressive. Put a question mark by everything you find suspect. Underline anything you really appreciate. Feel free to write ‘splendid,’ but also, ‘I don’t believe a word of it.’ And even ‘bullshit.’ ”
- On a similar note, Oxonians are obsessed with finding marginalia in their library books: on Facebook, the Oxford University Marginalia group “now has two thousand five hundred and three members, making marginalia to Oxford something like what a cappella is to Princeton. ‘The Oxford libraries are still heavily used, and the curriculum remains relatively stable, so you have so many students reading the same texts’ … ‘The books are thrashed, basically.’ ”
- Not many people are managing to slog through literary best sellers, experts say: “A study has shown the most downloaded ebooks of the year were not necessarily ever finished by hopeful readers.” Just 44 percent of readers made it through The Goldfinch, and 28 percent got through Twelve Years a Slave.
- Crummy computer news, part one: they’re better at flirting than we are. “Women were okay, able to judge with 62 percent accuracy when a man was flirting with them. Men were worse, accurately guessing that a woman was flirting just 56 percent of the time. The Stanford guys’ flirtation-detection system, in comparison, was able to correctly judge flirting with 71 percent accuracy.”
- Crummy computer news, part two: all the seemingly horrendous dot-com ideas of the nineties were actually pretty decent. Remember Webvan? No? It wanted to use the Internet to deliver fresh groceries to your door—just as dozens of profitable companies are doing today.
July 22, 2014 | by Sadie Stein
For longer than I care to admit, I have been unable to scroll down on my computer. This is only the latest in a series of laptop-related inconveniences, but, given the nature of my employment, is something of a problem. If you manage to catch the scrolling bar to the far right of the screen, you can generally navigate okay, but if you relax your vigilance for a moment and move your cursor, that option is closed, and it is necessary to refresh the screen. At least, that’s the only way I know how to do it.
I have lived without video and Flash capacity for some five months now, and it has been a rich, full life, but this scrolling situation seems untenable. I am going to have to go to the Genius Bar, a prospect I dread.
It’s not that I mind the trip, the wait, or even the well-deserved condescension of the Geniuses. At least this time around, there is no varicolored crayon wax mysteriously covering my computer, leading me to mumble some absurd, half-formed lie that implied I either had small children or was a preschool teacher. I’m just terrified that they’re going to tell me the computer can’t be saved, that the scrolling is indicative of a more serious—terminal—illness. (From this you might imagine how conscientiously I deal with actual medical issues.) Read More »
June 17, 2014 | by Brian Christian
Living with the Turing test.
As of last week, the Turing test has—allegedly—been passed. In 1950, Alan Turing famously predicted that in the early twenty-first century, computer programs capable of sending and receiving text messages would be able to fool human judges into mistaking them for humans 30 percent of the time, and that we would come to “speak of machines thinking without expecting to be contradicted.” Two weekends ago, at a Turing test competition held at the Royal Society in London, a piece of so-called “chatbot” software called “Eugene Goostman” crossed that mark, fooling ten of the thirty human judges who spoke with it.
The official press release described this as a “milestone in computing history”—a “historic event.” Was it? We should not, of course, take a press release’s word for it. (Said release describes the winning chatbot program as a “supercomputer,” a head-scratching conflation of hardware with software.)
The release says this is the first time a computer program has scored above 30 percent in an “unrestricted” Turing test. This appears to be true. We don’t have access to the transcripts of these conversations—the organizers declined my request—but we know that the persona adopted by the winning chatbot (“Eugene Goostman”) was that of a thirteen-year-old, non-native-speaking foreigner. The Turing tests of the 1990s were restricted by topic, with the judge’s questions limited to a single domain. Here, the place of those constraints has been taken by restricted fluency, both linguistic and cultural. From correspondence with the contest organizers, I learned that the human judges were themselves chosen to include children and nonnative speakers. So we might fairly argue about what truly counts as a Turing test. These questions are deeper than they seem. Read More »
April 25, 2014 | by Dan Piepenbring
- Shakespeare: playwright, poet, armchair astronomer. “Peter Usher has a very elaborate theory about Hamlet, in which the play is seen as an allegory about competing cosmological worldviews … Claudius happens to have the same name as Claudius Ptolemy, the ancient Greek mathematician and astronomer whom we now associate most closely with the geocentric Ptolemaic worldview.”
- From the mideighties: Andy Warhol’s rediscovered computer art.
- New research by the University of California, San Diego’s Rayner Eyetracking Lab—nobody tracks eyes like the Rayner—suggests that speed-reading apps might rob you of your comprehension skills.
- “I have been surreptitiously scrutinizing faces wherever I go. Several things have struck me while undertaking this field research on our species. The first is quite how difficult it is to describe faces … We might say that a mouth is generous, or eyes deep-set, or cheeks acne-scarred, but when set beside the living, breathing, infinitely subtle interplay of inner thought, outward reaction and the nexus of superimposed cultural conventions, it tells us next to nothing about what a person really looks like.”
- In Germany, business is booming. The secret: pessimism. “German executives are almost always less confident in the future than they are in the present.”
- Discovered in an archive of the LAPD: more than a million old crime-scene photographs, some of them more than a century old.