May 11, 2016 | by Matthew G. Kirschenbaum
Picturing the literary history of word processing.
When did individual writers begin to use word processors? As I began work on a literary history of word processing, I found it difficult to establish a timeline. Sometimes writers kept a sales record—a word processor or computer would have represented a significant investment, especially back in the day. Other times, as with Stanley Elkin or Isaac Asimov, the arrival of the computer was of such seismic importance as to justify its own literary retellings. But most of the time there were no real records documenting exactly when a writer had gotten his or her first computer, and so I had to rely on anecdote, detective work, and circumstantial evidence.
August 12, 2015 | by M. G. Zimeta
Google, Alphabet, and Machiavelli.
Yesterday, Google submitted an SEC filing announcing a major restructure. Larry Page, the cofounder and CEO of Google Inc, will become the CEO of a new corporation called Alphabet Inc; his fellow cofounder Sergey Brin will become Alphabet's President. As an Alphabet subsidiary, Google will be responsible for around ninety percent of the umbrella company’s revenues; business analysts have praised the restructure for introducing financial transparency. In practical terms, Google will continue to operate as Google—business as usual for ordinary users.
So far, the announcement of Google’s reinvention has prompted many ordinary users to compare it to the One Ring, Skynet, the Weyland-Yutani Corporation, and Canary M. Burns. I have not yet seen it hailed as the return of King Arthur, nor heralded as the advent of the Kingdom of Heaven, Shangri-La, Vaikuntha, or the New Galactic Republic. Perhaps it’s just me; perhaps I should read more, or develop a wider circle of friends. Or perhaps the eschatologists on social media tend to be the paranoid, pessimistic ones, and all around the world there are non-Google employees tearfully, joyfully celebrating the coming decades of peace and prosperity for all.
June 23, 2014 | by Dan Piepenbring
The history of the typewriter is, like the history of the personal computer after it, rife with collaboration, ingenuity, betrayal, setbacks, lucre, acrimony, misguided experimentation, and bickering white men. There are rough analogs for Bill Gates and for Steves Jobs and Wozniak (though there’s no one so delirious and insane as Steve Ballmer)—and one such analog is Christopher Latham Sholes, a Milwaukee printer whose first “type-writer” was patented 146 years ago today.
Sholes is widely credited with having invented the first QWERTY keyboard. It helped to prevent jams and increase typing speeds by putting frequently combined letters farther apart—but that took years of trial and error; the initial iteration of his typewriter was far more rudimentary in design. It looks like a miniature piano crossed with a clock and/or a phonograph and/or a kitchen table—and Sholes did, in fact, build the prototype out of his kitchen table. As you can imagine, it didn’t boast what today’s designers would call “intuitive UX.” Its keys, borrowing from innovations in telegraphy, were arranged like so:
3 5 7 9 N O P Q R S T U V W X Y Z
2 4 6 8 . A B C D E F G H I J K L M
Notice the absence of 0 and 1; Sholes and his cohort assumed that people would make do with I and O. They also couldn’t be bothered with lowercase letters—the first Sholes model was in a condition of eternal caps lock, doomed to permanent shouting. And yet in another sense Sholes was full of intuition and prescience: purportedly, the first letters he typed on the machine were “WWW.”
June 17, 2014 | by Brian Christian
Living with the Turing test.
As of last week, the Turing test has—allegedly—been passed. In 1950, Alan Turing famously predicted that in the early twenty-first century, computer programs capable of sending and receiving text messages would be able to fool human judges into mistaking them for humans 30 percent of the time, and that we would come to “speak of machines thinking without expecting to be contradicted.” Two weekends ago, at a Turing test competition held at the Royal Society in London, a piece of so-called “chatbot” software called “Eugene Goostman” crossed that mark, fooling ten of the thirty human judges who spoke with it.
The official press release described this as a “milestone in computing history”—a “historic event.” Was it? We should not, of course, take a press release’s word for it. (Said release describes the winning chatbot program as a “supercomputer,” a head-scratching conflation of hardware with software.)
The release says this is the first time a computer program has scored above 30 percent in an “unrestricted” Turing test. That may well be true. We don’t have access to the transcripts of these conversations—the organizers declined my request—but we know that the persona adopted by the winning chatbot (“Eugene Goostman”) was that of a thirteen-year-old non-native speaker of English. The Turing tests of the 1990s were restricted by topic, with the judges’ questions limited to a single domain. Here, the place of those constraints has been taken by restricted fluency, both linguistic and cultural. From correspondence with the contest organizers, I learned that the human judges themselves were chosen to include children and nonnative speakers. So we might fairly argue about what truly counts as a Turing test. These questions are deeper than they seem.
March 26, 2014 | by Dan Piepenbring
Living in fear of 1999’s Melissa virus.
My father died when I was six, and though I didn’t, couldn’t, step into his shoes, I did inherit his role as my family’s IT guy. When I was around eight, I installed Windows 95 on our home computer with no adult assistance. This was a source of enormous pride and stress. I had dreams involving catastrophic software failures, corrupt data, red error boxes, low-res neon-green background screens. I wanted to find something arcane in Windows 95, something mystical. I looked through every file it installed on our computer.
A few years later, at my prodding, we bought an America Online subscription and lurched into the merge lane of the Information Superhighway, where my stress compounded. If I had any doubt that the Internet was a wild, dangerous place, it was dispelled by the bray and hiss of the 56k modem, which seemed to tear into my phone line—implying the abrasion and contusion necessary to connect.
After that, though, came the chipper baritone of the AOL spokesman: “Welcome!” Within the cheery confines of AOL’s walled garden—buddy lists, channels, chat rooms—I felt, as the company wanted me to, safe. I had a screen name. I had a password.