In an excerpt from his book The New Analog, Damon Krukowski looks at the aesthetics of noise in analog music—and what we’ve lost in the transition to digital recordings.
My favorite records sound the worst, because I’ve played them the most. Each time a needle runs around an LP, it digs a little deeper into the grooves and leaves its trace in the form of surface noise. The information on an LP degrades as it is played—as if your eyes blurred this text, just a bit, each time they ran across it.
Analog sound reproduction is tactile. It is, in part, a function of friction: the needle bounces in the groove, the tape drags across a magnetic head. Friction dissipates energy in the form of sound. Meaning: you hear these media being played. Surface noise and tape hiss are not flaws in analog media but artifacts of their use. Even the best engineering, the finest equipment, the “ideal” listening conditions cannot eliminate them. They are the sound of time, measured by the rotation of a record or reel of tape—not unlike the sounds made by the gears of an analog clock.
In this sense, analog sound media resemble our own bodies. As John Cage observed, we bring noise with us wherever we go:
For certain engineering purposes, it is desirable to have as silent a situation as possible. Such a room is called an anechoic chamber, its six walls made of special material, a room without echoes. I entered one at Harvard University several years ago and heard two sounds, one high and one low. When I described them to the engineer in charge, he informed me that the high one was my nervous system in operation, the low one my blood in circulation. Until I die there will be sounds.
Silence = Death, the ACT UP slogan painfully reminded us at the height of the AIDS epidemic in 1987. Why seek it out as a part of our musical experience?
The switch to digital media for music seems obviously disruptive now, but in the mid-eighties, it was so anodyne my musician friends and I hardly took notice. CDs arrived on the consumer market like any other hi-fi marketing scheme, with promises of cleaner sound, greater durability, and a smaller footprint in your living room—all at a correspondingly deluxe price. For those of us happily wallowing in LPs, it sounded like a pitch designed to part bored businessmen from their money. Let them have their new toy, my friends and I thought. Whenever one of our favorite used-record stores received a flood of LPs from yet another up-to-date person “converting” their collection, we congratulated one another on our good sense and helped ourselves to more mint-minus albums at rapidly falling prices.
Rumors and conspiracy theories about the CD abounded. “There’s no way to permanently bind metal to plastic,” a friend who majored in the sciences told me authoritatively. “They’ll separate like Oreo cookies.” “You know they only cost pennies to make,” said a record store clerk we considered a paranoid hippie because he was a few years older than us. “And if you look directly at the red light in the player, you’ll go blind.” Those who had actually heard a CD play—which wasn’t many in my circle, due to the high bar of buying both a new machine and the expensive individual discs—knowingly said they sounded “cold” or “harsh.” Hi-fi salesmen explained that the dynamic range available to CDs was greater than our cheap stereos could accommodate; you really had to hear them on an entirely upgraded system to appreciate the difference.
So when my bandmate at the time announced he had bought a CD player in order to hear one of our favorite albums—the Feelies’ Crazy Rhythms—“without the scratches,” I received the news with more than a bit of disdain. And then I eagerly asked to hear it, too.
It was true. There were no scratches.
The sensation of first hearing a CD of a recording I had memorized—together with the surface noises on my copy of the LP, and in this case also the (different) surface noises on my bandmate’s copy—was something like driving a late-model car designed for a smooth ride rather than my rusting Fiat 128, which had a hole in the floor and struggled to reach highway speed. Just as in a big new American car, I could no longer feel the surface.
Despite my bandmate’s capitulation and the evident truth underlying at least some of the marketing claims for the format, together we continued to make fun of its high-tech, sci-fi image: small silver discs manufactured in “clean rooms” and played with light. We wrote snide liner notes for the first CD to appear with our own music, a European-only release on a small label from Benelux appropriately named, it seemed, Schemer:
On Saturn they’re only just picking up the signal. And the bartender says: these guys have a sound.
The sound is today. A few light years away, but it’s still now. Flying out of the mystery and back in your life. By laser beam.
Although we had agonized over every aspect of our first LP, this first CD we treated flippantly—improvising the silly liner notes together on a typewriter, blithely adding a “bonus track” that had previously been hotly debated and dropped from the LP. We were like those Hollywood stars who protected their image fiercely in the American media but consented to embarrassing commercial endorsements in Japan. A CD felt so remote to our lives in music it might as well have been intended not just for overseas but for another planet, as we teased in the liner notes.
The joke was on us, obviously. Precisely what seemed most absurd to us at first about CDs—that nothing need touch them as they played—is what made them truly different from LPs and what ultimately ended the musical era we had grown up in. “Digital” was Orwellian in its misdirection: these were objects nobody handled. By contrast, we put our fingers all over LPs. A friend who owns a record store tells me some collectors even lick them.
This intangibility of digital music has a precedent from the earliest days of sound recording.
Before the Victrola and the radio, music in the home meant instruments in the parlor. (Parlor guitar remains the term for a small-bodied acoustic.) The piano was—still is—the grandest, most expensive, and least portable parlor instrument of all. In the United States, the post–Civil War economic boom was marked by a flood of pianos. To this day, many occupy a central place in older American homes, whether or not anyone plays them or even wants them; the website PianoAdoption.com maintains a list of those available for free to anyone willing to move one. (It was developed by a very clever piano mover from Nashua, New Hampshire.)
All those pianos needed sheet music. As early as the 1830s, Boston composer and churchman Lowell Mason (his settings of hymns are still familiar to many Americans) advocated that music be taught in the newly developed public-school system. By the time Lowell’s son Henry started manufacturing Mason & Hamlin pianos in the 1880s, “America had become the most musically literate nation on earth,” according to the Center for Popular Music. In the Gilded Age, music publishers were as formidable a presence for U.S. intellectual property as piano manufacturers were for the industrial economy.
Then in 1898, a disruptive digital invention pitted one against the other: the pianola or, as it came to be known generically, the player piano.
The player piano dispensed with the need for sheet music in favor of a piano roll directing air-powered levers. The piano roll is a pre-electronic digital technology—like the Jacquard loom, it uses punches in paper for “on” and “off” binary instructions. The first device to make use of this technology, the Aeolian Company’s pianola, proved so popular that by the 1920s half the pianos sold in the country had incorporated it. Even Steinway was making player pianos.
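The piano roll’s binary logic is simple enough to model directly: each position across the roll’s width corresponds to one note, and a hole punched at a given moment means “sound that note now.” A minimal sketch, in which the roll dimensions and pitch numbering are illustrative rather than taken from any actual Aeolian roll:

```python
# A piano roll as a binary grid: rows are moments in time (the roll
# unspooling past the tracker bar), columns are pitches.
# True = a punched hole = a purely mechanical "on" instruction,
# just like a card in a Jacquard loom.

def make_roll(steps: int, pitches: int) -> list[list[bool]]:
    """An unpunched roll: no holes, so silence."""
    return [[False] * pitches for _ in range(steps)]

def punch(roll: list[list[bool]], step: int, pitch: int) -> None:
    """Punch a hole at a given moment for a given note."""
    roll[step][pitch] = True

def notes_at(roll: list[list[bool]], step: int) -> list[int]:
    """Which pneumatic levers fire as this row of the roll passes."""
    return [p for p, hole in enumerate(roll[step]) if hole]

# Punch a C-major triad (pitches 0, 4, 7 in this toy numbering) at step 0.
roll = make_roll(steps=4, pitches=12)
for pitch in (0, 4, 7):
    punch(roll, step=0, pitch=pitch)
```

The point of the sketch is the one the Court would later seize on: nothing in this grid is legible as music to a human reader, yet it is a complete set of instructions for a machine.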
While piano manufacturers might benefit from this new technology, sheet-music publishers could not. The technology of the piano roll belonged exclusively to its makers. And by 1902, only four years after the launch of the pianola, they were selling more than a million of them.
So the sheet-music publishers did what any software company would do when a hardware manufacturer threatens to make its product obsolete: they sued. The publishers argued that the digital piano roll violated their copyrights by reproducing the music they printed, even if it didn’t make use of their product to do so. Their case went all the way to the U.S. Supreme Court. And they lost.
In 1908, the Supreme Court ruled in favor of a Chicago manufacturer of player pianos and piano rolls and against a Boston music publisher who had sued over use of the songs “Little Cotton Dolly” and “Kentucky Babe.” The Court reasoned, in White-Smith Music Pub. Co. v. Apollo Co., that music is not a “tangible thing”: “In no sense can musical sounds which reach us through the sense of hearing be said to be copies,” wrote Justice William R. Day for the majority, concluding that they were therefore not subject to copyright.
A musical composition is an intellectual creation which first exists in the mind of the composer; he may play it for the first time upon an instrument. It is not susceptible of being copied until it has been put in a form which others can see and read.
Piano rolls can be seen, to be sure, and it might even be said that they can be read—but not as music, or at least not by a person. Therefore the Apollo Co. could continue to manufacture piano rolls of “Little Cotton Dolly” and “Kentucky Babe” with impunity, said the Court, since “these perforated rolls are parts of a machine.”
The Court went on to note that this same reasoning would apply to another recent invention: the wax-cylinder recording. Here Justice Day approvingly cites language already used by the Court of Appeals:
It is not pretended that the marks upon the wax cylinders can be made out by the eye or that they can be utilized in any other way than as parts of the mechanism of the phonograph. Conveying no meaning, then, to the eye of even an expert musician, and wholly incapable of use save in and as a part of a machine specially adapted to make them give up the records which they contain, these prepared wax cylinders can neither substitute the copyrighted sheets of music nor serve any purpose which is within their scope.
The decision left music publishers empty-handed, as it were. Sound wasn’t a tangible thing, and their copyrights were tactile only. The manufacturers of the new pianolas and Victrolas owned the patents to all the mechanical parts of their devices, and if those parts emanated music, that was their business.
This didn’t go over well in Congress. The next year, it rewrote copyright law to supersede the Supreme Court’s ruling. Looking to rescue music publishers from the Napster-like chaos of royalty-free piano rolls, yet allow the player piano industry to continue manufacturing without being hamstrung by intellectual-property owners, the Copyright Act of 1909 established a system of compulsory mechanical licenses. Mechanical reproduction of music (i.e., piano rolls, gramophone records) could continue without permission of the music publishers, so long as those publishers were paid a statutory royalty for each “mechanical reproduction” derived from use of their music. (Songwriting royalties are still calculated this way and are known as “mechanical royalties.”)
However, Congress declined in 1909 to redefine what constituted music, allowing it to remain in the eyes of the law an intangible thing. Surprising as it may seem in retrospect, “musical sounds which reach us through the sense of hearing”—recordings—remained outside U.S. federal copyright protection until February 15, 1972. Which explains the twentieth-century music industry’s focus on the “label”—a tangible and therefore copyrightable object that took on such outsize legal importance it became a metonymy for the record company itself. Since sound could not carry copyright, the © ownership symbol on record labels and sleeves applied only to what was printed on them: logos, artwork, liner notes.
In 1972, U.S. law was amended to allow for copyright of sound recordings, and because © had never applied to them, a separate ownership symbol was established: Ⓟ, for phonogram.
Let’s return for a moment to the anechoic chamber with John Cage. Cage entered that room to experience silence—what audio engineers now call digital black, the absence of both signal and noise. But his own living body, he discovered, emitted sounds in time: the sounds of his nervous system in operation, and his blood in circulation. “One need not fear for the future of music,” concluded Cage—because what is music but sounds in time? Silence is beyond our corporeal experience, since living bodies occupy not only space (the anechoic chamber) but time (John Cage in the anechoic chamber). We can imagine and create the conditions for noncorporeal sound, but we cannot experience it because we hear in time. Our ears are as sticky as our fingers. And what they stick to is time.
This is what makes the Supreme Court’s decision of 1908 intuitively wrong, regardless of one’s legal judgment. Sound in the abstract may not be a “tangible thing,” as the Court asserted, but sounds in time are. The invention of audio recording made this clear to people immediately. “Canned music”—John Philip Sousa’s term for recordings when they first appeared—is music stored for the future. It is bottled time.
As Jonathan Sterne details in his history of early recording, the invention of canned music was not unrelated to a contemporary fascination with embalming. The Victorians were death obsessed and saw sound recording as another means of preservation: “Death and the invocations of ‘voices of the dead’ were everywhere in writings about sound recording in the late nineteenth and early twentieth centuries,” Sterne writes. He points out that even Nipper, famous mascot and logo for HMV (“His Master’s Voice”), is based on a painting of a dog listening to a gramophone that many assumed was placed on top of a coffin.
Nipper responds to the recording of his late master’s voice because the sounds it reproduces are tangible in time. It’s just that time has been displaced.
The Splendid Splice
The invention of magnetic audiotape in the 1940s made this displacement of time literally more plastic. While wax cylinders and gramophone records could preserve a solid slab of time, tape could be cut into pieces of time and rearranged. Glenn Gould called this “the splendid splice,” because it allowed him to perfect a recorded performance by picking and choosing among parts of different takes. A razor blade and some sticky tape were all it took to join one moment in time to another.
Experimental composers quickly pushed this plasticity to an extreme in the pursuit of abstraction. “Musique concrète,” as formulated by French composer and theorist Pierre Schaeffer, used the splice to sever the “sound object” from its source (an instrument or a field recording location), which might then be rendered unrecognizable by abbreviation or other manipulation. John Cage used the splice to reorder sounds according to chance operations—although the immense labor required to turn the 192-page score of his first tape piece, Williams Mix (1952), into the resulting four-and-a-half minutes of music dissuaded him from pursuing the technique much further. Each page of Cage’s score, which specifies multiple splices in two “systems” of eight tracks of tape each, sums to just one and one-third seconds of playback.
One might assume that the sheer density of splices in a work like Williams Mix would lead to nothing but a blur of undifferentiated noise. Yet even in such an extreme work, where more than five hundred source sounds have been cut and shuffled in fine detail, there is an unmistakable recognition of sounds in time. Our ears catch extraordinarily small moments as they rush by, whether in recorded music or in the world.
Audio engineers have tested the limits of this perception by looking for the shortest duration of sound we can recognize as a note. The answer is a hundred milliseconds. In Microsound, Curtis Roads reports that even below that threshold our ears can still perceive “discrete events … down to durations as short as 1 ms.” Those are heard as clicks—but clicks with “amplitude, timbre, and spatial position,” which can therefore be distinguished from one another.
A millisecond, in case you aren’t familiar with that end of the timescale, is one-thousandth of a second. Imagine Cage’s score for Williams Mix stretching to 192,000 pages for the same four-and-a-half minutes of sound. No analog work could begin to approach that level of detail.
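The arithmetic behind that comparison is easy to check. Each page of the score covers one and one-third seconds of playback, i.e., roughly 1,333 milliseconds; to notate at the one-millisecond grain our ears can still register, the score would need about a thousand times as many pages:

```python
# Sanity-check the scaling from Cage's score to millisecond resolution.
pages = 192
seconds_per_page = 4 / 3                  # "one and one-third seconds of playback"
total_seconds = pages * seconds_per_page  # 256 s of playback for the whole score

ms_per_page = seconds_per_page * 1000     # ~1,333 ms covered by each page
# To give each page millisecond-scale coverage instead, the page count
# must grow by a factor of 1,000:
pages_at_ms_grain = pages * 1000          # 192,000 pages
```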
Or we might say: no analog work can exceed our powers of perception for time.
The Foothills of the Headlands
In popular music, tape manipulation pushed toward a different set of conclusions, more super- or surreal than abstract. Even before the advent of multitrack machines, artists and audio engineers realized they could “bounce” between two tape decks, overdubbing additional sounds on top of what had previously been recorded. Four-track tape made this process flexible and efficient enough for the Beatles to record their psychedelic masterpieces Revolver and Sgt. Pepper’s Lonely Hearts Club Band. After filling all available space, Abbey Road engineers would make a “reduction mix” to a single track (either on the same tape or to one on a second machine), and continue adding on top.
Overdubs make different use of the time embodied on magnetic tape than a splice. While a splice joins one discrete moment to another, overdubs layer multiple moments atop one another to make a super-real environment—one in which string orchestras and backward guitars move together through the same space of time, on a single piece of tape unspooling at fifteen inches per second.
Listeners to these imagined soundscapes seized on their hyperreality rather than their impossibility. “Lucy in the Sky with Diamonds” is an archetypal song of the era, perhaps not only for the implicit drug reference (which singer/songwriter John Lennon always denied) but because it describes what it’s like to hear a multitrack recording. “Picture yourself in a boat on a river,” it begins, as you might have done while listening to Debussy. But it then adds an unforeseen layer of color: “with tangerine trees and marmalade skies.” As you adjust to this synesthesia and begin to focus on “a girl with kaleidoscope eyes,” Lennon’s voice suddenly moves much farther away, singing:
Cellophane flowers of yellow and green, Towering over your head.
Evidently, it may be you who has moved by growing very small; Lennon’s voice might well have remained where it was. But where does that put the girl we were just getting to know?
Look for the girl with the sun in her eyes, And she’s gone.
Boom—boom—boom. Not only the girl but everything in the soundscape disappears, cleared away for a chorus that emerges in an entirely different space again. A space you enter, too, as inevitably as one moment follows another.
John Lennon pulls us through the shifting perspectives of “Lucy in the Sky with Diamonds” as if guiding us through the multiple layers of time and space the Beatles added to their multitrack tapes. Like John Cage’s 192-page score for four and a half minutes of music, each of the brief pop songs on Sgt. Pepper’s represents hundreds of hours of labor. But rather than compressing that time by cutting it up as Cage had, the Beatles layered over and over the same length of tape, until it was so thick with time that listening to it reminded people of an acid trip.
Just as there is a physical limit to the number of splices that might occupy a given length of tape—a limit John Cage seemed to approach on his very first pass, in Williams Mix—there is a limit to the number of overdubs possible in an analog medium. Tape itself is not silent as it moves through a recording machine; no more than we are in an anechoic chamber. Which means each overdub adds not only more signal but more noise, in the form of tape hiss. And layers of hiss don’t get more trippy, they just get louder.
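The way hiss compounds across bounces can be sketched with a little signal math. If each pass through the machine adds an independent layer of hiss of roughly equal power, the noise powers simply add, so the signal-to-noise ratio falls by about 10·log10(n) dB after n generations. A toy model, with illustrative numbers rather than measurements of any real deck:

```python
import math

def snr_after_overdubs(signal_power: float, hiss_power: float,
                       generations: int) -> float:
    """SNR in dB after `generations` passes, assuming each pass adds an
    independent hiss layer of equal power (so noise powers simply add)."""
    noise_power = hiss_power * generations
    return 10 * math.log10(signal_power / noise_power)

# One pass vs. eight bounced generations: the noise floor rises by
# 10 * log10(8), roughly 9 dB, while the signal stays put.
one_pass = snr_after_overdubs(signal_power=1.0, hiss_power=0.001, generations=1)
eight_passes = snr_after_overdubs(signal_power=1.0, hiss_power=0.001, generations=8)
```

Which is the economics of the paragraph above in miniature: quieter machines (a smaller `hiss_power`) buy you more generations before the floor becomes audible, and quieter machines cost money.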
One reason the great works of multitrack analog recording were made by artists with tremendous resources—the Beatles, the Beach Boys—is that it took the finest analog equipment to keep tape hiss at a minimum for that many passes through the machines. “Lo-fi” artists have made equally dense and psychedelic works; the Elephant 6 collective of the 1990s were still in high school when they started making theirs, on four-track cassette. But in analog recording, overdubs and tape hiss necessarily go hand in hand; only capital (or Capitol) can keep the latter manageable as the former pile up.
Even so, Sgt. Pepper’s and Pet Sounds are works of noise as well as signal. Those noises are not limited to tape hiss—they include all the many aural artifacts of the various times and spaces layered onto these short lengths of tape. A well-known example on Sgt. Pepper’s is the studio air-conditioning audible at the end of the album’s dramatic final chord. And obsessives have used Internet crowdsourcing to catalog the noises married to the signals on Beach Boys recordings. Here is the list just for the song “Here Today,” from Pet Sounds:
1:15 Mike starts singing the chorus too soon. “She made me feel” and then someone else says something to make Mike stop
1:27 Something metallic is dropped at the point: “She made my heart feel sad. Sh(drop)e made my days go wrong . . .” of the second chorus.
1:46 Brian says “Top” as soon as the second chorus ends to rewind the tapes and start the take over
1:52 Someone says something supposedly about cameras
1:56 Someone else replies to the person at 1:52
2:03 Brian says “Top please,” probably because he realizes the tape is still rolling after all these noises
These inadvertent noises are inseparable from the intended signals on the tape. Had Brian Wilson wanted to get rid of them, his only option would have been to rerecord the entire track on which they occurred. Had that track already been bounced along with others in a reduction mix, it would mean rerecording all those tracks, too. And had the unintended sounds gone undetected until the final mix, as often happens, it would mean throwing away the complete recording and starting all over again.
Analog recording is an additive process. Whatever happened in the studio as each layer was added, happens again on the tape as it unspools. For all the Abbey Road engineers’ ingenuity—which was truly remarkable, they seem to have utilized or invented most every analog studio recording technique—they could not remove the air-conditioning at the end of “A Day in the Life” without removing the dying piano chord as well.
At the other end of that additive process is the close listener. If you listen closely enough to an analog recording, you hear all its sounds preserved together: the signal and the noise.
When the catalogers of unintended noises listen to Beach Boys records, they listen between the notes. We might call it thick listening, alert to the depth of the many layers in multitrack recording. They listen through the surface noise of the LP, through the hiss of the master tape, through the layers of the music itself all the way back to the room in which it was played, where two horn players are standing and chatting.
In other words, they are listening to more than the signal of the music—they are listening to the signal framed and enriched by noise.
Do digital formats reward this kind of attention? Our developing habits would seem to indicate otherwise.
In iTunes, I keep a folder of music I have access to only in digital format—mostly bootlegs found on the Internet and promo copies sent to me as downloads. I don’t think of it as a large part of my music library, since I own more records and CDs than are reasonable for apartment living. Yet iTunes calculates that it would take me five days, fifteen hours, fifty-one minutes, and five seconds to listen to it all.
Will I ever?
Frictionless digital music—those sounds we cannot touch—is distributed and stored without friction as well. Apple’s iPod classic was touted for its ability to hold up to 40,000 songs. For scale, the Beatles wrote a total of 237 songs.
It is normal, with today’s digital media and devices, to have access to far more music than one can ever hear. The time it takes to listen to music is now in shorter supply than recordings. Digital music has created a time deficit.
Which means that even in my relatively small folder of digital bootlegs and promos, many will likely go unheard. More to the point: most will never be listened to closely. Close listening is a function of time. It starts at the beginning, takes in each note and the spaces between, and stops at the end.
Does that describe our digital listening habits? I for one find myself clicking through a good deal of digital music. If it’s online or on my computer, I skip around—I preview tracks, hearing a bit here, a bit there. My digital listening is to signal alone. I hear the notes but not the space between, or the depth below. It’s listening to the surface without the noise.
This essay is adapted from The New Analog: Listening and Reconnecting in a Digital World, available now from New Press. Reprinted with permission.
Damon Krukowski was in the indie rock band Galaxie 500 and is currently one half of the folk-rock duo Damon & Naomi. He has written for Pitchfork, Artforum, Bookforum, Frieze, The Wire, and on his blog International Sad Hits. He has published two books of prose poetry, is a copublisher of the literary press Exact Change, and is the author of The New Analog. He has taught writing and music at Harvard University and lives in Cambridge, Massachusetts.