December 21, 2010 | by Dawn Chan
Co-authored by Gabriel Greenberg.
Think of the last time you looked for an apartment: Most likely, a good number of the listings that you encountered came with floor plans. And by looking at these diagrams, you probably had no trouble finding out all sorts of things about the living spaces being advertised: the rough shape of each room; the location of all windows and doors. But how exactly did you reach these conclusions? And how do you immediately understand the route you’re being shown, when a helpful stranger in a foreign country traces a path with their index finger over a subway map? Or how do you look at a courtroom sketch and know that the defendant was wearing suspenders?
Whether they’re architectural renderings, Venn diagrams, or even the inkless images created by gestures, pictures can all be thought of as 2-D encodings of our 3-D world. We decipher these images so easily that we never even suspect we’re cracking a code of sorts; we recognize that a certain brushstroke represents an eyebrow, or that certain lines forming a Y denote the corner of a cube. But what if someone could write out a codebook (so to speak) precise enough that even a machine, by consulting it, could both produce and interpret drawings? Over the past few decades, philosophers, psychologists, and computer scientists have taken on this task, and found it less straightforward than one might think.
Even the simplest sorts of pictures, line drawings, raise complicated questions. One could say that the marks in a line drawing simply demarcate boundaries between patches of color. As appealing as that rule of thumb might sound, it yields this largely unrecognizable portrait of George Bush. Other algorithms, the earliest ones pioneered in the 1970s, construed line segments and intersections as edges and corners. But lines represent many things: Some of the lines in this drawing of an IKEA lamp indicate edges; others show object boundaries. Groupings of lines can represent shadow. Most confounding of all, certain kinds of lines—termed “suggestive contours” by a team of Rutgers and Princeton researchers—show edges that would appear if the viewer shifted his or her vantage point.

Look at the line segment at the bottom edge of the woman’s lip in Julian Opie’s Vera, Dancer, 2007. Opie’s depiction of Vera is undeniably stylized; still, we wouldn’t expect to find a black stripe tattooed under her lower lip in real life. Nor is it likely—unless she had an incredibly droopy pout—that the line depicts an actual edge. Instead, we interpret the mark as a suggestive contour: It’s the line that indicates precisely where an edge would appear, as carved out by the protruding curve of Vera’s lower lip, if a viewer climbed a stepladder and looked down at her face. A similar suggestive contour appears in Picasso’s rendering of Igor Stravinsky. Though Picasso certainly wasn’t much of a realist, it’s still unlikely that the line extending diagonally down from his friend’s ear is meant to imply that Stravinsky’s cheek buckled inward dramatically enough to create a kangaroo’s pocket of sorts. Rather, the line traces the edge delineated by his cheekbone that would become apparent if he swiveled his head slightly in the other direction.
Though artists use suggestive contours in line drawings all the time, they usually do so without realizing the complexity of what they’re doing. In fact, the Rutgers and Princeton researchers have gone so far as to identify the precise algorithms that machines can use to create drawings with suggestive contours. (Those of you wanting to program your own computers, or to see how their machines fared, can find the documentation and images here.)
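For the curious, the core of the researchers’ idea can be stated mathematically: a suggestive contour runs along the points where the surface’s radial curvature—its curvature in the direction of the view vector projected onto the tangent plane—crosses zero (the full published test also requires that curvature to be increasing toward the viewer). The published method operates on 3-D meshes; the sketch below is my own simplified construction, not the researchers’ code, working on a heightfield z = f(x, y) with an assumed camera position. All function names here are my own.

```python
import numpy as np

def radial_curvature(f, grid, camera):
    """Sign-bearing numerator of the radial curvature of z = f(x, y).

    Suggestive contours lie along the zero crossings of this field
    (the full algorithm adds a derivative-sign condition on top).
    """
    x, y = grid
    h = x[0, 1] - x[0, 0]          # grid spacing (assumed uniform)
    z = f(x, y)
    # First derivatives give the tangent plane; second derivatives, curvature.
    fy, fx = np.gradient(z, h)
    fyy, fxy = np.gradient(fy, h)
    _, fxx = np.gradient(fx, h)
    # View vector from each surface point toward the camera.
    vx, vy, vz = camera[0] - x, camera[1] - y, camera[2] - z
    # Project the view vector onto the tangent plane: solve I @ (du, dv) = (v.r_u, v.r_v)
    # with tangents r_u = (1, 0, fx), r_v = (0, 1, fy) and first fundamental form I.
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2
    bu = vx + vz * fx
    bv = vy + vz * fy
    det = E * G - F**2             # always positive, so safe to divide
    du = (G * bu - F * bv) / det
    dv = (E * bv - F * bu) / det
    # Second-fundamental-form numerator II(w, w); its sign is the sign of
    # the normal curvature in the projected view direction.
    return fxx * du**2 + 2 * fxy * du * dv + fyy * dv**2

x, y = np.meshgrid(np.linspace(-2, 2, 101), np.linspace(-2, 2, 101))
camera = (0.5, 0.3, 10.0)          # arbitrary viewpoint above the surface

# A Gaussian bump is convex at its cap but has inflection regions on its
# flanks, so the radial curvature changes sign: suggestive contours exist
# along the zero crossings.
bump = radial_curvature(lambda x, y: np.exp(-(x**2 + y**2)), (x, y), camera)
print(bump.min() < 0 < bump.max())  # True: zero crossings present

# On a purely convex surface (a sphere cap) the radial curvature never
# crosses zero, so there are no suggestive contours to draw.
dome = radial_curvature(lambda x, y: np.sqrt(9.0 - x**2 - y**2), (x, y), camera)
print(dome.max() < 0)               # True: one sign everywhere, no contours
```

The sphere-cap result reflects a property of suggestive contours worth noticing: they only appear where a surface has inflections, which is why a drawing of a perfectly convex form needs no interior lines at all.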
But how universal are the rules we use to understand pictures? Does the same codebook govern the Lascaux cave paintings and the vignettes in Delta’s emergency evacuation pamphlets? These questions are still open to debate. I recently came across a picture in the Metropolitan Museum of Art’s current Yuan Dynasty exhibition that illustrated a few possible answers. The work, by Wang Zhenpeng, follows some of the same rules as do contemporary Western pictures: Lines still indicate edges, visual boundaries, strands of hair. And yet there are distinct differences. The edges of a table, for instance, are represented by parallel lines that never converge: a parallel projection, rather than the vanishing-point perspective a Western viewer might expect.
“The World of Khubilai Khan: Chinese Art in the Yuan Dynasty” is up at the Metropolitan Museum of Art until January 2, 2011. Dawn Chan is the assistant editor at Artforum.com. She briefly studied artificial intelligence and vision at the University of Zurich on a Fulbright grant. Gabriel Greenberg is writing his dissertation on pictorial semantics at the Department of Philosophy at Rutgers University.