TREVOR PAGLEN: A Study of Invisible Images
METRO PICTURES | SEPTEMBER 8 – OCTOBER 21, 2017
“The invisible world of images isn’t simply an alternative taxonomy of visuality. It is an active, cunning exercise of power, one ideally suited to molecular police and market operations.”
Throughout his career, Trevor Paglen has made artwork out of the “invisible.” An expert in clandestine military installations, Paglen has trained his eye on places and programs that, officially, do not exist—from military black sites to NSA headquarters, drone surveillance to the CIA’s abduction outfits. With an MFA and a PhD in geography, he pioneered and practices “experimental geography”—an interdisciplinary field that uses art-making to critically map and navigate geographies of power. In his recent exhibition, A Study of Invisible Images, Paglen surveys the territory of “machine vision” to capture images made by machines for other machines. The work exposes a system of looking that, though startlingly pervasive, is largely imperceptible to human eyes.
This exhibition doesn’t provide wall text; instead, it leans heavily on Paglen’s extensive checklist notes and essays, as well as the multiple walkthroughs and lectures that took place throughout September. The supplementary guidance is necessary because, at first blush, the gleaming prints are laconic, cold and easily dismissed. This may be why the show has received so little critical review, aside from Jerry Saltz’s flustered denunciation of Paglen’s “Conceptual Zombie Formalism” based on “smarty-pants jargon.” Indeed, the opening room’s slick prints of dated government portraits and generic sunsets will likely bore the unaided or hurried guest—I saw more than one listless face—but banality and repetition, alongside patience and curiosity, seem to be the first steps to comprehending the machinic gaze of artificial intelligence algorithms.
“If we want to understand the invisible world of machine-machine visual culture,” Paglen writes in an essay for The New Inquiry, “we need to unlearn how to see like humans.” As a recent artist-in-residence at Stanford University, he has worked tirelessly towards this end. In collaboration with computer scientists, Paglen custom-built software programs that render invisible images—images made mostly of datasets and algorithms—visible to the human eye. The resultant prints and video on view illustrate what an algorithm is “seeing” as it is trained, i.e., what it “perceives” when mapping a landscape, a face or even an idea. Most of the images in the exhibition, to be clear, are really just visualizations of datasets.
As explained in his notes, Paglen introduces machine learning through the ABCs of object and facial recognition, where algorithms look for defining “keypoints” (areas of interest) within “training sets”: data pools of tens of thousands of images categorized under a particular theme. In Four Clouds (2017), a quadriptych of sanguine skyscape prints, Paglen’s software program traces keypoints as faint pareidolic splinters, radii and strokes—the footprints of an AI struggling to extract meaning from empty skies. What we see is an AI that appears to be having Cubist daydreams; what’s chilling is that this is the same object-recognition technology used to guide missiles and surveillance systems. In It Began as a Military Experiment (2017)—a set of 10 high-contrast, printed portraits of government employees—facial keypoints materialize as alphabetic marks mapping the corners of a mouth or the tip of a nose. Paglen curated this selection of faces from FERET (Face Recognition Technology), the original facial recognition database, made up of thousands of military employees photographed between 1993 and 1996. Such databases, made specifically for machine eyes, train AIs to recognize and cross-reference faces. Averaged across an individual’s features, the technology can create ghostly “face-prints,” or “eigenfaces,” that act like a thumbprint. This technology is the basis of all facial recognition programs used to identify you in the street and on Facebook, and even to curate preferred face types on Tinder.
In one portrait, Paglen translates the eigenface method, ordinarily just an invisible mathematical abstraction, into a portrait of Frantz Fanon—philosopher, psychiatrist, revolutionary, and author of Black Skin, White Masks (1952) and The Wretched of the Earth (1963). Like most eigenfaces, "Fanon" (Even the Dead Are Not Safe) (2017) mirrors the cold regard of the machinic gaze. Absent emotion or character, the portrait is technically “Fanon” yet lacks the defiant, analytical demeanor that defined him; he has become the colonized subject he once sought to free, the “zombie…more terrifying than colonists.” Nearby, in the large print Machine Readable Hito (2017), Paglen subjects hundreds of portraits of Hito Steyerl—the influential artist and author of The Wretched of the Screen (2012), an obvious homage to Fanon—to facial-analysis algorithms. Steyerl makes a different face in each portrait while an algorithm largely fails to predict her age, gender and emotion (she interrupts a few of these outputs by covering her face, a likely gesture to her own artwork). Between Fanon’s phantom and the hundred-plus Hitos, Paglen frames the crucial shift in neo-colonial ground. Moving from Earth to Screen, the new territory of domination is virtual or, as Steyerl argued in her perspective-shifting essay “In Free Fall” (2011), “groundless”—a state in which “traditional modes of seeing and feeling are shattered.”
In the next room’s video installation, Behold These Glorious Times! (2017), we get to see the rapid-fire training sets of images digested by an AI that must, like humans, learn to recognize and conjecture. The imagery—thousands of aligned faces, hand gestures, emotions and trivial objects—teaches the baby AI how to identify specific categories. Through a structure of buzzing black-and-white grids, Paglen also shows us what an AI actually sees during its schooling—how it dismantles images into hundreds of parts in order to decipher them. In the university lab, an artificial intelligence algorithm may be taught how to autonomously “see” a cat or banana using this method, but in the real world it learns to see and identify license plates, persons, bodies and buildings. Paglen warns that such machines feed the forms of power they are designed to exercise—the market, military and police.
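The training-set mechanics at work in Behold These Glorious Times! can be sketched as a toy classifier: thousands of labeled examples are boiled down to a compact summary per category, against which new images are compared. The nearest-centroid rule, category names and array sizes below are illustrative assumptions, far simpler than the neural networks actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "categories" of image vectors -- stand-ins for the
# labeled faces, gestures and trivial objects in a training set.
cats    = rng.normal(loc=+1.0, size=(500, 64))
bananas = rng.normal(loc=-1.0, size=(500, 64))

# "Training" here is just summarizing each category as one centroid.
centroids = np.stack([cats.mean(axis=0), bananas.mean(axis=0)])
labels = ["cat", "banana"]

def classify(image_vector):
    """Label a new vector by its nearest category centroid."""
    dists = np.linalg.norm(centroids - image_vector, axis=1)
    return labels[int(np.argmin(dists))]
```

Swap the whimsical categories for license plates, persons, bodies and buildings, and the same scheme describes the real-world systems Paglen warns about.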
Much of Paglen’s work in the final gallery applies Google’s developments in pareidolic computer vision, what researchers call “Deep Dreaming,” and marks a turn in the exhibition. Feeling more oneiric than algorithmic, the “Hallucination” series is the most captivating employment of artificial intelligence in Paglen’s research, as well as the most aesthetically and conceptually robust. These images are created by AIs trained on sets of “irrational things”—thematic corpora chosen by Paglen, such as “Interpretation of Dreams” or “Omens and Portents.” Paired with a drawing AI, the unorthodox curricula make the trained AIs “hallucinate”—they perceive, and then recognize, something that isn’t there. An AI taught to “see” only Freudian dream symbology, for instance, envisions a formalist architecture of gum flesh and bone (False Teeth, 2017); another, restricted to signs and wonders, renders the saccharine smog of Rainbow (2017). By this bizarre method, Paglen creates haunting, astonishingly sensitive prints.
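Deep Dreaming’s core move can be shown in miniature: instead of adjusting a network’s weights to fit an image, you adjust the image itself, by gradient ascent, to excite what the network has already learned to see. The sketch below substitutes a single linear “concept” for a deep network so that the gradient is exact; it is an analogue of the idea under that simplifying assumption, not Google’s code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one learned "concept" in a trained network: a single
# unit-norm linear filter. Real Deep Dreaming ascends through a deep
# net's layers; the loop below is the same idea in miniature.
concept = rng.normal(size=(16, 16))
concept /= np.linalg.norm(concept)

def response(image):
    """How strongly the "network" sees the concept in the image."""
    return float(np.sum(concept * image))

# Start from noise and repeatedly nudge the image to increase the
# response -- gradient ascent on the input, not on the weights. For a
# linear response, d(response)/d(image) is the concept pattern itself.
image = rng.normal(scale=0.1, size=(16, 16))
for _ in range(100):
    image += 0.1 * concept

# The "hallucination": the concept's pattern now dominates the image,
# even though nothing resembling it was there to begin with.
```

Train the concept on teeth, rainbows or oil fires instead of random noise, and the ascent paints what the prints in this gallery paint.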
I repeatedly caught myself reading artistic technique into a hallucination that wasn’t a painting and was not made by a human. These images are “synthetic,” internally conjured visions, mere shadows of an archetype the AI is trained to locate. Nonetheless, they seem familiar, at times painterly and even referential: I couldn’t help but see Bacon, Richter, Turner and Dumas. This is especially true of the corpus “The Humans,” whose hallucinations—drawn from data sets that include eyes, pornography, cocaine and licking—yield luscious, if a bit jagged, sweeps of Baconian hips (Porn) and simulated chiaroscuro (A Man). Paglen’s AIs also turn introspective in these hallucinations. The Futurist shuffle of Octopus—an anomalous animal that evolved a camera-type eye independently of our own—reminds us of the biological precedent to machinic vision. In the corpus “Eye-Machines,” AIs are trained to see post-human landscapes, like factories or war zones, dominated by artificial intelligence. The result is A Prison Without Guards (2017), an empty corridor of staggered gray fields that serves as both self-portrait and premonition.
Highway of Death (Corpus: The Aftermath of the First Smart War) Adversarially Evolved Hallucination, 2017, dye sublimation metal print, 32 x 40 inches. Edition 1 of 5. Courtesy of the artist and Metro Pictures, New York.
Given the context of their genesis, the beauty in Paglen’s images is unsettling. Like Fanon’s face-print, these are the condensations of ideas, power, histories and futures, conjured as “ghosts.” Yet in the visual collapse of an entire corpus, these phantom images move beyond the uncanny or creepy to bridge poignant connections between machine and man. Take, for instance, Highway of Death (2017), a hallucination trained on the apocalyptic wake of the Gulf War. Desert Storm was the first “smart” war to fully implement stealth technologies such as GPS, advanced surveillance and laser-guided bombs—the predecessors of drone warfare and Google Maps. Images of Kuwaiti oil fires, desertification and birth defects informed this AI’s vision: a bleak landscape of desert, oil and crimson that could stand in for any number of contemporary battlefields. Suspended between immediate recognition and fantasy, the work is reminiscent of Werner Herzog’s own science-fictive documentation of the ravaged post-war oil fields, Lessons of Darkness (1992)—an apt title for many of these deep fever dreams. Similarly, hallucinations from the “Monsters of Capitalism” corpus were raised on allegorical “monsters” of industry, such as Mary Shelley’s Frankenstein and zombie formalism, to create the unnerving Marxian mask, Vampire (2017). The hovering red teeth of Venus Flytrap (2017) were fed thousands of images of “American Predators,” including killer drones and Mark Zuckerberg—a corpus that clearly reveals the close-knit tapestry of big corporations and covert programs that fund and propel AI technology.
This new development in vision, which Paglen finds “more significant than the invention of photography,” is morally shortsighted and cannot be entrusted to Silicon Valley and military generals alone. The categories and training sets they employ, under the flagrant banner of machine-machine objectivity no less, only embolden the instruments of capital and power they serve. From Fallujah to Kabul, Facebook’s “DeepMask” to Apple’s latest Face ID, these technologies are being used on military and market targets alike. “We must begin to understand these changes,” Paglen insists, “if we are to challenge the exceptional forms of power flowing through the invisible visual culture that we find ourselves enmeshed within.” It is crucial to recognize not only how these advances are transforming our world, but how they are now recognizing you.