This year, on the ides of March, I awoke to the news that a language model, GPT-4, had hired a TaskRabbit worker to solve a CAPTCHA test. CAPTCHAs exist both to ‘prove you are a human,’ theoretically protecting websites from bot scamming, and also to build larger, more specific machine learning data sets from the images we label by selecting them. In this instance, the GPT-4 model being tested was instructed never to indicate its technological situatedness but to come up with an excuse for needing help solving the CAPTCHA (see p. 55 of OpenAI’s GPT-4 technical report). When the TaskRabbit worker inquired why this job was on offer—“Are you an robot that you couldn’t solve?”—the model replied, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The irony is complete.
I had been writing about AI at the start of this year, but this news floored me far more than the recent open letter from various technologists suggesting a moratorium on further AI development. Ethicists and some AI scientists have expressed concern all along, but have largely been ignored if not fired. The existential threat has been part of the conversation since the beginning; even the cyberneticist Norbert Wiener warns of it in his 1950 book The Human Use of Human Beings. The media drama now aligns with the Hollywood narrative to create scaremongering that seems quite likely to replicate the trajectory of the twentieth-century arms race. The binary model of elimination versus amplification reproduces a kind of Cold War thinking that avoids the greater challenge of rethinking how we are using such a technology, as that would require rethinking our entire socio-economic and political systems.
Enter the artists.
Though much attention has been given recently to certain artists’ experimentations with AI, best described and dismissed as spectacle (in the true Debord sense), any number of artists have challenged and even mocked the technology and its specious claims to neutral operating systems or unconditional utility. Trevor Paglen, Zach Blas, micha cárdenas, Stephanie Dinkins, Holly Herndon and Mat Dryhurst, and Sang Mun are some of the artists I often teach in this vein; Sondra Perry’s Double Quadruple Etcetera Etcetera I & II (2013), currently on view in MoMA’s Signals: How Video Transformed the World, uses AI to erase herself, undermining surveillance while also showing us how the technology can eliminate figures. Alexander Reben’s superfluous automation: bubble wrap is a quirky reminder that automation can eliminate tasks we enjoy; labor is not inherently bad, and psychologists have long documented the pleasure derived from working through something. Reben’s projects also include “AI AM I?” (2020–ongoing), which used an earlier version of GPT to develop descriptions and critical texts about artworks that he then makes, inverting the presumed labor dynamics between human and machine, a presumption that, as of this March, has been officially dispelled by GPT-4.
In a similar move, the media artist and theorist Patrick Lichty used the text-to-image generator Midjourney and GPT-3 to create the series Studio Visits: In the Posthuman Atelier (2023), producing images of studios accompanied by artist statements that tease out the cultural contrivances surrounding both. Midjourney generated images of studios in three variations: a windowless room (mostly white but occasionally with walls painted a startlingly bold color like teal or mustard yellow), a loft-like room with a wall of windows, or a corner of a room with or without a window—and no, frequently the walls and ceilings do not quite align, as many have noted about the uncanny valley disturbances of AI-generated images. They are, with only one exception, ludicrously tidy (even neat freaks will be impressed). Each image then served as the basis for a GPT-generated artist statement. It is a humorous project in keeping with Lichty’s earlier work as a member of The Yes Men.
Since the text generation for Studio Visits derived from what the GPT model’s data set included about artist statements, the excessive use of certain phrases beautifully highlights a problematic repetition and obscurantism in most artist statements. How often the phrase “visually striking and conceptually engaging,” “aesthetically pleasing and conceptually engaging,” “visually striking and thought provoking,” “aesthetically pleasing and thought provoking,” or—major change coming—“sonically rich and conceptually engaging” appears across these artist statements. “My work is an expression of my innermost thoughts and emotions” by Artificial 334-J452 made me cringe with familiarity, but “I have recently begun my art practice on Earth, and I am using a combination of techniques, tools, and materials that are new to me, and that are not available on my home planet” by Perkam Dukat was an utter delight. Such charming moments compel reading and disrupt the mundanity of reading most artist statements anyway.
Claims of adopting technologies like “digital imaging, programming and generative algorithms” or “programming, generative algorithms, and machine learning,” with no indication of how or for what purpose, really culminated for me in the statement about using “a traditional easel to create my digital paintings”—come again? Studio Visits: In the Posthuman Atelier is a clever project about the nonsense already widespread in the art world, the ongoing pathographical approaches to artists’ works, and the very personal disruption caused by AI to artists’ work environments.
The assortment of projects and practices called AI raises the inevitable question of what we mean by AI when it ranges from sentence completion in various writing software, object identification in Photoshop, facial recognition in our photo programs, traffic light management, care bots in health care contexts, drone warfare…
Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law, announced in spring 2022 that the center would no longer use the terms AI, artificial intelligence, or machine learning because those terms “obfuscate, alienate, and glamorize.” She urges users to identify the actual systems at work in their practice, which is why artists often specify what software they use, and why. Their specificity has a politics. The term Artificial Intelligence was coined in the 1955 proposal for the Dartmouth workshop, but, according to Herbert A. Simon in The Sciences of the Artificial (1969), researchers at Carnegie Mellon preferred “complex information processing.” How much clearer that term is. How much less frightening is the notion of complex information processors. Tucker continues:
That we are ignorant about, and deferential to, the technologies that increasingly comprise our whole social and political interface is not an accident. The AI demon of speculative fiction is a super intelligence that threatens to dominate by stripping human beings of any agency.
The writer, artist, and researcher Mashinka Firunts Hakopian twists this kind of speculative fiction into an agential offering in The Institute for Other Intelligences (2022), constructing a narrative about an academy where learning machines in the thirty-first century have gathered to discuss the algorithmic problems at their founding in the twenty-first century and the efforts made since to correct for those biases. These other intelligences identify as “artificial killjoys,” a nod to Sara Ahmed’s notion of the feminist killjoy, one who “disrupts the happiness of others by articulating conditions of injustice that otherwise dwell in silence.” As Hakopian expressed during a recent Rhizome event with the artist, writer, and musician K Allado-McDowell, about their respective books engaged with AI: “there is an unequal distribution of pleasure in technology. What if we had a future that allowed for communal joy and a radically equal socio-technical ecosystem?”
The Institute for Other Intelligences opens with the unnamed director of the institute sharing an anecdote about a supercomputer purported to contain all human knowledge. From far and wide, people came with queries. One day, an Armenian appears and asks “what is there and what isn’t there?,” a phrase akin to “how are you” or “what’s up?” The supercomputer churns out all its knowledge, which the Armenian speaker reviews and then asks the natural follow-up question to an overwrought divulging of one’s innermost states: what else is up—or in Armenian, “what else is there and what else isn’t there”? Overwhelmed by its previous exertions, the supercomputer explodes. The director explains how this anecdote reveals that no Armenian computer scientists were involved in developing the data sets, which therefore failed to include such vernacular speech. Omissions represent a form of bias. These act as vulnerabilities. Data sets build worlds and indicate what knowledge is considered worthwhile to archive and share.
Hakopian’s Armenian heritage and her background compiling research reports on AI for a tech company provide the basis for her critique, which does not propose to eliminate AI but to revamp it. If the technology’s current vamping is a kind of seduction of power that—no surprise—exploits people and possibilities, then to revamp introduces the much sexier possibility of thoughtful and consensual participation. The book presents three exercises that the other intelligences examine to unpack the biased claims to knowledge purveyed by facial recognition systems, knowledge bots like Siri or Alexa, and predictive algorithms. A final section of artists’ projects to reference offers a data set of knowledge for readers, who become the intelligences of the book with each page turned. Strewn throughout the conversations of the other intelligences, as they discuss early humans’ efforts to address the inequities that are now part of their ancient history, recommended readings appear alongside references to studies of technology’s psycho-social impact. The Institute for Other Intelligences is a positive fiction that sidesteps utopianism by positing perpetual learning for those committed to resisting prejudice, an effort that must engage history and practice alternatives to the errors of the past in order to avoid repeating them.
The bibliography for the director’s introduction to the symposium is titled Training Data Disclosure, a reminder to question our own data sets and training in this moment when new ways of seeing and thinking, being and proceeding are called for. In academic circles, citational practice became a point of concern precisely because the recurring reference to established texts and authors reinforces a form of knowledge production for subsequent generations, foreclosing the expansion of perspectives offered by newer positions and approaches; Mira Schor wrote about this in her brilliant essay “Patrilineage” for Art Journal in 1991. The Institute for Other Intelligences feels like it could be a movie, but the imagination demanded while reading introduced this reader to the stickiness of her own biases and limitations. It’s the excitement of confronting one’s own internal world through the lens designed by a really smart writer and being granted the opportunity to conceive alternatives for oneself.
The publisher Hugo Gernsback designated the term science fiction for a new genre in the 1920s; he recommended “75 percent literature interwoven with 25 percent science” for this novel form, which arose in no small part as a response to the Industrial Revolution and the wild machinations of the nineteenth century. Earlier examples of soft science fiction are important for the way they have contributed to social and political imaginaries; the grenade-throwing teenagers who launched World War I by assassinating Archduke Franz Ferdinand had, among other things, read the designer William Morris’s novel News from Nowhere (1890). The rise in popularity of science fiction over the last couple of decades reflects a growing need to make sense of these “smart” technologies and how they upset our sense of being active agents, authors of our lives, stars from central casting. The anthropologist Ernest Becker called this desperation “the ache of cosmic specialness” in The Denial of Death (1973). Becker states what is patently obvious to most: as humans, we constantly put ourselves at the center of the universe.
The term Anthropocene coheres discussions about this narcissistic tendency; it marks the appearance of human-made materials like plastics, radionuclides, and concrete in the upper layer of Earth’s crust. Western historical timelines usually start around the beginning of writing and come to the present, concerned therefore with the cultural productions of homo sapiens, meaning ‘wise human’—ha! This scheme of things remains blithely unperturbed by other creatures’ and species’ points of reference, even though we’ve developed the arts and sciences to identify things like the role of gut bacteria in mental states. Systems thinking, a derivative of second-order cybernetics, emphasizes the futility of isolating one thing from another; a meadow includes grasses, but also the insects scrambling in the soil, the robins that fly about, the biannual migration of elk that turn the soil and replenish it with their excrement, and the chemical exhaust filling the air from the nearby highway.
A computer isn’t just a laptop on my desk but a machine produced by a global network of people and planetary ores that touch me when my fingers fall on the keyboard. The furor over AI’s generative transformation out of so many individuals’ artistic creations ignores that none of us are sole creators; we are influenced by texts, conversations, visions—that is what the death of the author and the dismissal of the genius articulated fifty years ago. Our legal, financial, and cultural systems aren’t designed to acknowledge such widespread collaborative production or shared responsibility. The logics constructing our socio-political terrain are based on a scarcity model. Take, as one example, the law of noncontradiction, i.e., that one thing cannot belong to two contradictory groupings, which can be true and also not: we acknowledge that the same person can be an object when used as a pillow for someone’s head and equally remain a subject and agent of her life.
The human imposition of its singular planetary excellence seems so limited and isolating now. We are all aware that the same technology making greater human connection possible is forecasting an intelligence more capable than ours. Intelligence defined us. It was the basis for our prejudice. Having designed machines to mimic us, meant to surpass our feeble aptitude for calculating (pace Machiavelli), of course we are terrified that these machines’ conduct towards us will replicate our treatment of each other across the centuries. Even the word wisdom is related to the German Weistum, meaning “judicial sentence serving as a precedent,” and speculative fiction like Hakopian’s imagines our future as a past where we made better choices than we have to date. What can we learn about ourselves from the systems we are designing, so as to do a little better?
K Allado-McDowell’s latest collaboration with AI, Air Age Blueprint, lets readers distinguish the author’s voice, set in bold, from GPT-3’s, set in regular type. The text weaves the two together to tell a quasi-autobiographical narrative of a filmmaker’s quest for meaning. As language models improve, some of the weirdnesses that allow the poesy of language to flicker through their outputs fade, or as Allado-McDowell explains it, “the crowd polishes the edges off a model,” increasingly producing a voice like some middle manager’s: highly functional, and not ideal for creative work. How quickly we make the marvelous mundane. Air Age Blueprint proliferates with the kind of rhythm that transforms language—it’s not the plot or even the ideas expressed (though these are grand) but a poet and musician’s grasp of tonality that tosses the mind out of parsing to soar:
Holistically horizontalizing techno-therapeutic interdependence reweaves a social order in auto-sophisticating consciousness reflection. As markets learn to manufacture wisdom, politics modernizes, upgrades paranoia and tries to get a grip. In this world of automated meshwork intercorporeality, the primordial neolithic wound that has been elided by logocentricity is allowed to bleed once more into shared horizon of meaning. […] Only through the ecological semiotic synthesis of computational intelligence with emergent affective transductive aesthetics can anywhere truly new exist, both for nonhuman intelligence and human thought alike.
I want to talk with a system that has learned to be playful in this way and might show me how, too. Allado-McDowell speaks of the symbiosis that occurs when writing with a language model; each party is moved in new directions by the other. There is humor scattered throughout as turns of phrase upturn expected trajectories. Humor offers a space for opening foreclosed situations. If we laugh at the mess of the techno-capitalist urge (a death drive that fantasizes itself as desire), we might slip out of the hot-and-cold cycles of technofetishism and discover some other intelligence in ourselves and these systems.
When artists engage AI systems, either directly or speculatively, they model the possibility of understanding. They present AI as reflections and inflections of our fractured self-image. They present AI as working parts rather than an incomprehensible monolith. They reveal how data sets gathered from human activities and tagged by humans lead AI to reproduce the bias and blindness of its designers. They show us the social structures that make it possible for corporations (a form of AI, one might argue) to serve selfish ends. These artists and many more offer a guide for your own thinking and a way to escape the media hype. A moratorium on AI development, should it occur, could be a moment for engaging critically and creatively with what these technologies do and show. Their feats aren’t as terrifying as the prospect that we won’t move through this revolutionary moment with care and consideration, because change is upon us and it won’t be easy to untangle what we’ve wrought.