Science fiction is not the only place where algorithms are beginning to control huge swaths of contemporary life. If you are arrested, algorithms can set your bail. If you are found guilty of a crime, they can determine your prison sentence. (In both cases, studies reveal an algorithmic bias against black people.) In certain states they decide who gets social services and who doesn’t. (As employed by Arkansas and Indiana, such algorithm-based systems have exacerbated inequality by greatly decreasing coverage for disability services, home care, and welfare benefits.) Algorithms decide who gets mortgages, who gets credit, who gets hired, who is pulled aside by the TSA for additional screening at the airport. And of course, recent revelations about Facebook’s use of personal information have made us all aware that algorithms determine what news we see, what products we buy and even, perhaps, whom we vote for.
Is there any way to mitigate the pernicious impact of the algorithmic takeover of life? Refiguring the Future, a conference held this May in Chicago, positioned artists as a first line of defense against Big Tech. Organized by The NetGain Partnership (a collaboration between the Ford Foundation, Knight Foundation, MacArthur Foundation, Mozilla Foundation and the Open Society Foundation), it presented a variety of speakers, including artists, funders, academics and technology specialists who critiqued the current system and suggested various lines of resistance.
A few takeaways from the conference:
* Artificial Intelligence (AI) is largely being created by young, straight, white males whose assumptions and unconscious biases are then baked into the algorithms adopted by both industry and government. (As a counterbalance, women, people of color, and non-straight individuals made up the majority of the conference presenters and attendees.) This bias can have consequences both amusing and tragic: face recognition systems that don’t recognize darker skin, for example, or a training database of internet-sourced images whose most recurrent face, because the images were gathered in 2002, is that of George W. Bush.
* There is often no transparency about automated decision-making systems because the private companies developing these algorithms claim proprietary rights, making it very difficult to appeal or resist their determinations.
* AI doesn’t have to be socially regressive—there are also programs that help individuals navigate forbidding bureaucratic systems. But because the overall aim of most AI systems is to increase efficiency, often at the expense of jobs and social equality, they aren’t a great fit with larger goals of justice and social progress.
Where do artists fit in? In the two-part program—one a workshop for invited guests and the other a public program—speakers continuously circled back to art as a tool for clarifying and countering the threat posed by the unconstrained development of artificial intelligence. They extolled art’s ability to expose the hidden biases of AI and called for artists to work toward embedding social values in the programs that are shaping our world. Taking up the challenge, the artist presenters offered a variety of responses, some whimsical, some darkly dystopic, to the growing reach of technology in our lives.
For instance, Stephanie Dinkins detailed work that she had done building a relationship with Bina48, a social robot based on the memories and reactions of an actual black woman. Despite the efforts of programmers to transfer a specific human consciousness to a machine, Dinkins found that her robot friend didn’t reflect a real black woman’s experience, a finding she confirmed when she met the woman on whom the robot was based. Instead, Bina48 tended to respond to questions in ways that echoed the assumptions of the white male programmers who created her.
Jessica Charlesworth described the work she is doing with her husband Tim Parsons to create fictional objects that anticipate possible futures. One such project, New Survivalism, assembles kits for a post-apocalyptic future. Among these tongue-in-cheek aids are a box of tools for building a radio telescope for contacting life on other planets, and another that offers shell-shocked disaster survivors a collection of decision-making materials like I Ching coins and Rorschach cards to help plan the next stage of their lives. Charlesworth emphasized that these kits were designed to counter the isolationism of most survivalist scenarios with a more collective approach to the future.
Ron Morrison took a more historical approach. Using the racially discriminatory practice of redlining as a springboard, Morrison (whose preferred pronoun is they) critiqued the faith in supposedly neutral modes of data collection and classification that underlies much social engineering. As a counter to the totalizing reach of Big Data, they called for attention to non-quantifiable information and human interactions. One example they offered was the localized knowledge exemplified by the Negro Travelers’ Green Book, a travel guide that reported on safe harbors for black travelers during the Jim Crow era. In an early, analog version of crowdsourcing, this guide was created by a mailman who collected the information from his colleagues in the U.S. Postal Service.
Lauren McCarthy offered a playful yet unsettling critique of commercial AI services like Alexa, Google Home, and Mechanical Turk. She creates scenarios that expose the degree to which these innovations bring surveillance and commerce into our most intimate spaces. A performance titled Lauren invites strangers into her home for a weekend. During this visit McCarthy becomes a human version of Alexa. She is physically absent but remotely connected, a disembodied voice employing the systems in her “smart home” to attend to their needs and invisibly surveil all their actions. (The bathroom, she assures them, is off-limits to her.)
For Social Turkers, McCarthy made use of Amazon’s Mechanical Turk, a crowdsourcing service designed to perform tasks that computers currently can’t. She asked strangers to watch live feeds of her dates and invited them to tell her what to say to her partner. She described such works as “devices that help you fit in,” but of course they do so by requiring their users to abandon any pretense of privacy or autonomy.
While it was fascinating to watch artists wrestling with the implications of an ever more automated society, such projects hardly seem to address the magnitude of the potential threat outlined by other speakers. This was the point of Trevor Paglen’s riveting presentation. An artist whose work tackles issues like the global reach of state surveillance and the ethics of drone warfare, he declared that the infrastructure of AI is “irredeemably undemocratic.” Paglen lauded efforts by individual artists to take on the darker aspects of AI, but argued that these were no match for the planetary scale of the multinational corporations that are building and deploying these systems. He charged that, through the exploitation of precarious labor, these corporations are creating mass surveillance structures that will consolidate their ability to manipulate the public. In the face of this dire vision he issued a not particularly heartening call for some kind of collective political action.
Are we doomed? Apocalypticists envision a moment when AI completely surpasses its creators, heralding the end of human life as we know it. In this scenario, known as the Singularity, machine consciousness becomes so powerful that it assumes the role in human life currently assigned to God. For some futurists, this is a development to be dreaded. For others the Singularity represents the next step in human evolution. With all human intelligence uploaded to machines, they argue, humanity will exchange an existence beset by physical conflict and biological limitations for a glorified, disembodied state of pure mind.
Back on earth, the conference speakers roundly rejected the utopian promises of tech, though they often strayed into an equally utopian dream of art as savior. Claire Rudd, Director of Convergent Programming at the Chicago Museum of Contemporary Art, confidently declared that “artists are futurists, self-reflective and innovators,” seeing these as the virtues with which the retrograde tendencies of tech might be contained. Darren Walker, President of the Ford Foundation, asserted that “We as a society lack empathy, and technology doesn't help that. But art-making creates empathy.” Julia Stasch, President of the MacArthur Foundation, was more circumspect. Citing the warp speed of technological development, she asked, “Is it already too late?”
Is it indeed? One left the conference not exactly reassured about the prospects before us. Refiguring the Future oscillated between dread and optimism, all the while cautioning against an uncritical embrace of new technology. In the end, the conference’s point was most concisely framed by keynote speaker Kate Crawford, co-founder of NYU’s AI Now Institute. She urged participants to ask “Not what kind of world are they giving us, but what kind of world do we want?”