The Trolls, 2019 Silkscreen print 58 × 58 in. Detail
As part of broader efforts to manage cyberbullying and online trolls, AI researchers are attempting to create algorithms that automatically detect what kinds of online content constitute “trolling.” This piece is made from a dataset designed to teach AI algorithms which language patterns are typical of online trolling. Viewers should be warned that the piece contains some exceptionally offensive content culled from online forums.
One of the most common applications of Artificial Intelligence is automatic object recognition and image captioning. When you upload an image to Facebook or other social media, powerful Artificial Intelligence algorithms can recognize the identities of the people in it, as well as the objects, products, and even places depicted. AIs are taught how to recognize objects by giving them “training sets.”
Vampire (Corpus: Monsters of Capitalism) Adversarially Evolved Hallucination, 2017 Dye sublimation print 60 × 48 in.
A training set will consist of thousands or even millions of images organized into pre-sorted “classes” that correspond to each of the kinds of objects the AI will eventually be able to distinguish. For example, if you want to train an AI to recognize all the objects in a kitchen, you might give it a thousand pictures of a fork, a spoon, a knife, a countertop, a frying pan, a pot, and so on. Once that AI is trained, you can give it a picture of a fork it has never seen before and it should be able to recognize it as a fork. For this body of work, I created massive training sets based on literature, philosophy, folk wisdom, history, and other “irrational” things, then taught the AIs to recognize things from those corpora. Some examples include: “Interpretations of Dreams,” an AI that has been trained to see – and only see – symbols from Freudian psychoanalysis; “Omens and Portents,” an AI that can only see things like comets, eclipses, and other signs of bad things to come; and “American Predators,” an AI that sees various predatory animals, plants, and humans indigenous to the United States, as well as military hardware like predator drones and stealth bombers.
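As a rough illustration, training such a recognizer might look like the minimal Python sketch below, using PyTorch and torchvision; the “corpus/” folder of pre-sorted class subdirectories, the network choice, and the training schedule are illustrative assumptions, not the exact setup used for these works.

```python
# Minimal sketch: train an image classifier on a custom corpus.
# Assumes images are pre-sorted into class subdirectories, e.g.
# corpus/fork/*.jpg, corpus/spoon/*.jpg (hypothetical paths).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subdirectory name becomes one "class" in the training set.
dataset = datasets.ImageFolder("corpus/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Fine-tune a standard pretrained network to tell the corpus classes apart.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Once trained, the model should label a fork it has never seen before.
```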
Highway of Death (Corpus: The Aftermath of the First Smart War) Adversarially Evolved Hallucination, 2017
A Man (Corpus: The Humans) Adversarially Evolved Hallucination, 2017
Porn (Corpus: The Humans) Adversarially Evolved Hallucination, 2017
A War Without Soldiers (Corpus: Eye Machine) Adversarially Evolved Hallucination, 2017
A Prison Without Guards (Corpus: Eye Machine) Adversarially Evolved Hallucination, 2017
Comet (Corpus: Omens and Portents) Adversarially Evolved Hallucination, 2017
Rainbow (Corpus: Omens and Portents) Adversarially Evolved Hallucination, 2017
Angel (Corpus: Spheres of Heaven) Adversarially Evolved Hallucination, 2017
Once an AI has been “trained” to see all the objects in a particular corpus, I try to get it to “hallucinate” an image of something it’s been trained to see. This is done by creating a second AI network whose job is to draw shapes. The two AIs then play a little game. The “drawing” AI (also called a “Generator”) tries to draw pictures that will fool the AI that has been trained to “see,” or to discriminate between particular objects (this is the AI we trained; we can call it the “Discriminator”). The two AIs go back and forth thousands or millions of times, until the Generator has learned how to make images that reliably “fool” the Discriminator. The images that come out of this process are called Hallucinations. Together, the AIs have evolved an image that is entirely synthetic and has no referent in reality, but that both AIs believe is an example of something they’ve been trained to see.
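This back-and-forth game is what machine learning researchers call a generative adversarial network (GAN). The sketch below shows one training step in skeletal form; the network sizes, learning rates, and flattened 64 × 64 image format are illustrative assumptions, not the actual architecture behind these works.

```python
# Minimal sketch of one step of the Generator/Discriminator "game" (a GAN).
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise the Generator draws from

# The "drawing" AI: maps noise to a flattened 64x64 RGB image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
)

# The "seeing" AI: scores how real an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images):
    # real_images: a (batch, 3*64*64) tensor drawn from the training corpus.
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator's turn: learn to tell real images from the fakes.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_images), torch.ones(batch, 1)) +
              bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator's turn: learn to draw images the Discriminator calls real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Repeated thousands or millions of times, the Generator's outputs become
# "hallucinations": synthetic images the Discriminator accepts as members
# of the classes it was trained to see.
```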
Installation view of A Study of Invisible Images at Metro Pictures, New York, 2017.
This video installation is composed of images from two sources. The photographic images in the video are parts of training libraries used to teach artificial intelligence networks how to recognize objects, faces, gestures, relationships, emotions, and much more. They are images designed to teach machines “how to see.” The second kind of image in this video installation shows what a Deep Neural Network (an Artificial Intelligence architecture) is actually “seeing” when it ingests these images.
Detail of Behold these Glorious Times!, 2017 Video still Single channel color video projection 10 min
In order to “learn” how to recognize images, the AI breaks each image into hundreds of component parts and tries to put them back together. In the black-and-white grids and images in this video, we see the various ways the AI pulls the images apart in order to make sense of them. Overall, the installation shows images being used to teach an AI, alongside what the AI sees when it looks at those images and tries to make sense of them. The music for this piece, composed by Holly Herndon, was crafted in part using neural networks designed to synthesize voices and instruments, and from training libraries used in speech recognition and other auditory machine learning applications.
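Those component parts can be inspected directly: the sketch below pulls the intermediate “feature maps” out of the first convolutional layer of a standard pretrained network, which is roughly what the black-and-white grids in the video visualize. The model choice, the layer, and the input filename are assumptions for illustration.

```python
# Minimal sketch: look at the feature maps a convolutional layer produces
# when a network pulls an image apart. Uses PyTorch/torchvision.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture the output of the first convolutional layer with a forward hook.
feature_maps = {}
def hook(module, inputs, output):
    feature_maps["conv1"] = output.detach()
model.conv1.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# "photo.jpg" is a hypothetical input file.
image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(image)

# Each channel is one "component part" of the image: an edge, texture,
# or blob detector, much like the grids shown in the video.
maps = feature_maps["conv1"][0]  # shape: (64, 112, 112)
print(maps.shape[0], "feature maps extracted")
```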
Behold these Glorious Times!, 2017 Single channel color video projection 10 min Installation view at Metro Pictures, New York, 2017.
“Fanon” (Even the Dead Are Not Safe) Eigenface, 2017 Dye sublimation print 48 × 48 in.
A standard technique in facial recognition software is to use an algorithm to create a “faceprint” of a given person and to use that faceprint to match the person’s face against photos. To grossly oversimplify, if you want to teach an algorithm how to distinguish a particular person (say, Fanon) from a collection of other people, you need a big collection of photographs of people’s faces, with everyone’s face labeled. You then take all the faces of Fanon, align them so the eyes and mouth are in the same place, and average them together. Then you take all the other faces in the collection and average them together. If you subtract the average image of all the other people from the average of Fanon’s face, you end up with a “faceprint” for Fanon showing what distinguishes him from everyone else in the dataset. You can then use this faceprint to identify any future images of Fanon’s face that you might come across. These “portraits” translate those faceprints (which in their ‘native’ form are mathematical abstractions) into an image that human eyes can recognize as a face.
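The averaging and subtraction described above is simple arithmetic on pixel arrays. The sketch below shows it with NumPy; the array shapes and the random stand-in data are hypothetical, assuming the faces have already been aligned and converted to grayscale.

```python
# Minimal sketch of the "faceprint" arithmetic: average the target's faces,
# average everyone else's, and subtract one from the other.
import numpy as np

def faceprint(target_faces, other_faces):
    target_mean = np.mean(target_faces, axis=0)  # average of Fanon's faces
    others_mean = np.mean(other_faces, axis=0)   # average of all other faces
    return target_mean - others_mean             # what makes Fanon distinct

# Hypothetical data: 50 aligned 128x128 images of the target, 500 of others.
target_faces = np.random.rand(50, 128, 128)
other_faces = np.random.rand(500, 128, 128)
fp = faceprint(target_faces, other_faces)

# To render the faceprint as a visible "portrait", rescale it to 0-255.
portrait = (255 * (fp - fp.min()) / (np.ptp(fp) + 1e-9)).astype(np.uint8)
```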
I’ve done a number of portraits of historical activists, artists, and philosophers in this way including portraits of Simone de Beauvoir, Samuel Beckett, and Simone Weil.
Timothy H. O’Sullivan Shoshone Falls, Snake River, Idaho, View across the Top of the Falls, 1874 Albumen silver print Smithsonian American Art Museum.
This is a photograph of an iconic location in the history of Western landscape photography. The 19th-century photographer Timothy O’Sullivan famously shot these falls on a survey mission for the US Department of War. His photographs of this waterfall are among his best-known works, and among the most recognizable images in Western landscape photography in general. My image is a close-up of the falls, with two computer vision algorithms overlaid on it. One algorithm is looking for points that imply the existence of underlying lines, a computer vision technique used in self-driving cars and in robotics more generally. The second algorithm is finding shapes in the waterfall that it believes are faces.
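Both overlays correspond to standard computer vision routines. The sketch below reproduces the general idea with OpenCV: edge detection feeding a Hough line transform (one common way to find points that imply lines), and a Haar-cascade face detector, which will readily find face-like shapes in turbulent water. The filenames and parameter values are assumptions, not the algorithms actually used for the piece.

```python
# Minimal sketch of the two overlays: line evidence and face detection.
import cv2
import numpy as np

image = cv2.imread("shoshone_falls.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 1. Points implying underlying lines: Canny edges + probabilistic Hough
#    transform, a staple of robotics and self-driving perception.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 1)

# 2. Shapes the algorithm believes are faces: a Haar-cascade detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for x, y, w, h in cascade.detectMultiScale(gray, 1.1, 5):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 1)

cv2.imwrite("falls_overlaid.jpg", image)
```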
These two pieces are made out of hundreds of portraits of artist Hito Steyerl and sound artist and composer Holly Herndon that have been analyzed by various facial-analysis algorithms. Below each picture is the output of algorithms attempting to detect their age, gender, and emotional state. Other algorithms attempt to determine whether they are wearing glasses, are smiling, or have a beard.
Machine Readable Hito, 2017 Adhesive wall material. Installation view at Metro Pictures, New York, 2017.
Machine Readable Holly, 2018 Adhesive wall material. Installation view at Museo Tamayo Arte Contemporáneo, Mexico City, 2018.
One of the earliest tasks that neural networks and Artificial Intelligence could do reliably well was recognizing written numbers. These sorts of number-recognition systems are ubiquitous, as anyone who has ever had an ATM automatically read the handwritten numbers on a deposited check knows. Megalith is made out of nearly 70,000 handwritten digits that represent one of the original collections of images that number-recognition systems were built upon. I think of this piece as a kind of Rosetta stone, an interface between two languages: that of written human numbers, and the language of artificial intelligence. But in this piece, the “translation” is an invisible, machinic interpretation of these numbers, inaccessible to our human senses.
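A 70,000-image collection of handwritten digits of this kind remains a standard first exercise in machine learning, and a classifier for it can be sketched in a few lines. The network shape and training settings below are illustrative, not tied to this artwork.

```python
# Minimal sketch: teach a small network to read handwritten digits,
# using the classic 70,000-image MNIST collection via torchvision.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

train = datasets.MNIST("data/", train=True, download=True,
                       transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)

# A small fully connected network: 28x28 pixels in, 10 digit classes out.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A single pass over the data already reads most digits correctly.
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```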
“It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth.” So begins the last paragraph of Charles Darwin’s On the Origin of Species. For Darwin, life is not reducible to bits of genetic code or DNA, but is something in constant flux, where mutation is everything and fixed categories are exceptionally misleading. There is a certain irony in the title of Darwin’s book: his ideas strongly suggest that the notion of species (as a stable category) is a poor way to think about life forms. Moreover, the notion that particular life forms have clear origins in the entangled mass of mutation and flux is similarly untenable.
The Last Pictures (An Entangled Bank), 2012 C-Print 48 × 60 in.
There is more information on The Last Pictures project here.
There is also a publication about this project, which you can find here.
In 1879 in Baden, Germany, Father Johann Martin Schleyer created a universal language at the behest of God, who he said had spoken to him in a dream. He called this new language Volapük, or “World Speak.” Volapük was a simple language meant to give Catholic readers from different linguistic backgrounds an easier time reading aloud from the Bible.
The Last Pictures (The Dictionary of Volapük), 2012 C-Print 48 × 60 in.
Within ten years nearly one million people were conversing in the language. Volapük-specific publications were widely available, textbooks about the language were published in twenty-five languages, and Volapük societies proliferated across Europe. Yet Volapük’s popularity as a universal language was eclipsed by the rise of Esperanto in the early twentieth century. Mocking expressions even entered Esperanto: “that sounds like Greek to me” became “that sounds like Volapük to me,” and “Volapukaĵo” became a synonym for nonsense.
In 1805, Antonio de Narbona led an expedition of Spanish soldiers, accompanied by allied Native Americans, into Canyon de Chelly in the Navajo Nation to attack the Navajo tribe. When the Navajo learned of Narbona’s impending arrival, they scaled the canyon’s vertical cliffs, finding refuge in a cave where the Spanish could not reach them. Narbona’s men fired upward; bullets ricocheting off the walls of the cave took the lives of everyone inside. The cave is now known as “Massacre Cave.” Although the Spanish claimed to have killed ninety Navajo warriors in addition to twenty-five women and children, the Navajo recall that the dead were mostly women, children, and the elderly, as the men were away hunting during the Spanish invasion.
The Last Pictures (The Narbona Panel; Humans Seen Through a Predator Drone), 2012 Silver gelatin print 24 × 32 in.
The massacre is depicted in a Navajo pictograph in Canyon de Chelly. The image shows Spanish cavalry wearing flat-brimmed hats and long winter capes; soldiers on horseback carry muskets, followed by a priest wrapped in elaborate robes.