One of the most common applications of Artificial Intelligence is automatic object recognition and image captioning. When you upload an image to Facebook or another social media platform, powerful AI algorithms can recognize the identities of the people in it, as well as the objects, the products, and even the places depicted. AIs are taught how to recognize objects by giving them “training sets.”

Vampire (Corpus: Monsters of Capitalism)
Adversarially Evolved Hallucination, 2017
Dye sublimation print
60 × 48 in.

A training set consists of thousands or even millions of images organized into pre-sorted “classes” that correspond to each of the kinds of objects the AI will eventually be able to distinguish. For example, if you want to train an AI to recognize all the objects in a kitchen, you might give it a thousand pictures each of a fork, a spoon, a knife, a countertop, a frying pan, a pot, and so on. Once that AI is trained, you can give it a picture of a fork it has never seen before, and it should be able to recognize it as a fork. For this body of work, I created massive training sets based on literature, philosophy, folk wisdom, history, and other “irrational” things, then taught the AIs to recognize things from those corpora. Some examples include: “Interpretations of Dreams,” an AI that has been trained to see – and only see – symbols from Freudian psychoanalysis; “Omens and Portents,” an AI that can only see things like comets, eclipses, and other signs of bad things to come; and “American Predators,” an AI that sees various predatory animals, plants, and humans indigenous to the United States, as well as military hardware like Predator drones and stealth bombers.
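The train-then-recognize workflow described above can be sketched in a few lines of code. This is a deliberately toy illustration, not the actual system: the “images” are random feature vectors standing in for photographs, the two classes (“fork” and “spoon”) are invented examples, and the classifier is a simple nearest-centroid rule rather than the deep neural networks used in practice.

```python
import numpy as np

# Toy stand-ins for image feature vectors; a real training set would
# hold thousands of photographs per pre-sorted class.
rng = np.random.default_rng(0)
classes = {
    "fork":  rng.normal(loc=0.0, scale=0.5, size=(100, 8)),
    "spoon": rng.normal(loc=3.0, scale=0.5, size=(100, 8)),
}

# "Training": summarize each pre-sorted class as a single centroid.
centroids = {name: feats.mean(axis=0) for name, feats in classes.items()}

def recognize(image_features):
    """Label an unseen example by its nearest class centroid."""
    return min(centroids,
               key=lambda name: np.linalg.norm(image_features - centroids[name]))

# A "fork" the model never saw during training is still recognized.
new_fork = rng.normal(loc=0.0, scale=0.5, size=8)
print(recognize(new_fork))  # prints "fork"
```

The key point the sketch preserves is that the classes are fixed in advance: the trained model can only ever answer with a label from its training set, which is exactly what makes a corpus built from omens or dream symbols so consequential.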

Once an AI has been “trained” to see all the objects in a particular corpus, I try to get it to “hallucinate” an image of something it’s been trained to see. This is done by creating a second AI network whose job is to draw shapes. The two AIs then play a little game. The “drawing” AI (also called a “Generator”) tries to draw pictures that will fool the AI that’s been trained to “see,” or to discriminate between, particular objects (this is the AI we trained; we can call it the “Discriminator”). The two AIs go back and forth thousands or millions of times, until the Generator has learned how to make images that can reliably “fool” the Discriminator. The images that come out of this process are called Hallucinations. Together, the AIs have evolved an image that is entirely synthetic and has no referent in reality, but that the pair of AIs believe is an example of something they’ve been trained to see.
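The back-and-forth game between the two AIs can be sketched as a toy numerical experiment. Everything here is an illustrative assumption, not the actual networks: the “images” are 2D points, the Discriminator is a one-layer logistic model, and the Generator is a single point nudged by gradient steps to raise the Discriminator’s score, while the Discriminator keeps retraining to tell the real points from the fake one.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real" examples the Discriminator learns to see: points near (2, 2).
real = rng.normal(loc=2.0, scale=0.1, size=(200, 2))

# Discriminator: logistic model, score = sigmoid(w·x + b), 1 = "real".
w, b = np.zeros(2), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Generator's current "drawing": a single point, starting nowhere near reality.
g = np.array([-2.0, -2.0])

for step in range(2000):
    # Discriminator step: push a real sample toward 1, the fake toward 0.
    for x, label in [(real[rng.integers(len(real))], 1.0), (g, 0.0)]:
        p = sigmoid(w @ x + b)
        w += 0.1 * (label - p) * x
        b += 0.1 * (label - p)
    # Generator step: nudge the drawing to raise the Discriminator's score
    # (gradient of sigmoid(w·g + b) with respect to g).
    p = sigmoid(w @ g + b)
    g += 0.1 * p * (1.0 - p) * w

print(g)  # the fake has drifted toward the region of "real" examples
```

After thousands of rounds of this chase, the Generator’s point ends up in territory the Discriminator scores as real, even though it was never copied from any real example – a miniature version of how a synthetic image with no referent in reality can still read, to both networks, as the thing they were trained to see.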

Installation view of A Study of Invisible Images at Metro Pictures, New York, 2017.