This video installation is composed of images from two sources. The photographic images in the video are drawn from training libraries used to teach artificial intelligence networks how to recognize objects, faces, gestures, relationships, emotions, and much more. They are images designed to teach machines “how to see.” The second kind of image in this video installation shows what a Deep Neural Network (an artificial intelligence architecture) is actually “seeing” when it ingests these images.
In order to “learn” how to recognize images, the AI breaks each image into hundreds of component parts and tries to put them back together. In the black-and-white grids and images in this video, we see the various ways the AI pulls images apart as it tries to make sense of them. Overall, the installation presents images being used to teach AI, alongside what the AI “sees” when it looks at those images and tries to make sense of them. The music for this piece, composed by Holly Herndon, was crafted in part using neural networks designed to synthesize voices and instruments, and in part from training libraries used in speech recognition and other auditory machine learning applications.