ImageNet Roulette was part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data. ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet (developed at Princeton and Stanford Universities in 2009), one… Continue reading “ImageNet Roulette”
For a 2019 commission in the Barbican’s Curve Gallery in London, I took a close look at the most widely used “training set” in AI – ImageNet, a database of over 14 million images organized into more than twenty thousand categories. The installation was made out of approximately 30,000 individually printed photographs, showing the precarious relationships… Continue reading “From ‘Apple’ to ‘Anomaly’ (Pictures and Labels)”
I conceptualized this exhibition with Kate Crawford to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems. We were interested in neither the hyped, marketing version of AI nor the tales of dystopian robot futures. We wanted to engage with the materiality of AI, and to… Continue reading “Training Humans”
This is an article I co-authored with my friend and collaborator Kate Crawford, who directs the AI Now Institute at NYU. In the article, we take a look at some of the bad assumptions and bad politics built into the architecture of the training data used in AI systems.