ImageNet Roulette was part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data. ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet (developed at Princeton and Stanford Universities in 2009), one…
I conceptualized this exhibition, “Training Humans,” with Kate Crawford to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems. We weren’t interested in either the hyped, marketing version of AI or the tales of dystopian robot futures. We wanted to engage with the materiality of AI, and to…
This is an article I co-authored with my friend and collaborator Kate Crawford, who directs the AI Now Institute at NYU. In the article, we examine some of the bad assumptions and bad politics built into the architecture of the training data used in AI systems.