When I visited the Victoria and Albert Museum in London in early June, a fabulous drag cabaret was in full swing. Across seven small screens and a large wall projection, a rotating cast of performers in an array of bold looks danced and lip-synced their hearts out to banger after banger. Highlights included Freedom! ’90 by George Michael, Five Years by David Bowie and Beyoncé’s Sweet Dreams.
Then the whole thing started again. And again. But this wasn’t just a video installation running on a loop: it was an elaborately engineered deepfake. Between each song, the performers underwent a kind of metamorphosis, melting down into amorphous masses of pixels and then re-forming with new faces and figures. For these AI-generated drag kings and queens, life really is a cabaret.
The Zizi Show, by the London-based artist Jake Elwes, is the centrepiece of the V&A’s new suite of galleries devoted to photography. It’s a topical display, given the current buzz around artificial intelligence. There are now publicly available AI tools capable of producing not just drag cabarets but also award-winning photos, film scripts and newspaper articles (not this one, I promise). The latest version of the text generator ChatGPT recently proved capable of passing a legal bar exam. Meanwhile, corporations and states are relying on artificial intelligence in fields as varied as medical diagnostics, the automotive industry and drone warfare.
Techno-utopians have welcomed such advances, but many others are warier. Concerns range from the hyperspecific to the existential: from practical issues such as disinformation, privacy and consent to Black Mirror-esque threats of machines replacing humans.
Against this backdrop, artists have started using AI critically in ways that Silicon Valley evangelists would never expect. While AI systems are widely figured as “black boxes” whose internal operations are unknowable even to their creators, these artists turned DIY programmers have been making works that help us to think more clearly about the uses and limits of a technology that sometimes appears worryingly boundless.
At the Science Gallery in London, the AI: Who’s Looking After Me? exhibition presents a range of projects that explore how developments in the technology are already affecting our lives. “AI is not a new thing,” the gallery’s director, Siddharth Khajuria, tells me. “All the hype around things like ChatGPT means we aren’t necessarily looking at how it’s currently being used in healthcare, border control, dating applications.”
Looking for Love (2023), an installation by a group of theatre-makers and artists called Fast Familiar, prompts us to consider whether a machine will ever know how it feels to experience love. A chatbot asks the user to provide information that will help it understand romantic partnership, so it can create the perfect matchmaking algorithm. When I spoke to it this summer, the chatbot asked me to select the squares containing “love” in a Captcha-style grid of images, the kind where you normally have to identify all the bicycles or boats to prove that you’re not a robot. My options included a cluster of juicy red raspberries, a still from the hit romance film The Notebook, and a slab of minced meat shaped into a heart. Suddenly, the absurdity of attempting to teach a computer to recognise a concept as elusive as love was palpably clear.
Another display, Cat Royale (2023) by the Brighton-based art group Blast Theory, presents a video recording of an experiment in which three felines – Ghostbuster, Pumpkin and Clover – spent 12 days interacting with a robotic arm in a specially designed environment. In the recorded footage, we see the arm offering various treats and games to the cats. Throughout these activities, the happiness of the cats is measured by both a designated human observer – who used scoring methods developed by expert researchers – and an AI trained on thousands of video clips of cats. The robot then tweaks its suggested activities accordingly.
After watching the video, viewers are invited to fill out a survey about which domestic care responsibilities they would be willing to outsource to AI. Underlying the experiment is the question: can an AI system be trusted with important duties given that – as Blast Theory’s co-founder Matt Adams says in an introduction to the project – “these systems are not intelligent”? Rather, Adams suggests, “they are essentially data-processing machines that incorporate existing biases and distortions about the world and regurgitate them at enormous force”.
These “existing biases” have troubling implications. The 2020 documentary Coded Bias shows the artist and researcher Joy Buolamwini putting to the test a series of facial recognition algorithms regularly used by law enforcement agencies. She wanted to know why they fail to accurately “see” darker-skinned faces, and discovered that the data on which the systems were trained contained only a limited range of skin tones and face structures. People of colour were therefore at greater risk of being misidentified as criminals – just as they are under traditional policing methods. The poster for Coded Bias shows a haunting image of Buolamwini in a white mask, all the better for the AI to see her with.
Ironically, the promoters of emerging tech are usually keen for artists to take up these tools. As Khajuria explains: “There’s a desire to ‘mainstream’ these technologies, and a way to do that is through the cultural sector.” Sometimes, he adds, companies will sponsor exhibitions on the condition that their shiny new gadgets are used in some way.
But what if AI could be wrested from corporate forces, and more equitable systems developed? This possibility is what motivated Elwes to begin their work with drag and AI. The artist, who uses gender-neutral and male pronouns, noticed that computers struggled to recognise people with trans and gender-nonconforming identities – a consequence of the lack of diversity in the data being fed into AI systems. Elwes tells me they wanted to work out how to “reclaim this oppressive technology to uplift and celebrate our queer community”. So they began to stage photoshoots with drag kings and queens, and then trained AI systems on a new dataset made up of the resulting images and video footage. Crucially, the performers were paid for their time and signed consent forms outlining the conditions under which their likenesses could be used in future.
“As an artist, you’re in a unique position where you can take a step back and look at something critically, and it doesn’t need a purpose or a function. You have the space to think about alternative ways of using the technology,” says Elwes.
This sentiment is echoed by fellow London-based artist Alexandra Daisy Ginsberg, who first presented the AI-assisted Pollinator Pathmaker at the Eden Project in Cornwall in 2021 (further iterations have since appeared in London and Berlin). Collaborating with horticulturalists and scientists, the artist devised an algorithm that creates garden planting schemes to support the maximum number of bees, butterflies and other pollinators per square foot.
“I like to use technologies to understand why we make them and to identify issues that conventional uses might not identify,” Ginsberg tells me. The idea behind Pollinator Pathmaker was to harness a technology – which, like anything that requires computational power and hardware, has a non-trivial carbon footprint – for the benefit of the environment. “It’s also a perverse way to reflect on our preference in modern society for innovation rather than preservation and conservation,” says Ginsberg.
Of course, we can’t yet know whether the AI hype will turn out to be a bubble, like the clamour over so many other technologies before it. But, in the meantime, perhaps artists who are using the technology as both their medium and their subject can help to illuminate what may otherwise seem like a terrifyingly opaque black box.
AI: Who’s Looking After Me? is at the Science Gallery, London, until 20 January.