For the artists Holly Herndon and Mat Dryhurst, artificial intelligence (AI) is a creative instrument. Through projects such as Holly+, which allows anyone to sing with Herndon’s voice, they have turned individual expression into a collaborative endeavor, rethinking the role of the artist in the age of AI.

For The Call, a collaboration with Serpentine Arts Technologies, Herndon and Dryhurst have expanded their exploration of AI as a communal tool by drawing on the age-old ritual of group singing. They invited 15 choirs from across the UK to perform music from a specially composed songbook. The recordings created a dataset used to train choral AI models; those models now form the basis of an immersive audio installation at London’s Serpentine North.

The installation not only engages visitors with a soundscape but invites call-and-response participation and prompts reflection on the ethics of AI and data ownership. A key element of The Call – which marks the 10th anniversary of Serpentine Arts Technologies – is a data trust experiment. It offers a possible model for distributing power between those who contribute to the training data – in this case the singers – and those who use the AI models.

Hans Ulrich Obrist: The pioneering work of Holly and Mat is a permanent inspiration. More than any other artists, they think about what AI does to the whole ecosystem – the artistic, the technical, the social, and the economic aspects. Maybe we could begin at the beginning. I wanted to ask you how you met, how you came to music, and how music came to you.

Holly Herndon: I guess I came to music first. I grew up in the American South and I was singing in a church choir, playing piano and guitar in the church. In the community that I grew up in, the church was very much your social life. So I grew up with music as a part of communion, and ritual, and coming together.

Mat Dryhurst: We met when I was working at a record label in London in the early 2000s, [where] we did a podcast. We got one lone email from a young woman who was very annoyed that we didn’t have a new podcast.

HH: They were late on their release date, so I was reminding them that they owed me a podcast.

MD: So we met in Berlin, and then we got married a year later. We’ve been married for 17 years now.

HUO: A project you have been working on for more than four years now is Holly+. You call Holly+ a digital twin and a vocal deepfake. Can you both talk a little bit about Holly+ and how the idea was born?

HH: Coming from a computer music background, as soon as I found out about machine learning for music and being able to create a kind of timbre footprint of my voice, I immediately wanted to understand what that would look like. We created basically my vocal digital twin, which is Holly+.

Then, the first time Mat sang through my voice, all of a sudden I had his kind of strange English accent. And it was this ‘aha’ moment of, wow, people can sing in real time, as me, and it sounds very similar to me. What does this open up for the world of performance? What if we were more flexible with our identities and allowed people to perform through us, and had these mutations between identities? What kind of new avenues could that open?

MD: We trained a machine to be able to sing as Holly, which isn’t novel. But the novel idea was, what if we gave that model to everybody to be able to use and wrapped that in a protocol that would share profits from any media created with her voice 50-50 back to Holly? And I think that idea itself, which we’ve been working on for many, many years now, has been borne out. A number of artists in the past year have adopted a very similar approach.

HUO: You explained Holly+ as a vocal deepfake. And of course that made me think about ownership, about originality. And you said in an interview, Holly, that your own voice belongs to everyone. Yet at the same time, artists also need to be protected. Can you explain how you imagine this balance is going to work?

HH: I feel like one reason the voice is such a great medium to explore some of these ideas is because it is already inherently communal. You learn how to speak through mimicking the people around you. You learn language, you learn dialects. That’s a kind of communal organ that you then perform with agency as an individual. So there’s a blurring of the individual and the communal with the voice.

MD: It’s interesting because you can’t copyright a voice. Personality rights, publicity rights do kick in. If I were selling a CD and labelled it as being by Taylor Swift, I would get into a lot of trouble. But the voice itself isn’t copyrightable.

We’re not free-information ideologues, but we need IP [intellectual property] standards that understand and digest the role of the transformer. We’ve been using this term that our friend Jay Springett came up with, ‘permissive IP,’ which I think summarizes the approach that we took with Holly+.

HUO: When we had our first meeting about The Call, you said the exhibition should explore the dark corridors of what it means to be an artist in the AI age.

HH: We’ve taken on the challenge of trying to exhibit a machine learning model and all of its facets. We focus so much on the models and their outputs, but we often forget to talk about all the training data that goes into the models. So that’s something we’re focusing on for the show – presenting the training data as art-making in itself, as human-generated art.

MD: We’ve been saying for many years that all media, all gestures, need to be understood through the lens of training data. All that training data is somehow incidental – it’s a new dimension that is placed on top of preexisting artworks, preexisting gestures. What if we were to train machines deliberately and see the media we produce as being a deliberate part of creating models?

HH: We like to think about the training data as children that we’re sending into the future because it will be training models for decades and decades to come.

HUO: Holly, you said in an interview recently, ‘The model is the artwork. It’s not the sculpture or the painting. It’s the model that can generate infinite artworks, in any kind of medium.’ So how do you exhibit that? That’s probably the central question of the exhibition. Does that require new governance structures between the institution and the artist exhibiting that work?

HH: One of the great things about working with Future Art Ecosystems at the Serpentine is that they’re passionate about some of these super-nerdy questions, like data governance. How can we get a group of people together who have similar data – that together could be really valuable – to create the specific kind of model that we’re looking for?

MD: As with our practice so far, we write a lot of our own code and build our own infrastructures to facilitate the artworks we make. That is also taking place with Future Art Ecosystems. We’re looking not only to present a new kind of exhibition but also to provide infrastructure – an open protocol – for others to use going forward.

HUO: Could you tell us more about the IP and how the choirs will co-own the work?

HH: AI models require large amounts of data to work well. They are collective accomplishments that require experiments in collective ownership and compensation. The data trust experiment establishes a framework for all the participating choirs to govern the terms of the dataset’s use. If the experiment works, such an approach could scale to other kinds of data.

We feel an interesting path forward for AI data is to train base models on public domain data – something we are doing at [our digital space] Spawning.ai – and then fine-tune those models on bespoke datasets owned by individuals or groups who can earn from their contributions.

HUO: In the exhibition, materials for training AI are presented as new artifacts for gathering and ritual, co-designed by the architecture studio sub. Tell us about the collaboration with sub and the display in the Serpentine show.

MD: We are all interested in how machine learning might interact with space and materials, so the collaboration with sub on a technical, conceptual, and personal level has really elevated everything. Such a positive-sum collaboration complements the spirit of the show.

HUO: The objects they are making for the exhibition – are they objects, non-objects, hyper-objects or quasi-objects?

HH: They are closest to quasi-objects, as they are active in a larger networked protocol, coordinating data contributions and serving as an archive of the models.

HUO: I do have one very last question – the only recurring question in all my artist conversations – which is to ask you to tell us about your favorite unrealized projects.

HH: Well, one of our previously unrealized projects is actually being realized right now at the Serpentine!

Credits and Captions

Hans Ulrich Obrist is Serpentine’s Artistic Director.

Serpentine Arts Technologies is Kay Watson, Head of Arts Technologies; Eva Jäger, Curator and Creative AI Lead (Curator of The Call); Tamar Clarke-Brown, Curator, Arts Technologies; Vi Trinh, Assistant Curator; Victoria Ivanova, R&D Strategic Lead; Ruth Waters, Producer, Arts Technologies; Tommie Introna, R&D Platform Producer.

This interview is based on ‘Infinite AI’, a conversation between Holly Herndon, Mat Dryhurst, and Hans Ulrich Obrist which took place at DLD Munich 2024.

Images: Holly Herndon and Mat Dryhurst conducting a recording session with London Contemporary Voices in London, 2024. Courtesy: Foreign Body Productions.

Published on October 2, 2024.