After a serendipitous introduction to a community of artists, C3.ai DTI cybersecurity Principal Investigator Ben Zhao, computer science professor at the University of Chicago, dedicated his team to developing ways to protect original artwork from rampant AI reproduction. Their three inventions – Fawkes, Glaze, and Nightshade, all designed to evade or counter AI scraping – have established Zhao as a defender of artists’ rights in the era of Generative AI.
His novel work has been covered in the tech press, art press, and in major media outlets from MIT Technology Review, TechCrunch, and Wired, to Scientific American, Smithsonian Magazine, and the New York Times.
At the C3.ai DTI Generative AI Workshop in Illinois last October, Zhao gave a talk relating how this series of events unfolded. Here’s what he had to say. Listen to the entire talk here.
(Excerpted and edited for length and clarity.)
IN 2020, we built this tool called Fawkes, which, at a high level, is an image-altering filter that perturbs the feature space of an image, shifting where facial recognition models place that image in feature space. That tool got a bit of press, and we set up a user mailing list.
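(To make that idea concrete, here is a minimal sketch of feature-space cloaking in the spirit of Fawkes, written in PyTorch. It is not the Fawkes implementation: `extractor` stands in for an assumed pretrained face-embedding model, and `target_image`, `budget`, and the other parameters are illustrative placeholders.)

```python
import torch
import torch.nn.functional as F

def cloak(image, target_image, extractor, budget=0.05, steps=100, lr=0.01):
    """Nudge `image` so its face embedding moves toward `target_image`'s,
    while keeping every pixel change within `budget` (L-infinity)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = extractor(target_image)  # embedding of a decoy identity
    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        # Pull the cloaked image's embedding toward the decoy's embedding.
        loss = F.mse_loss(extractor(cloaked), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change imperceptible
    return (image + delta).detach().clamp(0, 1)
```

A face model trained on cloaked photos then associates the person with a shifted region of feature space, so recognition of their unaltered photos degrades.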
We were starting to look at the potential downsides and harms of Generative AI and deep learning in general. That’s when the news about Clearview AI came out, the company that scraped billions of images from social media and everywhere else online to build facial recognition models for roughly 300 million people globally. They’re still doing this, with numbers significantly higher than that now.
Last summer, we got this interesting email – we still have it – from this artist in the Netherlands, Kim Van Dune. She wrote, “With the rise of AI learning on images, I wonder if Fawkes can be used on paintings and illustrations to warp images and render them less useful for learning algorithms.”
An interesting question, but at the time we had no idea what was going on in Generative AI and this question made no sense. Why do you need to protect art? We wrote back, “I’m sorry, Kim, this is only for facial recognition. We don’t know how to apply this for art, but thanks for reaching out.” Kind of a useless reply. When all the news hit about DALL-E 2, Stable Diffusion, and Midjourney, one day in the lab, Shawn walked over to me and said, “Ben, is this what they were talking about, that email from that artist?” And we’re like, “Okay, maybe that’s it.”
We went back to Kim to ask what was going on. And we got an invite to an online town hall of artists in November. I jumped on that call not knowing what to expect. There were some big artists there, successful professionals in the field – including people who worked for major movie studios – about five to six hundred people, talking about how their lives had been upended in the last two or three months by Generative AI. This was a complete shock to us. Right after this call, I remember thinking, “Okay, we should do something. I think there is a technological solution to do something about this.”
Over the next couple of months, we reached out to Karla Ortiz and a few other artists to enlist their help connecting us to the artist community. We did a user study. First, we said, “Okay, I think we can do what we did with Fawkes, this idea of perturbation in the feature space while maintaining visual similarity to the original.” Of course, that’s really challenging, because in the art space, you would imagine artists – fine artists, creatives, professionals – would care quite a bit about how much you perturb their art and how much they would let you get away with. And we weren’t sure we could do this, because obviously diffusion models are quite different from discriminative classifiers like DNNs [Deep Neural Networks]. Also, style is this weird and fuzzy sort of feature space, and we weren’t sure it obeyed the same rules as something like the feature space of a facial recognition model.
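(As a rough illustration of that idea – not the actual Glaze algorithm, which is detailed in the team’s paper – the sketch below adapts the earlier cloaking loop to style. `style_encoder`, `perceptual`, `decoy_art`, and all the hyperparameters are assumed placeholders: the encoder stands in for a text-to-image model’s image feature extractor, and `perceptual` for any perceptual-similarity metric such as LPIPS.)

```python
import torch
import torch.nn.functional as F

def style_cloak(art, decoy_art, style_encoder, perceptual,
                alpha=0.1, steps=200, lr=0.005):
    """Shift `art`'s style features toward those of `decoy_art`, while a
    perceptual penalty keeps the result looking like the original."""
    delta = torch.zeros_like(art, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        decoy_style = style_encoder(decoy_art)  # features of a decoy style
    for _ in range(steps):
        cloaked = (art + delta).clamp(0, 1)
        style_loss = F.mse_loss(style_encoder(cloaked), decoy_style)
        visual_loss = perceptual(cloaked, art).mean()  # visible-change penalty
        loss = style_loss + alpha * visual_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (art + delta).detach().clamp(0, 1)
```

The design choice mirrors the concern above: instead of a hard pixel budget, a perceptual penalty (weighted here by `alpha`) lets you trade off how much visible change an artist will tolerate against how strongly the style features are shifted.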
We tried this, built an initial prototype, and conducted a massive user study with more than 1,100 professional artists. So many signed up because this is obviously dear to their hearts. By February, we had completed the study, submitted a paper, and picked up some press coverage, including the New York Times. A month later, we packaged the first version of what became known as Glaze into a software release. By July, we had a million downloads. By August, we had presented at the USENIX Security conference. There were awards as well: the Internet Defense Prize and a Distinguished Paper Award.
We had released this desktop app, but it took us a while to realize that artists don’t have a lot of money, and most of them don’t have GPUs at their disposal. Many of them don’t even have desktop computers, and if they do, they’re woefully out of date. So, we built a free web service sitting on our GPU servers to do the computation for them.
One of the things that’s interesting about this whole process is what we learned. The first question that came up was, “Should we deploy something?” For me, this was a no-brainer because the harms were so severe and immediate. I was literally talking to people who were severely depressed and had anxiety attacks because of what was going on. The stakes seemed extremely high, and you had to act because there was something we could do. Turns out many people feel differently.
A number of people in the security community said, “Why would you do this? Don’t. If it’s at all imperfect, if it can be broken in months or years, you’re offering a false sense of security. Can it be future-proof?” But nothing is future-proof, right? Give it 10–20 years, and I don’t even know if Generative AI models will be around. Who knows? They will probably be greatly different from what they are now.
We decided on this weird compromise: we made a free app, but one that runs offline. Many artists were already paranoid about running more AI on their art. We had to walk this fine line between transparency and gaining trust from the artists.
So what happened after that? A lot of good things. The artists’ reaction globally was really insane. For a while there, we got so many emails we couldn’t answer them all. Globally speaking, a lot of artists now use Glaze on a regular basis. A number of online art galleries still post signs that say, “Closed while we Glaze everything,” because Glazing can take a while. More than that, artists have been extremely helpful in developing Glaze: everything from the app layout to the logo color schemes has had a ton of input from artists. Some have even taken money out of their own pockets to advertise Glaze – really quite unexpected.
The minute Glaze was out the door, we started working on Nightshade – a data-poisoning attack in the wild. The paper came out last week.
Epilogue: The free Nightshade program, released on January 19, 2024, was downloaded 250,000 times within the first five days.
Sampling of news stories:
FAWKES
This Tool Could Protect Your Photos From Facial Recognition
New York Times – August 3, 2020
GLAZE
UChicago scientists develop new tool to protect artists from AI mimicry
University of Chicago News – February 15, 2023
NIGHTSHADE
This new data poisoning tool lets artists fight back against generative AI
MIT Technology Review – October 23, 2023