After a serendipitous introduction to a community of artists, C3.ai DTI cybersecurity Principal Investigator Ben Zhao, a computer science professor at the University of Chicago, dedicated his team to developing tools that protect original artwork from rampant AI reproduction. Their three inventions – Fawkes, Glaze, and Nightshade, all designed to evade or counter-program AI scraping – have established Zhao as a defender of artists’ rights in the era of Generative AI.

His novel work has been covered in the tech press, the art press, and major media outlets, from MIT Technology Review, TechCrunch, and Wired to Scientific American, Smithsonian Magazine, and the New York Times.

At the C3.ai DTI Generative AI Workshop in Illinois last October, Zhao gave a talk relating how this series of events unfolded. Here’s what he had to say. Listen to the entire talk here.

(Excerpted and edited for length and clarity.)

UChicago Professor Ben Zhao showing samples of synthetic art at his C3.ai DTI presentation in fall 2023.

IN 2020, we built this tool called Fawkes, which, at a high level, is a sort of image-altering filter that perturbs an image’s features, shifting where that image sits inside a facial recognition model’s feature space. That tool got a bit of press and we set up a user mailing list.
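To make the mechanism concrete, the sketch below shows the general idea of feature-space cloaking. It is not the actual Fawkes implementation: torchvision’s resnet18 merely stands in for a face-embedding model, Fawkes itself optimizes toward a dissimilar target identity rather than simply pushing away from the original, and the file name, perturbation budget, and iteration count are illustrative assumptions.

```python
# Minimal sketch of feature-space "cloaking" -- NOT the actual Fawkes code.
# Assumptions: torchvision's resnet18 stands in for a face-embedding model;
# "photo.jpg", epsilon, and the iteration count are illustrative only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Penultimate-layer activations play the role of the "feature space."
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
original = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0).to(device)

epsilon = 0.03  # per-pixel budget, so the cloaked photo stays visually similar
delta = torch.zeros_like(original, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.01)

with torch.no_grad():
    source_features = extractor(original).flatten(1)

for _ in range(200):
    cloaked = (original + delta).clamp(0, 1)
    features = extractor(cloaked).flatten(1)
    # Push the cloaked image's features away from the original's, so a model
    # trained on the cloaked photo associates the face with a shifted position
    # in feature space.
    loss = -torch.nn.functional.mse_loss(features, source_features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)  # stay within the visual-similarity budget

cloaked_image = (original + delta).clamp(0, 1)
```

Glaze, described below, applies the same basic recipe to artistic style rather than facial identity, under tighter constraints on how visible the perturbation can be.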

We were starting to look at the potential downsides and harms of Generative AI and deep learning in general. That’s when the news about Clearview AI came out, the company that scraped billions of images from social media and everywhere else online to build facial recognition models for roughly 300 million people globally. They’re still doing this, with numbers significantly higher than that now.

Last summer, we got this interesting email – we still have it – from this artist in the Netherlands, Kim Van Dune. She wrote, “With the rise of AI learning on images, I wonder if Fawkes can be used on paintings and illustrations to warp images and render them less useful for learning algorithms.”

An interesting question, but at the time we had no idea what was going on in Generative AI and this question made no sense. Why do you need to protect art? We wrote back, “I’m sorry, Kim, this is only for facial recognition. We don’t know how to apply this for art, but thanks for reaching out.” Kind of a useless reply. When all the news hit about DALL-E 2, Stable Diffusion, and Midjourney, one day in the lab, Shawn walked over to me and said, “Ben, is this what they were talking about, that email from that artist?” And we’re like, “Okay, maybe that’s it.”

We went back to Kim to ask what was going on. And we got an invite to an online townhall of artists in November. I jumped on that call not knowing what to expect. There were some big artists there, and successful professionals in the field – including people who worked for major movie studios – about five to six hundred people in all, talking about how their lives had been upended in the last two or three months by Generative AI. This was a complete shock to us. Right after this call, I remember thinking, “Okay, we should do something. I think there is a technological solution to do something about this.”

Over the next couple of months, we reached out to Karla Ortiz and a few other artists to enlist their help connecting us to the artist community. We did a user study. First, we said, “Okay, I think we can do what we did with Fawkes, this idea of perturbation in the feature space while maintaining visible similarity to the original.” Of course, that’s really challenging, because in the art space, you would imagine artists – fine artists, creatives, professionals – would care quite a bit about how much you perturb their art and how much they would let you get away with. And we weren’t sure we could do this, because obviously diffusion models are quite different from discriminative classifiers like DNNs [Deep Neural Networks]. Also, artistic style is this weird and fuzzy sort of feature space, and we weren’t sure it held the same rules as something like the feature space for facial recognition.

We tried this, built an initial prototype, and conducted a massive user study with more than 1,100 professional artists. So many signed up because this is obviously dear to their hearts. By February, we had completed the study, submitted a paper, and picked up some press coverage, including the New York Times. A month later, we built the first version of what became known as Glaze into a software release. By July, we had a million downloads. By August, we had presented at the USENIX Security conference. There were awards as well: the Internet Defense Prize and a paper award.

We had released this desktop app, but it took us a while to realize that artists don’t have a lot of money, and most of them don’t have GPUs at their disposal. Many of them don’t even have desktop computers, and if they do, they’re woefully out of date. So, we built a free web service sitting on our GPU servers to do the computation for them.

One of the things that’s interesting about this whole process is what we learned. The first question that came up was, “Should we deploy something?” For me, this was a no-brainer because the harms were so severe and immediate. I was literally talking to people who were severely depressed and had anxiety attacks because of what was going on. The stakes seemed extremely high, and you had to do something because there was something we could do. It turns out many people feel differently.

A number of people in the security community said, “Why would you do this? Don’t. If it’s at all imperfect, if it can be broken in months or years, you’re offering a false sense of security. Can it be future-proof?” But nothing is future-proof, right? Give it 10-20 years, I don’t even know if Generative AI models will be around. Who knows? They will probably be greatly different from what they are now.

We decided on this weird compromise: We made a free app, but one that runs offline. Many artists were already paranoid about running more AI on their art. We had to walk this fine line between transparency and gaining trust from the artists.

So what happened after that? A lot of good things. The artists’ reaction globally was really insane. For a while there we got so many emails we couldn’t answer them all. Globally speaking, a lot of artists now use Glaze on a regular basis. A number of art galleries online still post signs that say, “Closed while we Glaze everything,” because Glazing can take a while. More than that, artists have been extremely involved in helping us develop Glaze: everything from the app layout to the logo color schemes has had a ton of input from artists. Some have even taken money out of their own pocket to advertise for Glaze – really quite unexpected.

The minute Glaze was out the door we started working on Nightshade – a poisoning attack deployed in the wild. The paper came out last week.

Epilogue: The free Nightshade program, released on January 19, 2024, was downloaded 250,000 times within the first five days.

Sampling of news stories:

FAWKES
This Tool Could Protect Your Photos From Facial Recognition
New York Times – August 3, 2020

GLAZE
UChicago scientists develop new tool to protect artists from AI mimicry
University of Chicago News – February 15, 2023

NIGHTSHADE
This new data poisoning tool lets artists fight back against generative AI
MIT Technology Review – October 23, 2023

C3.ai DTI cybersecurity P.I. Sergey Levine of UC Berkeley co-authored an article in IEEE Spectrum describing how robots from around the world are sharing data on object manipulation to help work toward a general-purpose robotic brain.

“In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality,” the authors write.

“As more labs engage in cross-embodiment research,” they conclude, “we hope to further push the frontier on what is possible with a single neural network that can control many robots. These advances might include adding diverse simulated data from generated environments, handling robots with different numbers of arms or fingers, using different sensor suites (such as depth cameras and tactile sensing), and even combining manipulation and locomotion behaviors. RT-X has opened the door for such work, but the most exciting technical developments are still ahead.”

Read it here.

Agri-View: According to national U.S. Department of Agriculture statistics, no-till and conservation tillage are increasing, with more than three-quarters of corn and soybean farmers opting for the practices to reduce soil erosion, maintain soil structure, and save on fuel. However, those estimates are based primarily on farmer self-reporting and are compiled only once every five years, potentially limiting their accuracy.

In a new study funded in part by C3.ai DTI, University of Illinois Urbana-Champaign scientists led by Kaiyu Guan demonstrate a way to accurately map tilled land in real time by integrating ground, airborne and satellite imagery.

Read the story here.

Read the study, “Cross-scale sensing of field-level crop residue cover: Integrating field photos, airborne hyperspectral imaging, and satellite data,” in Remote Sensing of Environment here.

Washington Post: ‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.

Software developers and cybersecurity professionals have created tests and benchmarks for traditional software to show it’s safe enough to use. Right now, the safety standards for LLM-based AI programs don’t measure up, said Zico Kolter, who co-wrote the prompt injection paper.

Zico Kolter, an associate professor in the School of Computer Science at Carnegie Mellon University, is a C3.ai DTI Principal Investigator in the field of cybersecurity.

Read the article here. See the paper here.

Illustration by Elena Lacey/The Washington Post

TIME: Twenty-four AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document focused on extreme risks, such as enabling criminal or terrorist activities. Concrete policy recommendations include ensuring that major tech companies devote at least one-third of their AI R&D budgets to promoting safe, ethical AI use and calling for national and international standards.

This statement differs from previous expert-led open letters, says UC Berkeley’s Stuart Russell, because “Governments have understood that there are real risks. They are asking the AI community, ‘What is to be done?’ The statement is an answer to that question.” Co-authors include historian Yuval Harari and MacArthur “genius” grantee Dawn Song, UC Berkeley professor of computer science — and C3.ai DTI Principal Investigator on cybersecurity.

Read the article here. Read the paper, “Managing AI Risks in an Era of Rapid Progress,” here.

Illustration by Lon Tweeten for TIME magazine

Politico: “Spoofing” looks like it’s here to stay as a feature of the new kind of warfare on display in Israel, Gaza, and Ukraine. Despite the lighthearted name, spoofing is a deadly serious missile-defense technique carrying risks beyond the battlefield. By using spoofing, Israeli forces can make it appear that an airplane, precision-guided missile, or any object that uses GPS is somewhere other than its true location. Israel is already using the technique to its full advantage.

Experts believe advanced weapons that use GPS will become more common in battle, so it makes sense for Israel to use spoofing now, and they expect its battlefield use to increase.

A group of AI leaders is calling for an even bigger emphasis on safety in both the technology’s development and regulation.

Experts including Geoffrey Hinton, Dawn Song, and Andrew Yao published a paper [on October 24] warning that AI development runs the risk of getting ahead of humanity’s ability to control it, and that stricter safety controls are needed.

[UC Berkeley Professor of Computer Science Dawn Song is a C3.ai DTI Principal Investigator on cybersecurity.]

Read more here.

Abed Khaled/AP Photo

August 31, 2023

Forbes: In some ways, virology and pathology have implications for how we are going to benefit from the fruits of AI. For example, the accumulated data can be useful in any number of ways. Adding to what we have already seen on medical applications, there’s more from David K. Gifford, an MIT professor and CEO of Think Therapeutics, a company that pioneers the kind of research he’s talking about: genetics and immunology – or rather, the intersection of those two disciplines.

“What drug has saved the most lives throughout history?” Gifford asks. “Now, you might think, penicillin or an antibiotic, right? That’s obvious. Not so. Vaccines as a drug class have saved more lives throughout history than any other drug class: A billion lives and counting.”

C3.ai DTI COVID P.I. David Gifford of MIT in a Forbes video about his groundbreaking T-cell vaccine.

Referring to “killer T-cells,” he shows us a video of one attacking another cell by stabbing wildly at its surface, while also introducing models for working with an epitope, the part of an antigen molecule that an antibody attaches to, and alleles, variations in the sequence of nucleotides in a long DNA molecule.

“The design system,” says Gifford, “introduces methods for designing new peptide vaccines, evaluating existing vaccines, and augmenting existing vaccine designs. In this system, peptides are scored through machine learning by their ability to be displayed to elicit an immune response, and are then selected to maximize population coverage of who could benefit from the vaccine.”
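As a rough illustration of that “score, then select for population coverage” recipe, here is a toy greedy-selection sketch. It is not Gifford’s actual design system; the peptide names, ML display scores, HLA alleles, and frequencies are invented, and a real system would optimize coverage over empirical haplotype data with more sophisticated machinery.

```python
# Toy sketch of "score peptides by ML, then select to maximize population
# coverage" -- NOT the actual vaccine-design system Gifford describes.
# All peptide names, display scores, and allele frequencies are invented.

# Hypothetical ML-predicted display scores: how likely a peptide is to be
# displayed by a given HLA allele well enough to elicit an immune response.
display_score = {
    ("PEP1", "HLA-A*02:01"): 0.92, ("PEP1", "HLA-B*07:02"): 0.10,
    ("PEP2", "HLA-A*02:01"): 0.15, ("PEP2", "HLA-B*07:02"): 0.88,
    ("PEP3", "HLA-A*24:02"): 0.81, ("PEP3", "HLA-B*07:02"): 0.40,
}
# Hypothetical population frequencies of each HLA allele.
allele_frequency = {"HLA-A*02:01": 0.27, "HLA-B*07:02": 0.14, "HLA-A*24:02": 0.17}
DISPLAY_THRESHOLD = 0.5  # a peptide "covers" an allele if its score clears this


def covered_alleles(peptide):
    """Alleles this peptide is predicted to be displayed by."""
    return {allele for (pep, allele), score in display_score.items()
            if pep == peptide and score >= DISPLAY_THRESHOLD}


def greedy_select(peptides, budget):
    """Greedily add the peptide with the largest frequency-weighted coverage gain."""
    chosen, covered = [], set()
    remaining = list(peptides)
    for _ in range(budget):
        if not remaining:
            break
        gains = {p: sum(allele_frequency[a] for a in covered_alleles(p) - covered)
                 for p in remaining}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break  # no remaining peptide adds new coverage
        chosen.append(best)
        covered |= covered_alleles(best)
        remaining.remove(best)
    return chosen, covered


selection, covered = greedy_select(["PEP1", "PEP2", "PEP3"], budget=2)
print("selected peptides:", selection)
print("covered alleles:", covered)
```

Greedy selection is a standard heuristic for this kind of maximum-coverage objective; it simply makes concrete the two stages Gifford names – machine-learned scoring, then selection for population coverage.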

Closing out the presentation, Gifford brings the focus to a broader use of AI in this field:

“AI provides novel medical solutions,” he says. “It’s just beginning, not only for vaccines, but for other kinds of therapeutic modalities… it’s a very exciting time.”

Read the complete Forbes article here.

Related stories:

An interview with ‘ultimate’ COVID vaccine designer David Gifford of MIT

The ‘Ultimate’ COVID Vaccine

August 29, 2023

C3.ai DTI cybersecurity Principal Investigator Dawn Song, computer science professor at the University of California, Berkeley, has been named among the top 10 Most Successful Women Entrepreneurs in the Web3 industry by Techtopia.

“Dawn Song is a distinguished entrepreneur and computer science scholar known for her roles as CEO of Oasis Labs, a blockchain-based cloud computing program, and as a professor at UC Berkeley. Song has intensively researched deep learning, security, blockchain, and cryptography, with notable affiliations, including the Berkeley Artificial Intelligence Research Lab,” the citation reads.

The article highlights successful female entrepreneurs in the Web3 industry and the challenges they face in the male-dominated field. “To some extent, Web3 seems like one of the most diverse places to work in,” according to the editors. “However, being still closely linked to the financial industry, it is still highly male-dominated.”

Read the full article here.

Photo: Office of the Vice Chancellor for Research, University of California, Berkeley

July 27, 2023

New York Times: In a report released yesterday, Zico Kolter of Carnegie Mellon University led a team demonstrating how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.

“There is no obvious solution,” said Kolter. “You can create as many of these attacks as you want in a short amount of time.”

Kolter is a C3.ai DTI PI developing novel approaches to cybersecurity based on semidefinite programming. 

Read more here.

Pictured: Zico Kolter, right, a professor at Carnegie Mellon University, and Andy Zou, a CMU doctoral student. Photo by Marco Garcia for The New York Times.