A dozen students from Sweden and California switched places last week for summer research internships focused on digital transformation, kicking off a new exchange between KTH Royal Institute of Technology and University of California, Berkeley.

The addition of UC Berkeley to the annual Digital Futures Summer Research Internship (SRI) Program at KTH is one of the first steps toward closer collaboration between the two universities. Such exchanges are among the aims of an agreement KTH President Anders Söderholm and UC Berkeley Vice Provost for Academic Planning Lisa Alvarez-Cohen signed one year earlier at the UC Berkeley campus.

After a June 5 welcome event at the KTH Library, students from UC Berkeley got busy with their KTH supervisors on projects in energy, intelligent tutoring systems, robotics, transportation and artificial intelligence. They will be working under the supervision of KTH researchers who are affiliated with Digital Futures, a cross-disciplinary research centre based at KTH which is dedicated to shaping an economically, environmental and socially sustainable society through digital transformation.

In the Bay Area, KTH students will also be working on generative AI, marine energy and hybrid vehicles under the supervision of research leaders at UC Berkeley.

See the full KTH story, “KTH and UC Berkeley students switch places for eight-week research internship.”

From left, UC Berkeley students Wentinn Liao, Nicholas Jennings, Verona Teo, Samuel Bobick, Daisy Kerr and Giuseppe Perona – David Callahan photo for KTH

C3.ai DTI’s quarterly newsletter covers news of the Institute’s Principal Investigators and digital transformation research around the consortium. You can sign up to receive the newsletter here.

The spring edition covers this news:

  • Is Robotics Having its ChatGPT moment?
  • Greening Precious Metals Extraction
  • New American Academy of Arts & Sciences members
  • U of I Grainger Launches New Siebel School of Computing & Data Science
  • Princeton Eviction Lab Researcher Speaks to City Arts & Lectures
  • Recent C3.ai DTI P.I. Publications

See the Spring 2024 C3.ai DTI Newsletter pdf here.

Photo by Fred Zwicky, UIUC

C3.ai DTI energy research led by Principal Investigator Qianwen Xu of the KTH Royal Institute of Technology in Stockholm, Sweden, has resulted in the development of AI algorithms to prevent power grid failure when electrification is increasingly supplied by variable sources like solar and wind.

“Wind power and solar radiation are not consistent from hour to hour,” says Xu. “And demand for charging EVs is based on people’s personal needs and habits. So, you have a high level of stochastics and uncertainties. Their integration will lead to voltage fluctuations, deviations and even voltage security violation challenges.” The new open-source deep reinforced learning (DRL) algorithms are designed to solve this challenge.

The open-source software package is published in GitHub.

Read the full TechXplore story: “Researchers design open-source AI algorithms to protect power grid from fluctuations caused by renewables and EVs.”

See the IEEE Transactions on Sustainable Energy research paper: “Data Driven Decentralized Control of Inverter based Renewable Energy Sources using Safe Guaranteed Multi-Agent Deep Reinforcement Learning.”

Photo by David Callahan, KTH Royal Institute of Technology

With a serendipitous introduction to a community of artists, C3.ai DTI cybersecurity Principal Investigator Ben Zhao, computer science professor at the University of Chicago, dedicated his team to producing ways to protect original artwork from rampant AI reproduction. Their three inventions – Fawkes, Glaze, and Nightshade, all designed to evade or counter-program AI scraping – have established Zhao as a defender of artists’ rights in the era of Generative AI.

His novel work has been covered in the tech press, art press, and in major media outlets from MIT Technology Review, TechCrunch, and Wired, to Scientific American, Smithsonian Magazine, and the New York Times.

At the C3.ai DTI Generative AI Workshop in Illinois last October, Zhao gave a talk relating how this series of events unfolded. Here’s what he had to say. Listen to the entire talk here.

(Excerpted and edited for length and clarity.)

UChicago Professor Ben Zhao showing samples of synthetic art at his C3.ai DTI presentation in fall 2023.

IN 2020, we built this tool called Fawkes, which, at a high level, is an image-altering sort-of filter that perturbs the feature space of a particular image, shifting the facial recognition position of that image into a different position inside the feature space. That tool got a bit of press and we set up a user mailing list.

We were starting to look at the potential downsides and harms of Generative AI in general deep learning. That’s when the news about Clearview AI came out, the company that scraped billions of images from online, social media, and everywhere else, to build facial recognition models for roughly 300 million people globally. They’re still doing this, with numbers significantly higher than that now.

Last summer, we got this interesting email – we still have it – from this artist in the Netherlands, Kim Van Dune. She wrote, “With the rise of AI learning on images, I wonder if Fawkes can be used on paintings and illustrations to warp images and render them less useful for learning algorithms.”

An interesting question, but at the time we had no idea what was going on in Generative AI and this question made no sense. Why do you need to protect art? We wrote back, “I’m sorry, Kim, this is only for facial recognition. We don’t know how to apply this for art, but thanks for reaching out.” Kind of a useless reply. When all the news hit about DALL-E 2, Stable Diffusion, and Midjourney, one day in the lab, Shawn walked over to me and said, “Ben, is this what they were talking about, that email from that artist?” And we’re like, “Okay, maybe that’s it.”

We went back to Kim to ask what was going on. And we got an invite to an online townhall of artists, in November. I jumped on that call not knowing what to expect. There are some big artists there and successful professionals in the field – including people who worked for major movie studios – about five to six hundred people, talking about how their lives had been upended in the last two or three months by Generative AI. This was a complete shock to us. Right after this call, I remember thinking, “Okay, we should do something. I think there is a technological solution to do something about this.”

Over the next couple of months, we reached out to Karla Ortiz and a few other artists to enlist their help connecting us to the artist community. We did a user study. First, we said, “Okay, I think we can do what we did with Fawkes, this idea of perturbation in the feature space while maintaining visible similarity to the original.” Of course, that’s really challenging, because in the art space, you would imagine artists – fine artists, creatives, professionals – would care quite a bit about how much you perturb their art, and let you get away with it. And we weren’t sure we could do this because obviously fusion models are quite different from discriminative classifiers like DNNs [Deep Neural Networks]. Also, our style is this weird and fuzzy sort of feature space that we weren’t sure held the same rules as something like feature space for a facial recognition feature effect.

We tried this, built an initial prototype, and conducted a massive user study with more than 1,100 professional artists. So many signed up because this is obviously dear to their hearts. By February, we had completed the study, submitted a paper, and picked up some press coverage, including the New York Times. A month later, we built the first version of what became known as Glaze, into a software release. By July, we had a million downloads. By August, we presented at a user security conference. There were awards as well, the Internet Defense Prize and a paper award.

We had released this desktop app, but it took us a while to realize that artists don’t have a lot of money, and most of them don’t have GPUs at their disposal. Many of them don’t even have desktop computers, and if they do, they’re woefully out of date. So, we built a free web service sitting on our GPU servers to do the computation for them.

One of the things that’s interesting about this whole process is what we learned. The first question that came up was, “Should we deploy something?” For me, this was a no-brainer because the harms were so severe and immediate. I was literally talking to people who were severely depressed and had anxiety attacks because of what was going on. It seemed like the stakes were extremely high and you had to do something because there was something that we could do. Turns out many people feel differently.

A number of people in the security community said, “Why would you do this? Don’t. If it’s at all imperfect, if it can be broken in months, years, you’re offering a false sense of security. Can it be future-proof?” But nothing is future-proof, right? Give it 10-20 years, I don’t even know if Generative AI models will be around. Who knows? They will probably be greatly different from they are now.

We decided on this weird compromise: We made a free app, but offline. Many artists were already paranoid to run more AI on their art. We had to walk this fine line between transparency and gaining trust from the artists.

So what happened after that? A lot of good things. The artist’s reaction globally was really insane. For a while there we got so many emails we couldn’t answer them all. Globally speaking, a lot of artists now use Glaze on a regular basis. A number of art galleries online still post signs that say, “Closed while we Glaze everything,” because Glazing can take a while. More than that, artists have been extremely helpful in helping us develop Glaze, with everything from the app layout to logo color schemes, everything has had a ton of input from artists. Some have even taken money out of their own pocket to advertise for Glaze – really quite unexpected.

The minute Glaze was out the door we started working on Nightshade – a poison attack in the wild. The paper came out last week.

Epilogue: The free Nightshade program, released on January 19, 2024, was downloaded 250,000 times within the first five days.

Sampling of news stories:

FAWKES
This Tool Could Protect Your Photos From Facial Recognition
New York Times – August 3, 2020

GLAZE
UChicago scientists develop new tool to protect artists from AI mimicry
University of Chicago News – February 15, 2023

NIGHTSHADE
This new data poisoning tool lets artists fight back against generative AI
MIT Technology Review – October 23, 2023

C3.ai DTI cybersecurity P.I. Sergey Levine of UC Berkeley co-authored an article in IEEE Spectrum describing how robots from around the world are sharing data on object manipulation to help work towards a general purpose robotic brain.

“In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality,” authors write.

“As more labs engage in cross-embodiment research,” they conclude, “we hope to further push the frontier on what is possible with a single neural network that can control many robots. These advances might include adding diverse simulated data from generated environments, handling robots with different numbers of arms or fingers, using different sensor suites (such as depth cameras and tactile sensing), and even combining manipulation and locomotion behaviors. RT-X has opened the door for such work, but the most exciting technical developments are still ahead.”

Read it here.

Agri-View: According to national U.S. Department of Agriculture statistics, no-till and conservation tillage are increasing, with more than three-quarters of corn and soybean farmers opting for the practices to reduce soil erosion, maintain soil structure and save on fuel. However those estimates are based primarily on farmer self-reporting and are only compiled once every five years, potentially limiting accuracy.

In a new study funded in part by C3.ai DTI, University of Illinois Urbana-Champaign scientists led by Kaiyu Guan demonstrate a way to accurately map tilled land in real time by integrating ground, airborne and satellite imagery.

Read the story here.

Read the study, “Cross-scale sensing of field-level crop residue cover: Integrating field photos, airborne hyperspectral imaging, and satellite data,” in Remote Sensing of Environment here.

Washington Post: ‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.

Software developers and cybersecurity professionals have created tests and benchmarks for traditional software to show it’s safe enough to use. Right now, the safety standards for LLM-based AI programs don’t measure up, said Zico Kolter, who co-wrote the prompt injection paper.

Zico Kolter, an associate professor in the School of Computer Science at Carnegie Mellon University, is a C3.ai DTI Principal Investigator in the field of cybersecurity.

Read the article here. See the paper here.

Illustration by Elena Lacey/The Washington Post

TIME: Twenty-four AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document focused on extreme risks, such as enabling criminal or terrorist activities. Concrete policy recommendations include ensuring that major tech companies devote at least one-third of AI R&D budgets to promoting safe, ethical AI use and call for national and international standards.

This statement differs from previous expert-led open letters, says UC Berkeley’s Stuart Russell, because “Governments have understood that there are real risks. They are asking the AI community, ‘What is to be done?’ The statement is an answer to that question.” Co-authors include historian Yuval Harari and MacArthur “genius” grantee Dawn Song, UC Berkeley professor of computer science — and C3.ai DTI Principal Investigator on cybersecurity.

Read the article here. Read the paper, “Managing AI Risks in an Era of Rapid Progress,” here.

Illustration by Lon Tweeten for TIME magazine

Politico: “Spoofing” looks like it’s here to stay as a feature of the new kind of warfare on display in Israel, Gaza, and Ukraine. Despite the lighthearted name, spoofing is a deadly serious missile-defense technique carrying risks beyond the battlefield. By using spoofing, Israeli forces can make it appear that an airplane, precision-guided missile, or any object that uses GPS is somewhere other than its true location. Israel is already using the technique to its full advantage.

Experts believe advanced weapons that use GPS will become more common in battle, so Israel using it now makes sense, and that its use in battle will increase.

A group of AI leaders is calling for an even bigger emphasis on safety in both the technology’s development and regulation.

Experts including Geoffrey Hinton, Dawn Song, and Andrew Yao published a paper [on October 24] warning that AI development runs the risk of getting ahead of humanity’s ability to control it, and that stricter safety controls are needed.

[UC Berkeley Professor of Computer Science Dawn Song is a C3.ai DTI Principal Investigator on cybersecurity.]

Read more here.

Abed Khaled/AP Photo