MIT Technology Review: Researchers are using Gen AI and other techniques to teach robots new skills — including tasks they could perform in homes.

While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes.

That seems to finally be changing, in large part thanks to artificial intelligence…

Last year, Google DeepMind kick-started a new initiative, the Open X-Embodiment Collaboration. The company partnered with 34 research labs and around 150 researchers to collect data from 22 different robots. The resulting data set, published in October 2023, shows the robots demonstrating 527 skills, such as picking, pushing, and moving.

Sergey Levine, a computer scientist at UC Berkeley who participated in the project, says the goal was to create a “robot internet” by collecting data from labs around the world. “This would give researchers access to bigger, more scalable, and more diverse data sets.” The deep-learning revolution that led to the generative AI of today started in 2012 with the rise of ImageNet, a vast online data set of images. The Open X-Embodiment Collaboration is an attempt by the robotics community to do something similar for robot data… Early signs show that more data is leading to smarter robots.

Sergey Levine is a cybersecurity researcher for the Digital Transformation Institute.

Read the full story, “Is robotics about to have its own ChatGPT moment?”

Photo: Peter Adams for MIT Technology Review

A new method safely extracts valuable metals locked up in discarded electronics and low-grade ore using dramatically less energy and fewer chemicals than current methods. The work, published in the journal Nature Chemical Engineering, comes from University of Illinois Urbana-Champaign researchers led by Chemical and Biomolecular Engineering Professor Xiao Su, a DTI Principal Investigator.

Gold and platinum group metals such as palladium, platinum and iridium are in high demand for use in electronics. However, sourcing these metals from mining and current electronics recycling techniques is not sustainable and comes with a high carbon footprint. Gold used in electronics accounts for 8% of the metal’s overall demand, and 90% of the gold used in electronics ends up in U.S. landfills yearly, the study reports.

The study describes the first precious metal extraction and separation process fully powered by the inherent energy of electrochemical liquid-liquid extraction, or e-LLE. The method uses a reduction-oxidation reaction to selectively extract gold and platinum group metal ions from a liquid containing dissolved electronic waste.

Su said one of the many advantages of this new method is that it can run continuously in a green fashion and is highly selective in terms of how it extracts precious metals. “We can pull gold and platinum group metals out of the stream, but we can also separate them from other metals like silver, nickel, copper and other less valuable metals to increase purity greatly – something other methods struggle with.”

The team said that they are working to perfect this method by improving the engineering design and the solvent selection.

Read the full UIUC News article, “Electrochemistry helps clean up electronic waste recycling, precious metal mining.”

Read the study in Nature Chemical Engineering, “Redox-mediated electrochemical liquid-liquid extraction (e-LLE) for selective metal recovery.”

Photo by Fred Zwicky, UIUC

DTI energy research led by Principal Investigator Qianwen Xu of the KTH Royal Institute of Technology in Stockholm, Sweden, has resulted in AI algorithms that prevent power grid failure as electricity is increasingly supplied by variable sources like solar and wind.

“Wind power and solar radiation are not consistent from hour to hour,” says Xu. “And demand for charging EVs is based on people’s personal needs and habits. So, you have a high level of stochastics and uncertainties. Their integration will lead to voltage fluctuations, deviations and even voltage security violation challenges.” The new open-source deep reinforcement learning (DRL) algorithms are designed to solve this challenge.
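To make the control problem concrete, here is a minimal toy simulation, not the paper's algorithm: the voltage-sensitivity model, coefficients, and feedback rule are all invented for illustration. A persistent, stochastic solar swing perturbs feeder voltage, and a simple feedback rule, standing in for the learned DRL policy, adjusts a reactive-power setpoint to hold voltage near 1.0 per unit.

```python
import random

def simulate(steps=500, controlled=True, seed=1):
    """Toy feeder: return the worst voltage deviation (p.u.) over a run."""
    rng = random.Random(seed)
    solar, q = 0.0, 0.0  # solar swing; reactive-power setpoint
    worst = 0.0
    for _ in range(steps):
        # Autocorrelated fluctuation: renewables are "not consistent
        # from hour to hour", so deviations persist rather than average out.
        solar = 0.95 * solar + rng.uniform(-0.01, 0.01)
        v = 1.0 + solar + 0.8 * q      # crude voltage-sensitivity model
        if controlled:
            q -= 1.0 * (v - 1.0)       # feedback: counteract the deviation
        worst = max(worst, abs(v - 1.0))
    return worst

# The feedback rule shrinks the worst-case deviation versus doing nothing.
print(simulate(controlled=False), simulate(controlled=True))
```

A DRL controller replaces the fixed feedback gain with a learned, possibly decentralized policy, and the safety-guarantee machinery the researchers describe bounds how far such a policy can let the voltage stray.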

The open-source software package is published on GitHub.

Read the full TechXplore story: “Researchers design open-source AI algorithms to protect power grid from fluctuations caused by renewables and EVs.”

See the IEEE Transactions on Sustainable Energy research paper: “Data-Driven Decentralized Control of Inverter-Based Renewable Energy Sources Using Safe Guaranteed Multi-Agent Deep Reinforcement Learning.”

Photo by David Callahan, KTH Royal Institute of Technology

UC Berkeley CDSS News: On February 8, Ziad Obermeyer, Blue Cross Distinguished Associate Professor of Health Policy and Management at Berkeley Public Health, warned the U.S. Senate Finance Committee about some of AI’s potential hazards within the healthcare field, and offered ways to ensure that AI systems are safe, unbiased and useful.

The hearing, “Artificial Intelligence and Health Care: Promise and Pitfalls,” explored the growing use of AI in medicine, and by federal health care agencies.

“Throughout my ten years of practicing medicine, I have agonized over missed diagnoses, futile treatments, unnecessary tests and more,” Obermeyer said. “The collective weight of these errors, in my view, is a major driver of the dual crisis in our healthcare system: suboptimal outcomes at very high cost. AI holds tremendous promise as a solution to both problems.”

DTI Co-P.I. Ziad Obermeyer worked on the COVID-19 research project, “Using Data Science to Understand the Heterogeneity of SARS-COV-2 Transmission and COVID-19 Clinical Presentation in Mexico,” led by P.I. Stefano Bertozzi, Dean Emeritus and Professor of Health Policy and Management at the University of California, Berkeley.

Read the full story, “Ziad Obermeyer testifies in U.S. Congress on how AI can help health care.”

Nature: There’s a revolution brewing in batteries for electric cars, which will rely on alternative designs to the conventional lithium-ion batteries that have dominated EVs for decades. Although lithium-ion is hard to beat, researchers think that a range of options will soon fill different niches of the market: some very cheap, others providing much more power.

“We’re going to see the market diversify,” says Gerbrand Ceder, a materials scientist at the University of California, Berkeley and the Lawrence Berkeley National Laboratory.

Lithium-ion batteries have improved a lot since the first commercial product in 1991: cell energy densities have nearly tripled, while prices have dropped by an order of magnitude. “Lithium-ion is a formidable competitor… The biggest challenges are resource-related,” says Ceder, who calculates that the projected 14 TWh of batteries needed for cars by 2050 will require 14 million tonnes of total metal.

DTI P.I. Gerbrand Ceder developed the novel NLP-driven COVID Scholar literature review site in 2020 with a DTI grant.
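Ceder's figures can be sanity-checked with back-of-the-envelope arithmetic. The roughly 1 kg of metal per kWh of cell capacity used below is an assumption inferred from the two totals, not a number from the article:

```python
# 14 TWh of cell capacity at ~1 kg of metal per kWh (inferred factor).
capacity_kwh = 14e12 / 1e3                # 14 TWh expressed in kWh
metal_tonnes = capacity_kwh * 1.0 / 1e3   # kg of metal, converted to tonnes
print(f"{metal_tonnes:,.0f} tonnes")      # 14,000,000 tonnes
```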

Read the Nature story: “The new car batteries that could power the electric vehicle revolution.”

Photo by Chris Ratcliffe/Bloomberg/Getty

UChicago News: The Medical Imaging and Data Resource Center (MIDRC), hosted at the University of Chicago, has been selected to participate in a new pilot program from the NSF to democratize AI research.

The National AI Research Resource (NAIRR) pilot will gather 10 federal agencies and 25 private sector, nonprofit, and philanthropic organizations to build a shared research infrastructure that will strengthen access to critical resources necessary to power responsible AI discovery and innovation. The project will provide access to advanced computing, datasets, models, software, training, and user support to U.S.-based researchers and educators and, as it continues to grow, will inform the design of a national AI ecosystem.

“The overall goal of MIDRC is to support the medical imaging AI ecosystem. We’ve built the infrastructure to house, curate, and organize medical images, we’ve collected a huge amount of real-world imaging data, and we’ve put forth a concerted effort to educate users about the development of algorithms and potential sources of bias,” said Maryellen L. Giger, PhD, the A.N. Pritzker Distinguished Service Professor of Radiology at UChicago and MIDRC principal investigator. “The collaborations and infrastructure that have been established provide a solid foundation for the creation of more medical imaging datasets and the development of AI algorithms for all sorts of use cases through this new NAIRR pilot program.”

The launch of MIDRC was spurred by the 2020 DTI COVID-19 research of Principal Investigator Maryellen Giger of UChicago, for the project, “Medical Imaging Domain-Expertise Machine Learning for Interrogation of COVID.”

Read the full story: “MIDRC selected for program to build national AI research infrastructure.”

With a serendipitous introduction to a community of artists, DTI cybersecurity Principal Investigator Ben Zhao, computer science professor at the University of Chicago, dedicated his team to producing ways to protect original artwork from rampant AI reproduction. Their three inventions – Fawkes, Glaze, and Nightshade, all designed to evade or counter-program AI scraping – have established Zhao as a defender of artists’ rights in the era of Generative AI.

His novel work has been covered in the tech press, art press, and in major media outlets from MIT Technology Review, TechCrunch, and Wired, to Scientific American, Smithsonian Magazine, and the New York Times.

At the DTI Generative AI Workshop in Illinois last October, Zhao gave a talk relating how this series of events unfolded. Here’s what he had to say. Listen to the entire talk here.

(Excerpted and edited for length and clarity.)

UChicago Professor Ben Zhao showing samples of synthetic art at his DTI presentation in fall 2023.

In 2020, we built this tool called Fawkes, which, at a high level, is an image-altering filter that perturbs the feature space of a particular image, shifting the facial-recognition position of that image to a different position inside the feature space. That tool got a bit of press and we set up a user mailing list.
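The cloaking idea (perturbing an image so its position in a recognizer's feature space shifts while the pixels barely change) can be sketched in miniature. This is a toy, not the actual Fawkes method: the linear “feature extractor” W, the decoy embedding, and every parameter below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_pix, d_feat = 64, 8
W = rng.normal(size=(d_feat, d_pix))    # stand-in linear feature extractor
x = rng.uniform(0.0, 1.0, size=d_pix)   # the "image", as a flat pixel vector
decoy = W @ rng.uniform(0.0, 1.0, size=d_pix)  # another identity's embedding

eps = 0.05              # L-infinity budget: max allowed change per "pixel"
x_cloak = x.copy()
for _ in range(100):
    # Gradient of 0.5 * ||W x - decoy||^2: push the embedding toward the decoy.
    grad = W.T @ (W @ x_cloak - decoy)
    x_cloak -= 0.001 * grad
    # Project back into the budget so the image stays close to the original.
    x_cloak = np.clip(x_cloak, x - eps, x + eps)

before = np.linalg.norm(W @ x - decoy)
after = np.linalg.norm(W @ x_cloak - decoy)
print(f"distance to decoy embedding: {before:.2f} -> {after:.2f}")
```

Fawkes itself works against deep facial-recognition embeddings rather than a linear map, but the mechanics follow the same projected-gradient pattern: move the embedding, clip the pixels.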

We were starting to look at the potential downsides and harms of Generative AI in general deep learning. That’s when the news about Clearview AI came out, the company that scraped billions of images from online, social media, and everywhere else, to build facial recognition models for roughly 300 million people globally. They’re still doing this, with numbers significantly higher than that now.

Last summer, we got this interesting email – we still have it – from this artist in the Netherlands, Kim Van Dune. She wrote, “With the rise of AI learning on images, I wonder if Fawkes can be used on paintings and illustrations to warp images and render them less useful for learning algorithms.”

An interesting question, but at the time we had no idea what was going on in Generative AI and this question made no sense. Why do you need to protect art? We wrote back, “I’m sorry, Kim, this is only for facial recognition. We don’t know how to apply this for art, but thanks for reaching out.” Kind of a useless reply. When all the news hit about DALL-E 2, Stable Diffusion, and Midjourney, one day in the lab, Shawn walked over to me and said, “Ben, is this what they were talking about, that email from that artist?” And we’re like, “Okay, maybe that’s it.”

We went back to Kim to ask what was going on. And we got an invite to an online townhall of artists, in November. I jumped on that call not knowing what to expect. There were some big artists there and successful professionals in the field – including people who worked for major movie studios – about five or six hundred people, talking about how their lives had been upended in the last two or three months by Generative AI. This was a complete shock to us. Right after this call, I remember thinking, “Okay, we should do something. I think there is a technological solution to do something about this.”

Over the next couple of months, we reached out to Karla Ortiz and a few other artists to enlist their help connecting us to the artist community. We did a user study. First, we said, “Okay, I think we can do what we did with Fawkes, this idea of perturbation in the feature space while maintaining visible similarity to the original.” Of course, that’s really challenging, because in the art space, you would imagine artists – fine artists, creatives, professionals – would care quite a bit about how much you perturb their art and how much they would let you get away with. And we weren’t sure we could do this because obviously diffusion models are quite different from discriminative classifiers like DNNs [Deep Neural Networks]. Also, artistic style is a weird and fuzzy sort of feature space, and we weren’t sure it obeyed the same rules as the feature space for facial recognition.

We tried this, built an initial prototype, and conducted a massive user study with more than 1,100 professional artists. So many signed up because this is obviously dear to their hearts. By February, we had completed the study, submitted a paper, and picked up some press coverage, including the New York Times. A month later, we released the first version of what became known as Glaze. By July, we had a million downloads. By August, we presented at a user security conference. There were awards as well, the Internet Defense Prize and a paper award.

We had released this desktop app, but it took us a while to realize that artists don’t have a lot of money, and most of them don’t have GPUs at their disposal. Many of them don’t even have desktop computers, and if they do, they’re woefully out of date. So, we built a free web service sitting on our GPU servers to do the computation for them.

One of the things that’s interesting about this whole process is what we learned. The first question that came up was, “Should we deploy something?” For me, this was a no-brainer because the harms were so severe and immediate. I was literally talking to people who were severely depressed and had anxiety attacks because of what was going on. It seemed like the stakes were extremely high and you had to do something because there was something that we could do. Turns out many people feel differently.

A number of people in the security community said, “Why would you do this? Don’t. If it’s at all imperfect, if it can be broken in months or years, you’re offering a false sense of security. Can it be future-proof?” But nothing is future-proof, right? Give it 10-20 years, I don’t even know if Generative AI models will be around. Who knows? They will probably be greatly different from what they are now.

We decided on this weird compromise: We made a free app, but offline. Many artists were already paranoid about running more AI on their art. We had to walk this fine line between transparency and gaining trust from the artists.

So what happened after that? A lot of good things. The artists’ reaction globally was really insane. For a while there we got so many emails we couldn’t answer them all. Globally speaking, a lot of artists now use Glaze on a regular basis. A number of art galleries online still post signs that say, “Closed while we Glaze everything,” because Glazing can take a while. More than that, artists have been extremely helpful in helping us develop Glaze; everything from the app layout to logo color schemes has had a ton of input from artists. Some have even taken money out of their own pocket to advertise for Glaze – really quite unexpected.

The minute Glaze was out the door we started working on Nightshade – a poison attack in the wild. The paper came out last week.

Epilogue: The free Nightshade program, released on January 19, 2024, was downloaded 250,000 times within the first five days.

Sampling of news stories:

This Tool Could Protect Your Photos From Facial Recognition
New York Times – August 3, 2020

UChicago scientists develop new tool to protect artists from AI mimicry
University of Chicago News – February 15, 2023

This new data poisoning tool lets artists fight back against generative AI
MIT Technology Review – October 23, 2023

Nature Biotechnology: In this first-person piece, DTI COVID-19 researcher and UC Berkeley Professor of EECS and Bioengineering Jennifer Listgarten writes, “As a longtime researcher at the intersection of artificial intelligence (AI) and biology, for the past year I have been asked questions about the application of large language models and, more generally, AI in science. For example: ‘Since ChatGPT works so well, are we on the cusp of solving science with large language models?’ or ‘Isn’t AlphaFold2 suggestive that the potential of AI in biology and science is limitless?’ And inevitably: ‘Can we use AI itself to bridge the lack of data in the sciences in order to then train another AI?'”

Listgarten continues, “I do believe that AI — equivalently, machine learning — will continue to advance scientific progress at a rate not achievable without it. I don’t think major open scientific questions in general are about to go through phase transitions of progress with machine learning alone. The raw ingredients and outputs of science are not found in abundance on the internet, yet the tremendous power of machine learning lies in data — and lots of them.”

Read more here.

Two DTI researchers were quoted in Quanta about their work on autonomous driving.

Sayan Mitra, a computer scientist at the University of Illinois Urbana-Champaign, leads a team that has managed to prove the safety of lane-tracking capabilities for cars and landing systems for autonomous aircraft. Their strategy is now being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year. “Their method of providing end-to-end safety guarantees is very important,” said Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA’s Ames Research Center.

Their work involves guaranteeing the results of the machine-learning algorithms that are used to inform autonomous vehicles.

The aerospace company Sierra Nevada is currently testing these safety guarantees while landing a drone on an aircraft carrier. This problem is in some ways more complicated than driving cars because of the extra dimension involved in flying.

Read more here.

Image: Señor Salme for Quanta Magazine