Washington Post: ‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.
Software developers and cybersecurity professionals have long relied on tests and benchmarks to show that traditional software is safe enough to use. The safety standards for LLM-based AI programs don't yet measure up, said Zico Kolter, who co-wrote the prompt injection paper.
Kolter, an associate professor in the School of Computer Science at Carnegie Mellon University, is a C3.ai DTI Principal Investigator in the field of cybersecurity.
Read the article here. See the paper here.
Illustration by Elena Lacey/The Washington Post