AI Experts Call for Policy to Avoid Extreme Risks

October 30, 2023

TIME: Twenty-four AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document focuses on extreme risks, such as AI enabling criminal or terrorist activities. Concrete policy recommendations include ensuring that major tech companies devote at least one-third of their AI R&D budgets to promoting safe and ethical AI use; the paper also calls for national and international safety standards.

This statement differs from previous expert-led open letters, says UC Berkeley’s Stuart Russell, because “Governments have understood that there are real risks. They are asking the AI community, ‘What is to be done?’ The statement is an answer to that question.” Co-authors include historian Yuval Harari and MacArthur “genius” grantee Dawn Song, UC Berkeley professor of computer science and DTI Principal Investigator on cybersecurity.

Read the article here. Read the paper, “Managing AI Risks in an Era of Rapid Progress,” here.

Illustration by Lon Tweeten for TIME magazine