
SRI Director David Lie and CS collaborators awarded $5.6 million for cutting-edge research on robust, secure, and safe AI

SRI Director David Lie is leading a team of 18 researchers in a new end-to-end analysis of the AI pipeline—from data acquisition and security to model training, privacy protection, and beyond. (Photo by Shelby El Otmani. Courtesy of the Berkman Klein Center for Internet & Society's Institute for Rebooting Social Media, Harvard University.)

SRI Director David Lie, who is cross-appointed to the Department of Computer Science, and 18 collaborators—including five other SRI researchers—will receive $5.6 million in grants over the next four years to develop solutions for critical artificial intelligence (AI) challenges.

The substantial funding, granted by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Communications Security Establishment Canada (CSE), is earmarked for cutting-edge research on robust, secure, and safe AI.

The grant is part of a joint program established in August 2023 between NSERC and CSE, aimed at bolstering research focused on four topics of strategic importance to CSE and the Government of Canada.

Lie's project, "An End-to-End Approach to Safe and Secure AI Systems," addresses the first of the four topics. It will aim to create methods to train AI models in situations where reliable data is unavailable, develop techniques to ensure AI models are robust, fair, and interpretable, and establish guidelines for AI use to ensure regulatory compliance.

Lie will lead a community of 18 researchers from four Canadian universities: the University of Toronto, Concordia University, the University of Waterloo, and York University. Five of the researchers are affiliated with SRI and the Department of Computer Science, including the institute's Associate Director, a Research Lead, and three Faculty Affiliates.

Seven additional Department of Computer Science professors are also researchers on the project.

Lie says that with many researchers representing diverse areas of expertise, the project is ambitious in both scope and content.

"Our work will cover everything from privacy to interpretability to auditing to formal verification to data cleaning and management," says Lie. "So it's not just everything in AI, but it's the entire AI pipeline—from where you acquire data, to making sure that it hasn't been tampered with, all the way through training to make sure that the training is protecting privacy."

Lie says this substantial financial support for research, particularly in the area of AI safety, is very encouraging and provides a unique opportunity in today's context. As AI develops rapidly, experts around the world have increasingly stressed the importance of creating sound mechanisms and institutions to prevent misuse and mitigate risks.

In May 2024, 25 experts, including McIlraith, an SRI advisory board member and professor emeritus, and SRI Faculty Affiliates Gillian Hadfield and Tegan Maharaj, published a paper in Science ahead of the AI Safety Summit in South Korea. The paper highlighted the world's lack of preparedness for AI risks and called for stronger action on R&D and governance measures.

The paper emphasizes that AI systems can cause harm by eroding social justice and stability, enabling large-scale criminal activity, and facilitating automated warfare. These risks are only expected to grow as companies work to further develop autonomous AI.

"AI safety research is lagging," according to the paper. "Humanity is pouring vast resources into making AI systems more powerful but far less into their safety and mitigating their harms. Only an estimated 1 to 3 per cent of AI publications are on safety."

To Lie, the grant marks a crucial milestone in advancing AI safety research in Canada. 

"Traditionally, CSE does not provide a lot of funds for academic research. So the fact that this program exists and has so many resources for a crucial area says a lot. It's really good for Canada in general," he says.