June 9, 2022

Anthony Joseph on Cybersecurity and Latency

by Kap Stann in Article

C3.ai DTI sat down with Anthony Joseph, Chancellor’s Professor of Electrical and Computer Engineering at UC Berkeley, a cybersecurity expert, and a new DTI awardee, to ask him more about his project, “Scalable, Secure Machine Learning in the Presence of Adversaries.” See a clip of the interview here.

DTI: Tell us about your cybersecurity project.
AJ: We develop applications around secure machine learning – machine learning in the presence of adversaries – where you’re trying to make a security-sensitive decision. So those adversaries, or “bad actors,” are trying to influence or manipulate the outcomes of that decision-making. Some examples are spam filtering, malware detection, financial fraud detection, and so on. We’ve developed these algorithms, and now the challenge is: how do we deploy those algorithms in environments where we care about latency, the time it takes to make that decision?
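To make the threat concrete, here is a minimal, hypothetical sketch (not the project’s actual algorithms): a naive keyword-based spam filter, and an adversary that evades it by obfuscating trigger words — exactly the kind of manipulation secure machine learning has to withstand.

```python
# Toy illustration of adversarial evasion (assumed example, not the
# project's methods): a keyword-based spam filter and an obfuscated
# message that slips past it.

SPAM_KEYWORDS = {"winner", "prize", "free"}

def is_spam(message: str) -> bool:
    """Flag a message if any known spam keyword appears as a word."""
    words = message.lower().split()
    return any(word in SPAM_KEYWORDS for word in words)

original = "You are a winner claim your free prize"
evasive = "You are a w1nner claim your fr3e pr1ze"  # adversarial obfuscation

print(is_spam(original))  # True
print(is_spam(evasive))   # False: the adversary evades the filter
```

A robust, security-sensitive detector has to anticipate this kind of manipulation rather than trust that inputs look like the training data.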

DTI: Not the cloud?
AJ: Traditionally, we might think of deploying these algorithms in the cloud; however, the cloud can be relatively far from the edge of the network, where the actual decision is needed. In the case of robotics, where we’re doing path-planning, we need to make that decision very quickly, because, say, we’ve got a moving robot arm, and the tens of milliseconds it takes to go to the cloud is far too long.
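The arithmetic behind that constraint can be sketched with illustrative numbers (assumed for this example, not measurements from the project): a control loop running at 100 Hz leaves only a 10 ms budget per decision.

```python
# Back-of-the-envelope latency budget with illustrative figures:
# a robot arm control loop at 100 Hz allows 10 ms per decision.

control_rate_hz = 100
budget_ms = 1000 / control_rate_hz   # 10 ms per control step

cloud_round_trip_ms = 40   # tens of milliseconds to a distant data center
edge_round_trip_ms = 2     # nearby edge device

print(cloud_round_trip_ms <= budget_ms)  # False: the cloud misses the deadline
print(edge_round_trip_ms <= budget_ms)   # True: the edge fits the budget
```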

DTI: What are the alternatives?
AJ: So we deploy our algorithms at the edge, in these edge devices, but those are accessible to users. Now we’re talking about algorithms that may have taken millions or tens of millions of dollars to develop, along with the associated models and parameters, and we’re putting them out at the edge – how do we protect that code, those model parameters, and that data from adversarial manipulation, or even from simply being read?
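One slice of that problem — detecting whether model parameters on an edge device have been tampered with — can be sketched with a keyed hash. This is a simplified, assumed analogy for illustration only: real deployments (for example, hardware enclaves) also protect confidentiality, which this sketch does not.

```python
# Simplified integrity-check sketch (illustrative only): an edge device
# verifies an HMAC tag over model parameters before using them, so
# adversarial manipulation is detected. The key and model are hypothetical.
import hashlib
import hmac
import json

SECRET_KEY = b"shared-provisioning-key"  # hypothetical shared secret

def sign(params: dict) -> str:
    """Compute an HMAC-SHA256 tag over a canonical encoding of the params."""
    blob = json.dumps(params, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest()

def verify(params: dict, tag: str) -> bool:
    """Constant-time comparison of the expected and presented tags."""
    return hmac.compare_digest(sign(params), tag)

model = {"weights": [0.12, -0.7, 1.3], "bias": 0.05}
tag = sign(model)

print(verify(model, tag))      # True: parameters are intact
model["weights"][0] = 9.9      # adversarial manipulation on the device
print(verify(model, tag))      # False: tampering is detected
```

Integrity checking alone still leaves the parameters readable on the device; keeping them confidential as well is where hardware-backed approaches come in.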

DTI: What results are you hoping for?
AJ: We’re really excited to be working with the C3 AI Suite and Azure Confidential Computing, which would allow us to secure the code, the data, and so on. What we learn from securely deploying our algorithms at the edge or in the cloud will inform how we deploy in practice: when somebody has their own hardware, how do we deploy on that hardware in a way that protects the privacy and the integrity of the model, and of the data that the company may be pushing out?

DTI: What applications do you envision?
AJ: The work we’re developing is applicable to a broad range of areas. We’re starting with path-planning for robotics, which is very latency-sensitive. As another example, network intrusion detection is typically done locally – again, it involves private data, the company’s network data, and you don’t want that to leave the premises. So instead, you’d like to take the algorithms that have been developed for that anomaly detection and push them out to the edge.
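A toy version of the kind of anomaly detector one might push to the edge can be sketched as follows (illustrative numbers and threshold, assumed for this example): flag a traffic count that deviates sharply from the local baseline, without the data ever leaving the premises.

```python
# Toy edge-local anomaly detector (illustrative only): flag request
# counts more than a few standard deviations from the site's baseline.
import statistics

baseline = [120, 118, 125, 130, 122, 119, 127, 124]  # requests/min, normal
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: float, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from baseline."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(123))  # False: within normal variation
print(is_anomalous(480))  # True: possible intrusion or scan
```

Because both the baseline and the live counts stay on the company’s own devices, the private network data never has to leave its premises.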