“This year, DTI awarded grants to candidates who submitted proposals about applying AI to detect and stop cyberattacks and keep critical infrastructure secure,” writes author Shannon Flynn in the industry media outlet Dark Reading. “This work is timely, especially since several US government agencies recently released a joint warning against malware distributed by foreign adversaries with the sole purpose of disabling essential services.”
Highlighting seven of the 24 awards, the article describes projects spanning all of the university consortium partners, including research on Explainable AI, accuracy improvements to counter “alarm fatigue,” fingerprinting techniques, forensics, an ML and AI cybersecurity stack for the energy industry, DeFi security, and positive-reinforcement cyber-hygiene nudges.
Turning to the team’s Explainable AI research, the author quotes Cyrille Artho, an associate professor at Stockholm’s KTH Royal Institute of Technology, who notes that the size of large neural networks “makes them a ‘black box’ that even experts cannot fully understand.”
“We need to provide users also with models that are simpler and based on approaches where a human can design or modify a model [that may be created by AI or a human], so it is small enough to be understood,” Artho says. “[It] is key that we do not just begrudgingly accept AI as something that is ‘smarter than us and probably right,’ but that we can follow and understand its decisions.”
Read the article here.