Advanced AI Security Research Released in Communications of the ACM Magazine

Malicious AI Models Undermine Software Supply-Chain Security

Our latest research on the challenges posed by malicious AI models has been published in Communications of the ACM under the title "Malicious AI Models Undermine Software Supply-Chain Security." The research was conducted in collaboration with Dr. Sherali Zeadally, a professor at the University of Kentucky.

The increasing integration of Artificial Intelligence (AI) models into software development and deployment pipelines introduces a novel and potent threat to software supply chain security. Unlike traditional malware embedded in code, malicious AI models can subtly alter behavior, influence decisions, or exfiltrate data from within systems where they are expected to perform legitimate functions. This unique capability allows attackers to compromise software at a deeper, more insidious level, leveraging the model’s inherent complexity and predictive power for nefarious purposes, often without triggering conventional security alarms.

The primary vectors for introducing these compromised AI models are tainted development tools, maliciously modified libraries, and pre-trained models sourced from untrusted repositories. Once integrated, these models can act as Trojan horses, executing unauthorized code, exfiltrating sensitive data, manipulating data integrity, or enabling unauthorized access to critical systems. The article presents an attack flow model that breaks down how sophisticated payloads move through the supply chain to achieve their malicious objectives, posing a significant challenge to an organization's overall cybersecurity posture.
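To make the Trojan-horse risk concrete, here is a minimal sketch (not taken from the article) of why unsafe model serialization is such an effective delivery vector. Python's pickle format, still common for distributing model weights, lets a serialized object specify an arbitrary callable via `__reduce__`; the deserializer invokes it during loading. The `MaliciousModel` class and the benign `eval` payload below are purely illustrative stand-ins for real attacker code:

```python
import pickle

class MaliciousModel:
    """Illustrative stand-in for a booby-trapped model object."""

    def __reduce__(self):
        # At unpickling time the deserializer calls eval("40 + 2")
        # instead of reconstructing the object -- arbitrary code runs
        # with the privileges of whatever process loads the "model".
        return (eval, ("40 + 2",))

# The attacker ships these bytes as a pre-trained model file.
payload = pickle.dumps(MaliciousModel())

# The victim merely loads the file; the embedded callable executes.
obj = pickle.loads(payload)
print(obj)  # 42 -- proof the attacker-chosen expression ran on load
```

A real payload would substitute a reverse shell or data-exfiltration routine for `eval`, and no signature on the surrounding application code would change, which is exactly why this class of attack evades conventional scanning.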

Detecting and mitigating such threats is particularly challenging due to the inherent opacity and complexity of AI models. Their “black box” nature often makes it difficult to ascertain whether a model is behaving maliciously or simply performing as designed, especially when the malicious payload is subtle or designed to activate under specific, rare conditions. Furthermore, traditional signature-based security defenses are ill-equipped to identify these novel, behavior-driven attacks, as they do not rely on static code signatures or easily identifiable patterns. This limitation necessitates a fundamental shift in how organizations approach software supply chain security to account for these intelligent and adaptive adversaries.

To address these growing risks, organizations must implement a multi-layered defense strategy. Key recommendations include rigorously vetting and sourcing AI models only from trusted repositories, employing cryptographic validation to ensure model integrity, and maintaining strict, controlled access to third-party AI assets. Additionally, utilizing secure model serialization formats and sandboxing AI models within isolated execution environments are crucial steps to contain potential threats. The overarching message emphasizes the critical need for continuous vigilance, proactive security measures, and a deep understanding of the unique characteristics of AI-driven attack payloads to foster effective threat detection, prevention, and mitigation in this evolving threat landscape.
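The cryptographic-validation recommendation above can be sketched in a few lines: the publisher records a digest of the model artifact out of band, and the consumer recomputes and compares it before loading. This is an illustrative sketch, not code from the article; the file name and contents are hypothetical:

```python
import hashlib
import hmac
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify_model(path: str, pinned_digest: str) -> bool:
    """Compare against the publisher's digest using a timing-safe check."""
    return hmac.compare_digest(sha256_of(path), pinned_digest)

# Simulate the supply chain: publisher ships weights plus a digest
# recorded out of band (e.g., in a signed release manifest).
path = os.path.join(tempfile.mkdtemp(), "model.safetensors")
with open(path, "wb") as f:
    f.write(b"trusted model weights")
pinned = sha256_of(path)  # recorded at publish time

ok_before = verify_model(path, pinned)  # intact file passes
with open(path, "ab") as f:
    f.write(b"tampered")                # in-transit modification
ok_after = verify_model(path, pinned)   # digest mismatch fails
```

Digest pinning only establishes that the artifact is the one the publisher released; it says nothing about whether the publisher's model is itself trustworthy, which is why it is paired here with trusted sourcing, access control, and sandboxed execution.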

The complete magazine is available here.

About the author

Aditya K Sood
Aditya K Sood (Ph.D.) is the VP of Security Engineering and AI Strategy at Aryaka. With more than 18 years of experience, he provides strategic leadership in information security, covering products and infrastructure. Dr. Sood is interested in Artificial Intelligence (AI), cloud security, malware automation and analysis, application security, and secure software design. He has authored several papers for various magazines and journals, including IEEE, Elsevier, CrossTalk, ISACA, Virus Bulletin, and USENIX. He has been an active speaker at industry conferences and has presented at Black Hat, DEF CON, Hack In The Box, RSA, Virus Bulletin, OWASP, and many others. Dr. Sood obtained his Ph.D. in Computer Science from Michigan State University. He is also the author of the books "Targeted Cyber Attacks," "Empirical Cloud Security," and "Combating Cyberattacks Targeting the AI Ecosystem." He has held positions such as Senior Director of Threat Research and Security Strategy, Head (Director) of Cloud Security, Chief Architect of Cloud Threat Labs, and Lead Architect and Researcher while working for companies such as F5 Networks, Symantec, Blue Coat, Elastica, and KPMG.