In the last article, we discussed how integrating AI into business-critical systems exposes enterprises to a new set of AI security and AI safety risks. Rather than reinventing the wheel or relying on fragmented, improvised approaches, organizations should build on established standards and best practices to stay ahead of cybercriminals and other adversaries.

To manage these challenges, enterprises need to adopt a formal approach by using a set of frameworks that map AI threats, define controls, and guide responsible adoption. In this article, we’ll explore the evolving AI security and safety threat landscape, drawing from leading efforts such as MITRE ATLAS, NIST, OWASP, and others.

Note: Before diving into frameworks, it’s important to understand the differences between AI security and AI safety. Check out our previous article, which provides key characteristics and examples of each.

MITRE ATLAS: Mapping AI threats

The Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) from MITRE is one of the most comprehensive resources for AI-specific attack techniques. Similar to the well-known MITRE ATT&CK framework for cybersecurity, ATLAS catalogs the tactics, techniques, and procedures (TTPs) that adversaries employ to exploit machine learning (ML) systems, including:

  • Data poisoning: corrupting training data to manipulate outcomes
  • Model evasion: crafting inputs to trick models into misclassification
  • Model theft: replicating a proprietary model through repeated queries

Enterprises can use MITRE ATLAS to anticipate adversary tactics and integrate AI threat modeling into existing red-teaming and penetration testing practices.
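
To make model evasion concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic evasion technique of the kind ATLAS catalogs. It assumes a PyTorch classifier and inputs normalized to the [0, 1] range; it is an illustration for red-team exercises, not a production tool:

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    # Craft an adversarial example: perturb the input in the direction
    # that increases the classification loss the most.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the gradient, then clamp to a valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

A model that confidently classifies x will often misclassify fgsm_attack(model, x, y), even though the two inputs look identical to a human. Exercises like this fit naturally into the red-teaming practices mentioned above.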

Figure: MITRE ATLAS matrix

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF provides a structured methodology for managing AI risks across the lifecycle. Its core functions—Map, Measure, Manage, and Govern—help organizations identify risks, measure their likelihood and impact, and put controls in place.

Key considerations include:

  • Governance practices for trustworthy AI
  • Alignment with ethical principles
  • Risk-based prioritization for AI deployments

This framework is particularly useful for enterprises building a holistic AI governance program.
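
As an illustration of how the Map, Measure, Manage, and Govern functions can translate into day-to-day practice, here is a hypothetical risk-register entry in Python. The field names and the simple scoring scheme are our own assumptions for the sketch, not part of the AI RMF itself:

from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str      # Map: identify the risk in its deployment context
    likelihood: int       # Measure: 1 (rare) to 5 (frequent)
    impact: int           # Measure: 1 (minor) to 5 (severe)
    controls: list[str] = field(default_factory=list)  # Manage: mitigations
    owner: str = "unassigned"  # Govern: named accountability

    @property
    def score(self) -> int:
        # Simple likelihood x impact score for risk-based prioritization
        return self.likelihood * self.impact

risks = [
    AIRisk("Training data poisoning in ingest pipeline", 2, 5,
           ["dataset provenance checks", "outlier detection"], "ml-platform-team"),
    AIRisk("Prompt injection against customer chatbot", 4, 3,
           ["input filtering", "least-privilege tool access"], "app-security-team"),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.description} -> {r.owner}")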

Figure: AI RMF Core

NIST Adversarial Machine Learning (AML) taxonomy

To complement the AI RMF, NIST also offers an AML taxonomy that categorizes the different classes of attacks on AI and ML systems. It identifies:

  • Evasion attacks during inference
  • Poisoning attacks during training
  • Extraction and inversion attacks targeting model confidentiality

This taxonomy helps enterprises translate AI security and AI safety risks into familiar categories for cybersecurity teams.
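
For example, a poisoning attack during training can be as simple as flipping labels in a compromised dataset. The sketch below simulates that with NumPy so defenders can study the effect on a trained model; the function name and the default 5% poisoning rate are illustrative assumptions:

import numpy as np

def flip_labels(y: np.ndarray, target: int, poison: int,
                fraction: float = 0.05, seed: int = 0) -> np.ndarray:
    # Simulate a label-flipping poisoning attack: relabel a fraction
    # of the target class so the trained model learns a skewed boundary.
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target)
    chosen = rng.choice(idx, size=int(len(idx) * fraction), replace=False)
    y_poisoned[chosen] = poison
    return y_poisoned

Training one model on clean labels and another on flip_labels(y, target=1, poison=0), then comparing their accuracy, is a simple way to quantify how sensitive a pipeline is to this attack class.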

Figure: Taxonomy of attacks on GenAI systems

OWASP AI Exchange

The Open Worldwide Application Security Project (OWASP), known for its web security guidance, has launched multiple initiatives in the AI security and safety space. Two of these are the AI Security & Privacy Guide and the OWASP AI Exchange. These resources focus on building AI applications securely, addressing:

  • Insecure model configuration
  • Supply chain risks in AI pipelines
  • AI-specific vulnerabilities in APIs and model endpoints

Two documents deserve special mention as variants of the popular OWASP Top 10, applied here to AI security: the OWASP Machine Learning Security Top Ten and the OWASP Top 10 for Large Language Model Applications. For developers, OWASP provides actionable checklists to embed security into the AI software development lifecycle.
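
As one concrete illustration of the kind of risk the LLM Top 10 covers (insecure output handling and sensitive information disclosure), here is a deliberately naive output filter for a model endpoint. The patterns and function are our own sketch, not an OWASP-provided control; real defenses layer policy enforcement, context-aware filtering, and human review on top of checks like this:

import re

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]"),
    re.compile(r"BEGIN (RSA|OPENSSH|EC) PRIVATE KEY"),
]

def screen_output(text: str) -> str:
    # Block responses that look like leaked credentials before they
    # reach the caller; pass everything else through unchanged.
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            return "[response withheld: potential sensitive-data disclosure]"
    return text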

Figure: Threat model with controls

ISO/IEC standards for AI

At the international level, ISO/IEC JTC 1/SC 42 develops AI standards covering governance, lifecycle management, and risk. ISO/IEC 42001:2023 is the first international standard specifically designed for AI management systems (AIMS), just as ISO 9001 is for quality management systems (QMS) and ISO/IEC 27001 is for information security management systems (ISMS). It provides a structured framework for organizations to responsibly develop, deploy, and manage AI systems, with a strong emphasis on ethical considerations, risk management, transparency, and accountability.

While ISO/IEC 42001:2023 covers the entire AI management system, ISO/IEC 23894:2023 focuses squarely on AI risk management. It complements general risk management frameworks by addressing the unique risks and challenges posed by AI, such as algorithmic bias, lack of transparency, and unintended outcomes. The standard supports the responsible use of AI by promoting a systematic, proactive approach to risk, enhancing trust, safety, and compliance with ethical and regulatory expectations.

These standards provide a globally recognized baseline that enterprises can align with, especially those operating in multiple jurisdictions.

ENISA AI Threat Landscape

The European Union Agency for Cybersecurity (ENISA) has mapped out AI-specific threats in its AI Threat Landscape. This includes not only adversarial attacks but also systemic risks like software supply chain vulnerabilities and ethical misuse.

ENISA’s mapping helps enterprises connect technical vulnerabilities to broader organizational risks.

Figure: EU breakdown of the number of threats

Responsible AI standardization

Responsible AI considerations are essential to ensure that AI systems, especially powerful generative models, are developed and deployed in ways that are ethical, transparent, safe, and aligned with human values.

Besides “classic” technical security issues, the rapid development of AI technologies brings additional risks, such as misinformation, bias, misuse, and lack of accountability. To address these challenges, a community of industry experts under the Linux Foundation AI & Data Foundation has developed the Responsible Generative AI Framework (RGAF), which offers a practical, structured approach to managing responsibility in the development and use of generative AI (gen AI) systems. RGAF identifies nine key dimensions of responsible AI, such as transparency, accountability, robustness, and fairness. Each dimension outlines relevant risks and recommends actionable mitigation strategies.

RGAF complements existing high-level standards (such as ISO/IEC 42001:2023 and ISO/IEC 23894:2023, among others) by focusing specifically on gen AI concerns, and it aligns with global policies and regulations to support interoperability and responsible innovation, building on open source principles and tools.

Figure: AI security and safety frameworks

Conclusion

No single framework addresses the full scope of AI security and safety. Instead, enterprises should draw from multiple sources.

By blending these perspectives, organizations can create a holistic, defense-in-depth strategy that leverages existing cybersecurity investments while addressing the novel risks AI introduces.

Navigate your AI journey with Red Hat. Contact Red Hat AI Consulting Services to discuss AI security and safety for your business.


About the authors

Ishu Verma is a Technical Evangelist at Red Hat focused on emerging technologies like edge computing, IoT, and AI/ML. He and fellow open source hackers work on building solutions with next-gen open source technologies. Before joining Red Hat in 2015, Verma worked at Intel on IoT gateways and building end-to-end IoT solutions with partners. He has been a speaker and panelist at IoT World Congress, DevConf, Embedded Linux Forum, Red Hat Summit, and other on-site and virtual forums. He lives in the Valley of the Sun, Arizona.

Florencio has had cybersecurity in his veins since he was a kid. He started in cybersecurity around 1998 (time flies!), first as a hobby and then professionally. His first job required him to develop a host-based intrusion detection system in Python for Linux for a research group at his university. Between 2008 and 2015 he ran his own startup, which offered cybersecurity consulting services. He was CISO and head of security for a large retail company in Spain (more than 100,000 RHEL devices, including POS systems). Since 2020, he has worked at Red Hat as a Product Security Engineer and Architect.
