A voluntary framework by the US National Institute of Standards and Technology for managing risks across the AI system lifecycle.
The NIST AI Risk Management Framework (AI RMF 1.0), released on 26 January 2023, is a voluntary guidance document developed by the National Institute of Standards and Technology to help organisations design, develop, deploy, and use AI systems in a trustworthy and responsible manner. While not legally binding, the AI RMF has become a de facto standard for AI governance in the United States and is increasingly referenced in regulatory guidance worldwide.
The framework is structured around four core functions: GOVERN, MAP, MEASURE, and MANAGE. These functions provide a flexible, structured approach to AI risk management that can be adapted to any organisation's size, sector, and risk tolerance.
The GOVERN function establishes and maintains the organisational structures, policies, and processes for AI risk management. It emphasises leadership accountability, stakeholder engagement, and the integration of AI risk management into broader enterprise risk management. This function recognises that effective AI governance requires cultural change, not just technical controls.
The MAP function establishes the context in which AI systems operate. It involves identifying and categorising AI systems, understanding their intended purposes and potential impacts, and recognising the broader societal context of deployment. Mapping also covers identifying relevant stakeholders and understanding the applicable legal and regulatory landscape.
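In practice, mapping often begins with a machine-readable inventory of AI systems. The sketch below is a minimal illustration of such a record; the field names, values, and risk tiers are our own assumptions, not structures prescribed by NIST:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative MAP-stage inventory entry (all fields are hypothetical)."""
    name: str
    purpose: str                           # intended use, in plain language
    context: str                           # deployment setting
    stakeholders: list[str] = field(default_factory=list)
    legal_requirements: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"          # e.g. "low" / "medium" / "high"

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Rank job applicants for recruiter review",
        context="internal HR tooling",
        stakeholders=["applicants", "recruiters", "legal"],
        legal_requirements=["EEOC guidance", "state AI hiring laws"],
        risk_tier="high",
    ),
]

# Surface high-risk systems for deeper MEASURE and MANAGE treatment.
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

Keeping the inventory as structured data (rather than a spreadsheet of free text) makes the later MEASURE and MANAGE steps queryable and auditable.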
The MEASURE function focuses on employing quantitative and qualitative methods to analyse, assess, benchmark, and monitor AI risks and their related impacts. This includes developing metrics for trustworthiness characteristics such as accuracy, fairness, privacy, security, resilience, transparency, explainability, and accountability.
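As a concrete illustration of measurement, the sketch below computes two simple metrics over a toy set of predictions: overall accuracy and a demographic parity gap (the difference in positive-prediction rates between two groups). The data, group labels, and metric choice are invented for illustration; the AI RMF does not mandate specific metrics:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

# Toy data: binary labels, binary predictions, two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                # 0.75
print(demographic_parity_gap(y_pred, groups))  # 0.0
```

Real deployments would track many such metrics across the trustworthiness characteristics listed above, and monitor them continuously rather than at a single point in time.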
The MANAGE function involves allocating resources and implementing plans to respond to, recover from, and communicate about AI risks. It includes prioritising risks, implementing mitigation strategies, and establishing processes for ongoing monitoring and adjustment.
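A common way to prioritise risks, consistent with the resource-allocation framing above, is a simple impact-by-likelihood scoring matrix. The sketch below is an illustrative example with invented risks and scales, not a NIST-prescribed method:

```python
# Score each identified risk as impact x likelihood (both on a 1-5 scale).
risks = [
    {"name": "biased outputs in hiring model",   "impact": 5, "likelihood": 3},
    {"name": "model drift in demand forecaster", "impact": 3, "likelihood": 4},
    {"name": "prompt injection in support chatbot", "impact": 4, "likelihood": 4},
]

for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

# Highest-scoring risks receive a response strategy first
# (mitigate, transfer, accept, or avoid).
prioritised = sorted(risks, key=lambda r: r["score"], reverse=True)
```

The output ordering drives resource allocation: in this toy example, prompt injection (score 16) would be addressed before the hiring-model bias (15) and the forecaster drift (12).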
The companion document, the NIST AI RMF Playbook, provides suggested actions and references for each subcategory, making the framework highly practical for implementation. The Playbook is a living document that NIST updates as practices evolve.
The AI RMF was developed through an extensive multi-stakeholder process involving hundreds of organisations from industry, academia, civil society, and government. This collaborative development process has given the framework broad legitimacy and acceptance across sectors.
In the context of the US regulatory landscape, the AI RMF serves as a foundational reference. Executive Order 14110 on Safe, Secure, and Trustworthy AI (October 2023) explicitly references the NIST AI RMF, and several US federal agencies have incorporated it into their AI governance guidance. State-level AI legislation, such as the Colorado AI Act (2024), also references the NIST framework.
Key implementation actions under each core function include:
GOVERN: Establish AI governance structures with clear roles, responsibilities, and accountability
GOVERN: Develop organisational AI risk management policies and processes
GOVERN: Foster a culture of responsible AI development and use
MAP: Inventory and categorise all AI systems by context, purpose, and risk
MAP: Identify intended and unintended impacts of AI systems on people and communities
MAP: Understand legal, regulatory, and ethical requirements for each AI system
MEASURE: Develop and apply metrics for AI trustworthiness characteristics
MEASURE: Assess AI system performance, fairness, bias, and reliability
MEASURE: Monitor AI systems for drift, degradation, and emerging risks
MANAGE: Prioritise identified AI risks based on impact and likelihood
MANAGE: Implement risk response strategies (mitigate, transfer, accept, avoid)
MANAGE: Establish incident response and communication plans for AI failures
Engage diverse stakeholders throughout the AI lifecycle
Document and communicate AI risk management activities and decisions
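The monitoring action above (watching for drift and degradation) can be sketched as a rolling-accuracy check that raises an alert when performance falls below a floor. The window size and threshold here are illustrative assumptions:

```python
from collections import deque

class DriftMonitor:
    """Toy rolling-accuracy monitor; parameters are illustrative, not NIST-prescribed."""

    def __init__(self, window=100, min_accuracy=0.9):
        self.window = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> bool:
        """Log one prediction outcome; return True when an alert should fire."""
        self.window.append(correct)
        accuracy = sum(self.window) / len(self.window)
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.window) == self.window.maxlen and accuracy < self.min_accuracy

# Simulate a stream that is only 50% accurate against an 80% floor.
monitor = DriftMonitor(window=10, min_accuracy=0.8)
alerts = [monitor.record(i % 2 == 0) for i in range(20)]
```

Here no alert fires while the window is filling, then every subsequent observation triggers one, which would feed the incident-response and communication plans described under MANAGE.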
Is the NIST AI RMF mandatory?
No. The NIST AI RMF is a voluntary framework. However, it is referenced in the US Executive Order on AI and is increasingly used as a benchmark in federal procurement, state legislation, and industry standards. Adoption signals responsible AI practice.
How does the NIST AI RMF differ from the EU AI Act?
The NIST AI RMF is a voluntary risk management framework, while the EU AI Act is binding law. The RMF provides flexible guidance for managing AI risks; the EU AI Act imposes specific legal obligations backed by penalties. Many organisations use both: the RMF for governance and the EU AI Act for legal compliance.
Does the NIST AI RMF cover generative AI?
Yes. NIST published the Generative AI Profile (NIST AI 600-1) in July 2024, following an April 2024 draft, providing specific guidance for managing risks associated with generative AI systems, including content provenance, information integrity, and novel risks such as hallucination and access to CBRN-related information.
