ActiveUnited StatesFrameworkEffective: 2023-01-26

NIST AI Risk Management Framework

A voluntary framework by the US National Institute of Standards and Technology for managing risks across the AI system lifecycle.

📋 Overview

The NIST AI Risk Management Framework (AI RMF 1.0), released on 26 January 2023, is a voluntary guidance document developed by the National Institute of Standards and Technology to help organisations design, develop, deploy, and use AI systems in a trustworthy and responsible manner. While not legally binding, the AI RMF has become a de facto standard for AI governance in the United States and is increasingly referenced in regulatory guidance worldwide.

The framework is structured around four core functions: Govern, Map, Measure, and Manage. These functions provide a flexible, structured approach to AI risk management that can be adapted to any organisation's size, sector, and risk tolerance.

The GOVERN function establishes and maintains the organisational structures, policies, and processes for AI risk management. It emphasises leadership accountability, stakeholder engagement, and the integration of AI risk management into broader enterprise risk management. This function recognises that effective AI governance requires cultural change, not just technical controls.

The MAP function is about understanding the context in which AI systems operate. It involves identifying and categorising AI systems, understanding their intended purposes and potential impacts, and recognising the broader societal context of AI deployment. Mapping also includes identifying relevant stakeholders and understanding the legal and regulatory landscape.
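The inventory step described above can be sketched as a simple record structure. This is a minimal illustration only; the field names (`purpose`, `risk_tier`, and so on) are hypothetical and not defined by the AI RMF itself, which leaves the shape of an inventory to each organisation.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for the MAP function (field names are hypothetical)."""
    name: str
    purpose: str                        # intended use, in plain language
    deployment_context: str             # where and for whom the system operates
    affected_stakeholders: list[str] = field(default_factory=list)
    applicable_rules: list[str] = field(default_factory=list)  # laws, regulations, policies
    risk_tier: str = "unassessed"       # e.g. low / medium / high, per internal policy

# Example entry for a hypothetical system
record = AISystemRecord(
    name="resume-screening-model",
    purpose="Rank job applications for recruiter review",
    deployment_context="Internal HR, US-based hiring",
    affected_stakeholders=["job applicants", "recruiters"],
    applicable_rules=["EEOC guidance", "NYC Local Law 144"],
)
print(record.name, "->", record.risk_tier)
```

Keeping entries like this in a central register is one straightforward way to satisfy the "inventory and categorise" expectation before any measurement work begins.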

The MEASURE function focuses on employing quantitative and qualitative methods to analyse, assess, benchmark, and monitor AI risks and their related impacts. This includes developing metrics for trustworthiness characteristics such as accuracy, fairness, privacy, security, resilience, transparency, explainability, and accountability.
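As a concrete illustration of the quantitative side of MEASURE, the sketch below computes two of many possible metrics: plain accuracy and a demographic parity gap (the difference in positive-prediction rates between groups). The AI RMF does not prescribe specific metrics; these are common examples, and the data here is invented for illustration.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy data: first four records belong to group "a", last four to group "b"
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                       # 0.875
print(demographic_parity_difference(y_pred, groups))  # 0.25
```

In practice these numbers would be tracked over time against agreed thresholds, which is also how the monitoring requirement (drift and degradation) is typically operationalised.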

The MANAGE function involves allocating resources and implementing plans to respond to, recover from, and communicate about AI risks. It includes prioritising risks, implementing mitigation strategies, and establishing processes for ongoing monitoring and adjustment.
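The prioritisation step in MANAGE is often implemented as a simple impact-times-likelihood scoring of identified risks. The sketch below assumes a 1-5 scale for each dimension; both the scale and the example risks are illustrative, not part of the framework.

```python
# Hypothetical risk register entries, scored on a 1-5 scale for each dimension
risks = [
    {"risk": "biased outputs in hiring model", "impact": 5, "likelihood": 3},
    {"risk": "model drift in demand forecast", "impact": 3, "likelihood": 4},
    {"risk": "prompt injection in chatbot",    "impact": 4, "likelihood": 4},
]

# Score and rank: highest impact x likelihood first
for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

prioritised = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritised:
    print(f'{r["score"]:>2}  {r["risk"]}')
```

Each ranked risk would then be assigned one of the four response strategies (mitigate, transfer, accept, avoid) and an owner, closing the loop back to the GOVERN function's accountability structures.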

The companion document, the NIST AI RMF Playbook, provides suggested actions and references for each subcategory, making the framework highly practical for implementation. The Playbook is a living document that NIST updates as practices evolve.

The AI RMF was developed through an extensive multi-stakeholder process involving hundreds of organisations from industry, academia, civil society, and government. This collaborative development process has given the framework broad legitimacy and acceptance across sectors.

In the context of the US regulatory landscape, the AI RMF serves as a foundational reference. The October 2023 Executive Order on Safe, Secure, and Trustworthy AI (EO 14110) explicitly references the NIST AI RMF, and several US federal agencies have incorporated it into their AI governance guidance. State-level AI legislation, such as the Colorado AI Act, also references NIST standards.

⚖️ Key Requirements

1. GOVERN: Establish AI governance structures with clear roles, responsibilities, and accountability
2. GOVERN: Develop organisational AI risk management policies and processes
3. GOVERN: Foster a culture of responsible AI development and use
4. MAP: Inventory and categorise all AI systems by context, purpose, and risk
5. MAP: Identify intended and unintended impacts of AI systems on people and communities
6. MAP: Understand legal, regulatory, and ethical requirements for each AI system
7. MEASURE: Develop and apply metrics for AI trustworthiness characteristics
8. MEASURE: Assess AI system performance, fairness, bias, and reliability
9. MEASURE: Monitor AI systems for drift, degradation, and emerging risks
10. MANAGE: Prioritise identified AI risks based on impact and likelihood
11. MANAGE: Implement risk response strategies (mitigate, transfer, accept, avoid)
12. MANAGE: Establish incident response and communication plans for AI failures
13. Engage diverse stakeholders throughout the AI lifecycle
14. Document and communicate AI risk management activities and decisions

📅 Key Dates & Timeline

July 2021
NIST publishes Request for Information on AI RMF
March 2022
Initial draft AI RMF released for public comment
August 2022
Second draft released
26 January 2023
AI RMF 1.0 officially released
January 2023
AI RMF Playbook published alongside framework
October 2023
US Executive Order on AI references NIST AI RMF
April 2024
Draft Generative AI Profile released for public comment (finalised in July 2024 as NIST AI 600-1)
2025
Ongoing updates to Playbook and profiles

🏢 Who It Affects

  • US federal agencies (referenced in Executive Order on AI)
  • AI developers and deployers seeking a governance framework
  • Organisations responding to US state-level AI legislation
  • Government contractors developing or procuring AI systems
  • Any organisation seeking to demonstrate responsible AI practices
  • International organisations looking for alignment with US AI governance expectations

Frequently Asked Questions

Is the NIST AI RMF legally mandatory?

The NIST AI RMF is a voluntary framework. However, it is referenced in the US Executive Order on AI and is increasingly used as a benchmark in federal procurement, state legislation, and industry standards. Adoption signals responsible AI practice.

How does the NIST AI RMF differ from the EU AI Act?

The NIST AI RMF is a voluntary risk management framework, while the EU AI Act is a binding law. The RMF provides flexible guidance for managing AI risks; the EU AI Act imposes specific legal obligations with penalties. Many organisations use both: the RMF for governance and the EU AI Act for legal compliance.

Does the NIST AI RMF address generative AI?

Yes. NIST released the Generative AI Profile (NIST AI 600-1) in 2024, providing specific guidance for managing risks associated with generative AI systems, including content provenance, information integrity, and risks such as confabulation ("hallucination") and access to CBRN weapons information.
