The NIST AI Risk Management Framework (AI RMF), published in January 2023, is a comprehensive guide designed to help organizations manage risks associated with AI systems throughout their lifecycle. It is voluntary, rights-preserving, non-sector-specific, and use-case agnostic.
The framework is organized around four core functions: GOVERN (establishing and maintaining AI risk management policies, roles, and culture), MAP (identifying context, categorizing AI systems, and mapping risks), MEASURE (analyzing, assessing, and tracking identified risks using quantitative and qualitative methods), and MANAGE (prioritizing and implementing risk treatment actions and monitoring their effectiveness).
The AI RMF defines seven characteristics of trustworthy AI: Valid and Reliable, Safe, Secure and Resilient, Accountable and Transparent, Explainable and Interpretable, Privacy-Enhanced, and Fair with Harmful Bias Managed.
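As a rough illustration of how the taxonomy above might be operationalized, the four core functions and seven trustworthiness characteristics can be encoded as a simple risk-register structure. This is a hypothetical sketch, not a NIST-provided API; the `new_register` and `log_risk` helpers are invented for this example.

```python
# Illustrative only: a minimal risk-register sketch organized around the
# AI RMF's four core functions and seven trustworthiness characteristics.
# The data structures and helpers below are hypothetical, not a NIST API.

CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

TRUSTWORTHINESS = (
    "Valid and Reliable",
    "Safe",
    "Secure and Resilient",
    "Accountable and Transparent",
    "Explainable and Interpretable",
    "Privacy-Enhanced",
    "Fair with Harmful Bias Managed",
)

def new_register():
    """Empty register: each core function maps to a list of risk entries."""
    return {fn: [] for fn in CORE_FUNCTIONS}

def log_risk(register, function, description, characteristics):
    """Record a risk under a core function, tagged with the
    trustworthiness characteristics it threatens."""
    if function not in CORE_FUNCTIONS:
        raise ValueError(f"unknown function: {function}")
    unknown = set(characteristics) - set(TRUSTWORTHINESS)
    if unknown:
        raise ValueError(f"unknown characteristics: {unknown}")
    register[function].append(
        {"description": description, "characteristics": list(characteristics)}
    )

register = new_register()
log_risk(
    register,
    "MAP",
    "Training data may under-represent key user groups",
    ["Fair with Harmful Bias Managed", "Valid and Reliable"],
)
```

Tagging each logged risk with the characteristics it threatens makes it straightforward to report, per core function, which trustworthiness dimensions remain uncovered.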
The framework is widely referenced by organizations building AI governance programs and is increasingly used as a benchmark for AI compliance. It complements other standards like ISO/IEC 42001 (AI Management System) and the EU AI Act requirements.
