A sweeping executive order establishing AI safety standards, reporting requirements, and agency guidance across the US federal government.
Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," was signed by President Biden on 30 October 2023. It is the most comprehensive US federal action on AI governance to date, establishing requirements across federal agencies and setting expectations that ripple through the private sector.
The Executive Order leverages existing authorities, particularly the Defense Production Act, to require companies developing the most powerful AI models to share safety test results and other critical information with the federal government. It directs NIST to develop standards for red-team testing of AI systems before public release.
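The EO does not prescribe a machine-readable format for these disclosures. Purely as an illustration of the kind of information involved, a developer's internal tooling might capture red-team results in a structured record along the lines of the following Python sketch (all field names and values here are invented for illustration, not drawn from the EO or any agency guidance):

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical internal record for red-team safety results.
# Field names are illustrative; the EO defines no reporting schema.
@dataclass
class RedTeamFinding:
    category: str          # e.g. "cyber offense", "CBRN uplift"
    severity: str          # e.g. "low" / "medium" / "high"
    description: str
    mitigations: list[str] = field(default_factory=list)

@dataclass
class SafetyTestReport:
    model_name: str
    training_flops: float  # estimated total training compute
    findings: list[RedTeamFinding] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

report = SafetyTestReport(
    model_name="example-model-v1",
    training_flops=3.2e26,
    findings=[RedTeamFinding(
        category="cyber offense",
        severity="medium",
        description="Model produced partial exploit code under adversarial prompting.",
        mitigations=["refusal fine-tuning", "output filtering"],
    )],
)
print(report.to_json())
```

Keeping findings in a structured form like this makes it straightforward to aggregate results across test rounds, whatever format a regulator ultimately asks for.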
Key provisions span eight priority areas: new safety and security standards for AI, protecting Americans' privacy, advancing equity and civil rights, standing up for consumers and workers, promoting innovation and competition, advancing American leadership abroad, ensuring responsible government use of AI, and addressing the needs of the AI workforce.
For the private sector, the most immediately impactful provisions include: reporting requirements for companies training large AI models (above certain compute thresholds), requirements for AI-generated content watermarking and authentication standards, guidance on AI use in critical infrastructure, and expectations for addressing algorithmic discrimination.
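The technical details of watermarking and content authentication were left to Commerce and NIST to work out, and production schemes (statistical watermarks embedded in model outputs, C2PA-style signed provenance metadata) are considerably more involved. As a minimal sketch of the underlying authentication idea only, the following uses Python's standard library to tag content with a keyed hash that a verifier holding the same key can check; the key and content are invented for illustration:

```python
import hmac
import hashlib

# Minimal sketch of content authentication: the generator tags content
# with an HMAC under a secret key, and a verifier holding the same key
# can check the tag. Real provenance standards (e.g. C2PA) and statistical
# watermarks for model outputs are far more sophisticated than this.
SECRET_KEY = b"example-key-do-not-use-in-production"  # hypothetical key

def tag_content(content: bytes) -> str:
    """Return a hex authentication tag for a piece of generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check a tag in constant time to avoid timing side channels."""
    return hmac.compare_digest(tag_content(content), tag)

generated = b"An AI-generated paragraph."
tag = tag_content(generated)
assert verify_content(generated, tag)           # authentic content passes
assert not verify_content(b"edited text", tag)  # tampered content fails
```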
The EO directed numerous federal agencies to issue guidance and take action within specific timeframes. NIST was tasked with developing AI safety and security guidelines, the Department of Commerce with establishing reporting requirements for foundation models, HHS with guidance on AI in healthcare, DHS with guidance on AI in critical infrastructure, and DOE with AI model evaluation tools and testbeds, among many others.
However, EO 14110 was revoked in January 2025 following the change in administration, which signalled a markedly different approach to AI governance. Organisations should monitor developments closely while recognising that many of the frameworks and standards developed under the EO (such as NIST's AI RMF extensions) retain independent value.
Even with the EO revoked, many of its directives catalysed important work that continues: NIST AI standards development, AI safety research, and agency-specific AI governance frameworks. The EO also influenced state-level AI legislation and international AI governance discussions.
For compliance professionals, the EO's legacy includes the emphasis on red-team testing, the concept of compute-based thresholds for regulatory attention, the integration of AI governance into existing regulatory frameworks, and the recognition that AI governance requires whole-of-government coordination.
The EO's headline directives include:
- Companies developing dual-use foundation models above the compute thresholds must report to the federal government
- Developers must share the results of red-team safety tests with the government
- The Department of Commerce must develop standards for watermarking and authenticating AI-generated content
- Federal agencies must designate Chief AI Officers and implement AI governance programmes
- Agencies must conduct AI impact assessments for rights-impacting AI uses
- Agencies must advance privacy-preserving AI techniques and research
- Agencies must address algorithmic discrimination in AI systems
- NIST must develop AI safety standards and red-teaming guidelines
- Agencies must protect workers from AI-related displacement and workplace surveillance
- Agencies must support responsible AI use in healthcare, education, and criminal justice
EO 14110 was revoked by executive order in January 2025. However, many of the standards and frameworks it catalysed (the NIST AI RMF and related guidance, agency governance programmes) continue independently. Check the current status of federal AI policy before relying on specific provisions.
The EO's binding requirements apply primarily to federal agencies. However, it affects the private sector through reporting requirements for large AI model developers, guidance that shapes industry standards, and its influence on federal procurement and contracting.
The EO requires reporting for models trained using more than 10^26 integer or floating-point operations, or 10^23 operations for models primarily trained on biological sequence data. These thresholds capture only the most powerful models.
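The EO does not specify how training compute should be estimated. Assuming the common back-of-the-envelope heuristic that dense transformer training costs roughly 6 x parameters x training tokens operations, a rough threshold check might look like the following sketch (the model sizes are hypothetical):

```python
# Rough sketch: compare an estimated training-compute figure against the
# EO 14110 reporting thresholds. The 6 * N * D approximation for dense
# transformers is a common heuristic, not a method specified by the EO.
GENERAL_THRESHOLD = 1e26  # operations, general-purpose models
BIO_THRESHOLD = 1e23      # operations, models trained primarily on
                          # biological sequence data

def estimated_training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

def reporting_triggered(ops: float, bio_sequence_model: bool = False) -> bool:
    """Return True if the estimate exceeds the applicable threshold."""
    threshold = BIO_THRESHOLD if bio_sequence_model else GENERAL_THRESHOLD
    return ops > threshold

# Example: a hypothetical 1-trillion-parameter model trained on 20 trillion tokens.
ops = estimated_training_ops(params=1e12, tokens=20e12)
print(f"{ops:.2e} ops -> report: {reporting_triggered(ops)}")  # 1.20e+26 -> True
```

Note how high the general threshold sits: only frontier-scale training runs come anywhere near it, which is the point of a compute-based trigger for regulatory attention.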
