The first comprehensive US state-level AI law, requiring impact assessments and transparency for high-risk AI systems that make consequential decisions.
The Colorado Artificial Intelligence Act (SB 24-205), signed into law on 17 May 2024, is the first comprehensive state-level AI legislation in the United States. Set to take effect on 1 February 2026, it establishes obligations for developers and deployers of "high-risk AI systems", meaning AI systems that make, or are a substantial factor in making, "consequential decisions" affecting Coloradans.
The Act defines "consequential decisions" broadly to include decisions that have a material legal or similarly significant effect on consumers in areas such as education, employment, financial services, government services, healthcare, housing, insurance, and legal services. This broad scope means many AI systems used in consumer-facing contexts will be covered.
A key concept in the Colorado AI Act is "algorithmic discrimination," defined as any condition in which the use of an AI system results in an unlawful differential treatment or impact that disfavours an individual or group based on protected characteristics including age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, and veteran status.
For developers of high-risk AI systems, obligations include: providing deployers with documentation about the system's capabilities, limitations, and known risks; making available a summary of training data and known biases; publishing on their website a statement about the types of high-risk AI systems they develop and how they manage risks of algorithmic discrimination; and providing deployers with information needed to complete impact assessments.
For deployers of high-risk AI systems, obligations include: implementing a risk management policy and programme; completing an impact assessment for each high-risk AI system before deployment; providing consumers with notice that AI is being used to make consequential decisions; providing a statement about the purpose, nature, and limitations of the AI system; providing consumers an opportunity to correct inaccurate personal data used by the AI system; and providing consumers an opportunity for human review (with appeal) of adverse consequential decisions.
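In practice, an impact assessment takes the form of a structured record that can be reviewed and updated over time. The sketch below is a minimal illustration of how a deployer might capture the core elements described above; the field names, types, and the résumé-screening example are hypothetical assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: the fields below are assumptions, not the
# Act's prescribed format. A real impact assessment should follow the
# deployer's own risk management policy and legal guidance.
@dataclass
class ImpactAssessment:
    system_name: str                  # the high-risk AI system being assessed
    purpose: str                      # intended use and the consequential decision it supports
    decision_area: str                # e.g. "employment", "housing", "insurance"
    data_categories: list[str]        # categories of consumer data processed
    known_limitations: list[str]      # limitations disclosed by the developer
    discrimination_risks: list[str]   # known or foreseeable risks of algorithmic discrimination
    mitigations: list[str]            # steps taken to reduce those risks
    transparency_measures: list[str]  # consumer notices, explanations, appeal routes
    assessed_on: date = field(default_factory=date.today)
    review_due: date | None = None    # impact assessments are reviewed periodically

# Hypothetical entry for a résumé-screening tool used in hiring
assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    purpose="Rank job applicants for interview shortlisting",
    decision_area="employment",
    data_categories=["work history", "education", "skills"],
    known_limitations=["lower accuracy on sparse CVs"],
    discrimination_risks=["proxy features correlated with age or national origin"],
    mitigations=["feature audit", "quarterly disparate-impact testing"],
    transparency_measures=["pre-decision notice", "human review on request"],
)
```

Keeping assessments in a structured form like this also makes annual reviews and the deployer's compliance documentation easier to maintain.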
The Act also requires deployers to notify the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination. This notification requirement creates a strong incentive for ongoing monitoring and bias detection.
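The Act does not mandate a specific bias metric, but a simple selection-rate comparison is one way a deployer might operationalise that monitoring. The sketch below computes a disparate-impact ratio across two groups; the group labels, toy outcomes, and the conventional four-fifths (0.8) threshold are illustrative assumptions rather than requirements of the Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Favourable-outcome rate per group.

    `decisions` is an iterable of (group, favourable) pairs, e.g.
    ("group_a", True). Group labels here are purely illustrative.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Toy monitoring run over one review period for a high-risk system
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 35 + [("group_b", False)] * 65)

ratio, rates = disparate_impact_ratio(outcomes)
print(rates)                   # {'group_a': 0.6, 'group_b': 0.35}
print(f"ratio = {ratio:.2f}")  # 0.58, below the conventional 0.8 rule of thumb,
                               # which would prompt further investigation
```

A persistent gap like this would feed into the deployer's risk management process and, if it amounts to algorithmic discrimination, into the 90-day notification obligation.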
The Colorado Attorney General has exclusive enforcement authority; there is no private right of action. The Attorney General can seek injunctive relief and civil penalties. Importantly, the Act provides an affirmative defence for developers and deployers who maintain reasonable compliance programmes, including compliance with recognized AI risk management frameworks such as the NIST AI RMF or ISO/IEC 42001.
The Colorado AI Act has been influential, with several other US states introducing similar legislation. It represents a significant development in the patchwork of US AI regulation and provides a template that other states may follow or adapt.
Developers must provide deployers with system documentation, training data summaries, and risk information
Developers must publish a public statement about high-risk AI systems they develop
Deployers must implement a risk management policy and programme for high-risk AI
Deployers must complete impact assessments before deploying high-risk AI systems
Provide consumers notice that AI is used for consequential decisions
Provide consumers a description of the AI system's purpose and limitations
Allow consumers to correct inaccurate personal data used by AI systems
Offer human review and appeal for adverse AI-driven consequential decisions
Notify the Attorney General within 90 days of discovering algorithmic discrimination
Maintain documentation demonstrating compliance
Review and update impact assessments annually
Affirmative defence available for reasonable compliance programmes
The Colorado AI Act takes effect on 1 February 2026. Organisations should already be preparing by inventorying AI systems, conducting impact assessments, and implementing risk management programmes.
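As a starting point, the inventory can be a simple structured list of systems flagged by whether they make or substantially influence consequential decisions. The snippet below is a minimal, hypothetical illustration; the fields and example system names are assumptions an organisation would adapt to its own context.

```python
# Hypothetical AI system inventory; fields and names are illustrative assumptions.
inventory = [
    {"name": "chat-support-bot", "decision_area": None, "consequential": False},
    {"name": "loan-underwriting-model", "decision_area": "financial services", "consequential": True},
    {"name": "tenant-screening-tool", "decision_area": "housing", "consequential": True},
]

# Systems involved in consequential decisions are candidates for impact
# assessments and risk management programmes under the Act.
high_risk = [s["name"] for s in inventory if s["consequential"]]
print(high_risk)  # ['loan-underwriting-model', 'tenant-screening-tool']
```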
A consequential decision is one that has a material legal or similarly significant effect on a consumer in areas including education, employment, financial services, government services, healthcare, housing, insurance, and legal services.
No. Only the Colorado Attorney General has enforcement authority. However, the Act requires notification to the AG of algorithmic discrimination, creating accountability. Compliance with a recognized framework such as the NIST AI RMF or ISO/IEC 42001 supports an affirmative defence.
