Canada's proposed federal AI legislation, part of Bill C-27, establishing requirements for high-impact AI systems.
The Artificial Intelligence and Data Act (AIDA) is Canada's proposed federal AI legislation, introduced as Part 3 of Bill C-27, the Digital Charter Implementation Act. First tabled in June 2022, AIDA would establish a regulatory framework for AI systems in Canada, with a particular focus on "high-impact" AI systems.
AIDA is designed to promote responsible AI development and deployment while supporting Canada's position as a leader in AI innovation. The Act would establish requirements for organisations that design, develop, make available, or manage the operation of AI systems in the course of international or interprovincial trade and commerce.
The Act's centrepiece is the concept of "high-impact" AI systems, which would be defined through regulations rather than in the Act itself. This approach provides flexibility to adapt the definition as AI technology evolves but has been criticised for creating uncertainty about which systems will be captured.
Key obligations under AIDA would include: assessing whether AI systems are high-impact, establishing measures to identify and mitigate risks of harm or biased output, monitoring compliance with mitigation measures, maintaining records of risk assessments and mitigation measures, publishing plain-language descriptions of AI systems, and notifying the Minister of Innovation when AI systems may cause material harm.
AIDA would also create the position of an AI and Data Commissioner to support the Minister in administering and enforcing the Act, including auditing compliance and recommending enforcement action. Penalties for violations could reach the greater of $10 million or 3% of gross global revenues, rising to the greater of $25 million or 5% of gross global revenues for the most serious offences.
The legislative journey of AIDA has been protracted. Bill C-27 passed second reading in the House of Commons in April 2023 and was studied by the Standing Committee on Industry, Science and Technology (INDU). The committee proposed significant amendments, including strengthening individual rights, clarifying the definition of high-impact AI, and adding provisions for algorithmic transparency.
However, AIDA's future became uncertain when Parliament was prorogued in January 2025, causing Bill C-27 to die on the order paper. If the bill is reintroduced, it may be significantly revised. Despite this uncertainty, AIDA signals Canada's likely direction on AI regulation, and organisations should monitor developments closely.
Canada also has existing laws that apply to AI, including PIPEDA (Personal Information Protection and Electronic Documents Act), the Canadian Human Rights Act, and sector-specific regulations. These create binding obligations for AI systems even in the absence of AIDA.
Assess whether AI systems qualify as 'high-impact' under regulatory criteria
Implement measures to identify, assess, and mitigate risks of harm from high-impact AI
Establish measures to address risks of biased output in AI systems
Monitor compliance with risk mitigation measures throughout AI system lifecycle
Maintain records of assessments, mitigation measures, and monitoring activities
Publish plain-language descriptions of high-impact AI systems
Notify the Minister of Innovation when AI systems may cause material harm
Comply with regulations to be developed under AIDA
Prohibition on AI systems that cause serious harm (physical or psychological)
Requirement to make AI systems available for audit by the Commissioner
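The assessment, mitigation, and record-keeping obligations listed above could be tracked in a simple internal structure. Below is a minimal sketch in Python; all field names, severity labels, and the notification test are hypothetical illustrations, since AIDA itself prescribes no data format and its regulations were never finalised.

```python
# Hypothetical record-keeping structure mirroring the AIDA-style obligations
# above: assess high-impact status, log risks and mitigations, and flag when
# notification may be required. All names and criteria are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    system_name: str
    assessed_on: date
    high_impact: bool                               # obligation: assess high-impact status
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)   # obligation: mitigation measures
    monitoring_notes: list = field(default_factory=list)

    def requires_notification(self) -> bool:
        # Placeholder test for "may cause material harm"; real criteria
        # would have come from AIDA regulations.
        return self.high_impact and any(
            r.get("severity") == "material" for r in self.identified_risks
        )

record = RiskAssessment(
    system_name="resume-screening-model",
    assessed_on=date(2024, 11, 1),
    high_impact=True,
    identified_risks=[{"risk": "biased output", "severity": "material"}],
    mitigations=["quarterly bias audit", "human review of rejections"],
)
print(record.requires_notification())  # True
```

Keeping assessments in a structured record like this also covers the documentation obligation directly: the record itself is the evidence of assessment, mitigation, and monitoring.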
Is AIDA currently in force? No. AIDA was part of Bill C-27, which died on the order paper when Parliament was prorogued in January 2025. It may be reintroduced in a future session, potentially in revised form. However, existing Canadian laws (PIPEDA, the Canadian Human Rights Act) already apply to AI systems.
How is a "high-impact" AI system defined? AIDA delegates the definition of "high-impact" to regulations, which were never finalised. The companion document indicated categories such as systems used in employment, lending, criminal justice, healthcare, and content moderation.
Should organisations prepare for AIDA anyway? Yes. Many AIDA obligations align with international best practices. Implementing AI risk management, bias testing, and transparency measures is valuable regardless of AIDA's legislative status, and these practices support compliance with existing Canadian laws and international regulations.
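As one concrete example of the bias testing mentioned above, a common starting point is to compare positive-outcome rates across demographic groups. The sketch below computes a simple demographic parity gap; the group names and decision data are purely illustrative, and real bias testing would use several metrics, not just this one.

```python
# Illustrative bias-testing sketch: demographic parity difference, i.e. the
# largest gap in positive-outcome rates between any two groups.

def demographic_parity_difference(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions.
    Returns the max difference in positive-outcome rates between groups."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions from an AI system, split by group.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% positive
}
gap = demographic_parity_difference(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.250
```

A gap near zero suggests similar treatment across groups on this metric; a large gap is a signal to investigate, not proof of unlawful bias on its own.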
