Free Template

ISO 42001 Gap Analysis Template
Are You Ready for AI Management System Certification?

ISO 42001 is the world's first international standard for AI Management Systems. Whether you are pursuing certification or simply want to assess your AI governance maturity, this template walks you through every clause — helping you identify gaps, prioritise remediation, and build an audit-ready evidence base.


How to Use This Template

1. Work through each clause with your AI governance team
2. Assign a maturity rating: Not Started / Partially Met / Largely Met / Fully Met
3. Document evidence and identify gaps with root cause
4. Build a remediation roadmap with owners and dates
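The four steps above can be sketched as a simple gap register. This is an illustrative sketch only: the field names, rating weights, and unweighted scoring are assumptions for demonstration, not part of ISO 42001.

```python
# Illustrative gap register: record a maturity rating, evidence, and gap per
# clause, then compute an overall readiness percentage. The numeric weights
# assigned to each rating are example assumptions, not defined by ISO 42001.
from dataclasses import dataclass

RATINGS = {"Not Started": 0.0, "Partially Met": 0.33,
           "Largely Met": 0.66, "Fully Met": 1.0}

@dataclass
class ClauseAssessment:
    clause: str          # e.g. "4.1"
    rating: str          # one of the RATINGS keys
    evidence: str = ""   # pointer to audit-ready evidence
    gap: str = ""        # root cause of any shortfall

def readiness(assessments: list[ClauseAssessment]) -> float:
    """Overall readiness as a percentage (simple unweighted mean)."""
    if not assessments:
        return 0.0
    return 100 * sum(RATINGS[a.rating] for a in assessments) / len(assessments)

gap_register = [
    ClauseAssessment("4.1", "Largely Met", evidence="context-analysis-v2.pdf"),
    ClauseAssessment("5.2", "Partially Met", gap="AI policy not yet signed off"),
]
print(f"Readiness: {readiness(gap_register):.1f}%")
```

In practice you would weight clauses by audit significance rather than averaging them equally; the mean is used here only to keep the sketch short.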

Clause 4

Context of the Organisation

Understanding your AI context, stakeholders, and the scope of your AI Management System.

4.1 — Understanding the organisation and its context
Document internal and external factors affecting AI use: business objectives, regulatory environment, competitive landscape, and AI risk appetite.
Not Started / Partially Met / Largely Met / Fully Met
4.2 — Understanding stakeholder needs
Identify all parties with an interest in your AI systems: customers, regulators, employees, suppliers, and society. Document their requirements and expectations.
Not Started / Partially Met / Largely Met / Fully Met
4.3 — Determining the scope of the AI MS
Define the boundaries of your AI Management System. Which AI systems, processes, and organisational units are in scope? Document exclusions with justification.
Not Started / Partially Met / Largely Met / Fully Met
4.4 — AI Management System
Establish, implement, maintain, and continually improve an AI MS in accordance with ISO 42001. Assign ownership and integrate with existing management systems.
Not Started / Partially Met / Largely Met / Fully Met
Clause 5

Leadership

Top management commitment, AI policy, and organisational roles and responsibilities.

5.1 — Leadership and commitment
Evidence that top management actively supports the AI MS: assigns resources, integrates AI risk into enterprise risk management, and champions responsible AI.
Not Started / Partially Met / Largely Met / Fully Met
5.2 — AI Policy
A documented AI policy signed by top management. Must state AI objectives, commitment to compliance, and continual improvement. Published internally.
Not Started / Partially Met / Largely Met / Fully Met
5.3 — Roles, responsibilities, and authorities
Defined roles for AI governance: AI owner, risk officer, data protection officer (where applicable), and operational AI teams. Responsibilities are communicated and understood.
Not Started / Partially Met / Largely Met / Fully Met
Clause 6

Planning

Risk and opportunity assessment, AI objectives, and planning for change.

6.1 — Actions to address risks and opportunities
Systematic process for identifying AI-specific risks (bias, safety, security, privacy) and opportunities. Risks are assessed, treated, and monitored.
Not Started / Partially Met / Largely Met / Fully Met
6.1.2 — AI risk assessment
Documented AI risk assessment methodology. Risks are scored by likelihood and impact. Assessment is repeated when AI systems change materially.
Not Started / Partially Met / Largely Met / Fully Met
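Clause 6.1.2 asks for risks to be scored by likelihood and impact. A minimal sketch of one common approach, a 5×5 scoring matrix, is shown below; the scales, thresholds, and priority bands are example assumptions, since ISO 42001 does not mandate a particular methodology.

```python
# Illustrative 5x5 risk scoring (likelihood x impact) for an AI risk register.
# The 1-5 scales and the band thresholds are example assumptions.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on 1-5 likelihood and 1-5 impact scales."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a score to a treatment priority band (example thresholds)."""
    if score >= 15:
        return "high"    # treatment plan required before go-live
    if score >= 8:
        return "medium"  # treat within an agreed timeline
    return "low"         # may be accepted with documented sign-off

# e.g. a model-bias risk rated likely (4) with major impact (4):
print(risk_level(risk_score(4, 4)))  # high
```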
6.1.3 — AI risk treatment
Risk treatment plan with selected controls, treatment options (avoid, reduce, transfer, accept), and residual risk sign-off by accountable executives.
Not Started / Partially Met / Largely Met / Fully Met
6.2 — AI objectives and planning
Measurable AI objectives aligned to the AI policy. Objectives have owners, timelines, success measures, and are reviewed at management reviews.
Not Started / Partially Met / Largely Met / Fully Met
Clause 7

Support

Resources, competence, awareness, communication, and documented information.

7.1 — Resources
Adequate human, technical, and financial resources allocated to the AI MS. Resource needs are reviewed annually and escalated when insufficient.
Not Started / Partially Met / Largely Met / Fully Met
7.2 — Competence
Competency requirements defined for all AI-related roles. Training records maintained. Gaps identified and addressed through development plans.
Not Started / Partially Met / Largely Met / Fully Met
7.3 — Awareness
All staff with AI responsibilities are aware of the AI policy, their contribution to AI MS effectiveness, and the consequences of non-conformance.
Not Started / Partially Met / Largely Met / Fully Met
7.4 — Communication
Internal and external communication plan for AI governance matters. Stakeholders know how to report AI concerns and how updates will be shared.
Not Started / Partially Met / Largely Met / Fully Met
7.5 — Documented information
AI MS documentation is controlled: created, updated, and retained with appropriate access controls, version management, and retention schedules.
Not Started / Partially Met / Largely Met / Fully Met
Clause 8

Operation

Operational planning, AI system lifecycle management, and supply chain controls.

8.1 — Operational planning and control
AI systems are developed, deployed, and operated according to documented processes. Changes are assessed for AI risk before implementation.
Not Started / Partially Met / Largely Met / Fully Met
8.2 — AI risk assessment (operational)
Risk assessments are conducted for each AI system in scope, updated when systems change, and reviewed annually at minimum.
Not Started / Partially Met / Largely Met / Fully Met
8.3 — AI risk treatment (operational)
Risk treatment plans are implemented and their effectiveness monitored. Residual risks are formally accepted by the accountable owner.
Not Started / Partially Met / Largely Met / Fully Met
8.4 — AI system impact assessment
Impact assessments conducted for high-risk AI systems: assessing effects on individuals, groups, and society. Results documented and reviewed before deployment.
Not Started / Partially Met / Largely Met / Fully Met
8.5 — AI system lifecycle
Documented lifecycle processes covering design, development, testing, deployment, monitoring, and decommissioning. Each phase has defined gates and sign-offs.
Not Started / Partially Met / Largely Met / Fully Met
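The lifecycle gates described in clause 8.5 can be enforced mechanically: a system only advances to the next phase once the current gate has a recorded sign-off. A minimal sketch, assuming the phase names from the text above and a hypothetical sign-off record:

```python
# Illustrative lifecycle gate check for clause 8.5. The phase sequence follows
# the template text; the sign-off record structure is an example assumption.
LIFECYCLE_PHASES = ["design", "development", "testing",
                    "deployment", "monitoring", "decommissioning"]

def next_phase(current: str, signoffs: dict[str, str]) -> str:
    """Advance to the next phase only if the current gate has a sign-off."""
    idx = LIFECYCLE_PHASES.index(current)
    if current not in signoffs:
        raise PermissionError(f"{current} gate has no sign-off; cannot advance")
    if idx == len(LIFECYCLE_PHASES) - 1:
        return current  # already at the final phase
    return LIFECYCLE_PHASES[idx + 1]

signoffs = {"design": "cto", "development": "eng-lead", "testing": "qa-lead"}
print(next_phase("testing", signoffs))  # deployment
```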
8.6 — Related organisational controls
Controls addressing data governance, third-party AI use, responsible AI practices, and human oversight are documented and operationalised.
Not Started / Partially Met / Largely Met / Fully Met
Clause 9

Performance Evaluation

Monitoring, measurement, internal audit, and management review.

9.1 — Monitoring, measurement, analysis, and evaluation
KPIs for AI system performance, fairness, reliability, and security are defined, measured regularly, and reported to management.
Not Started / Partially Met / Largely Met / Fully Met
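KPI monitoring under clause 9.1 is easiest to evidence when threshold breaches are detected automatically. The sketch below is illustrative only: the metric names and thresholds are example assumptions, not ISO 42001 requirements.

```python
# Illustrative KPI threshold check for clause 9.1. Metric names and limits
# are example assumptions; uptime is checked as a floor, the rest as ceilings.
KPI_THRESHOLDS = {
    "fairness_disparity": 0.10,  # max acceptable gap between group outcomes
    "uptime": 0.995,             # minimum availability
    "incident_count": 3,         # max AI incidents per reporting period
}

def breaches(measured: dict[str, float]) -> list[str]:
    """Return the names of KPIs that breach their threshold."""
    out = []
    for name, value in measured.items():
        limit = KPI_THRESHOLDS[name]
        bad = value < limit if name == "uptime" else value > limit
        if bad:
            out.append(name)
    return out

print(breaches({"fairness_disparity": 0.12, "uptime": 0.999, "incident_count": 1}))
```

Feeding the breach list into the management review (clause 9.3) gives auditors a direct trail from measurement to action.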
9.2 — Internal audit
Annual internal audit programme covering all in-scope clauses. Auditors are competent and independent. Non-conformities are tracked to closure.
Not Started / Partially Met / Largely Met / Fully Met
9.3 — Management review
Annual management review of the AI MS covering audit results, risks, objectives, resource needs, and stakeholder feedback. Minutes and action items documented.
Not Started / Partially Met / Largely Met / Fully Met
Clause 10

Improvement

Non-conformity management, corrective action, and continual improvement.

10.1 — Non-conformity and corrective action
Process for identifying, documenting, and closing AI MS non-conformities. Root cause analysis performed. Corrective actions tracked and verified.
Not Started / Partially Met / Largely Met / Fully Met
10.2 — Continual improvement
Systematic approach to continually improving the suitability, adequacy, and effectiveness of the AI MS. Improvement initiatives are tracked and measured.
Not Started / Partially Met / Largely Met / Fully Met

Related Resources

ISO 42001 Glossary · Platform Overview · AI Governance Framework · All Templates

Accelerate your ISO 42001 readiness

Aona AI provides the AI governance controls, audit trails, and policy enforcement that ISO 42001 requires — built for enterprise deployment.

Book a Demo →