
AI Risk Assessment Checklist

A structured checklist for evaluating your AI risk posture across 7 critical domains. Score your compliance, identify gaps, and prioritise remediation with built-in risk scoring.

Updated March 2026 · 7 risk domains · NIST AI RMF, ISO 42001, EU AI Act aligned

7 domains: complete risk coverage
37 items: checklist checkpoints
3 frameworks: NIST AI RMF, ISO 42001, EU AI Act
Free: to use and customise

Why You Need an AI Risk Assessment

Most organisations have deployed AI tools and models without a structured risk assessment process. As AI usage scales and regulators tighten oversight, the gap between perceived and actual AI risk exposure is becoming a material business issue — not just a compliance checkbox.

77% of organisations report AI-related security incidents
Data exposure through AI tools is now among the top security concerns for enterprise CISOs, growing faster than traditional attack vectors.

€35M maximum EU AI Act fine for prohibited practices (or 7% of global annual turnover, whichever is higher)
Deploying high-risk AI without proper risk assessment and conformity procedures can result in significant regulatory penalties.

68% of AI incidents involve shadow AI tools
Employees using unapproved AI tools outside IT oversight are the most common source of AI-related data incidents.

6 months: the typical gap between AI deployment and risk assessment
Most organisations deploy AI systems before completing a formal risk assessment, creating a dangerous window of unmanaged exposure.

The Risk Assessment Checklist

Work through each domain systematically. Check off items as fully met, note partial gaps, and flag missing controls for remediation.

Data Privacy

Assess how your organisation handles personal and sensitive data in the context of AI systems — both for training and inference.

Data inventory completed
All personal data used to train, fine-tune, or prompt AI models is documented in your data inventory/register.
Lawful basis established
A lawful basis under GDPR (or equivalent) has been identified for each AI processing activity involving personal data.
Data minimisation applied
AI systems only receive the minimum personal data necessary for their function; excess data is masked or excluded.
Data subject rights process exists
A defined process exists for responding to data subject access requests (DSARs) that may involve AI-generated outputs or AI-processed data.
Cross-border transfer controls
Where AI services process data outside the EEA, appropriate transfer mechanisms (SCCs, adequacy decisions) are in place.
Privacy impact assessment conducted
A DPIA has been completed for high-risk AI processing activities, with documented outcomes and mitigations.
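A domain like the one above can be captured in a simple machine-readable form so that scores roll up automatically. A minimal sketch, assuming a green/amber/red status per item and an illustrative half-credit weighting for partial compliance (the statuses and weights below are assumptions, not part of the template):

```python
# Sketch: one checklist domain as data, with a simple compliance score.
# Status values and weights are illustrative assumptions.

STATUS_WEIGHT = {"green": 1.0, "amber": 0.5, "red": 0.0}  # fully / partially / not met

data_privacy = {
    "Data inventory completed": "green",
    "Lawful basis established": "amber",
    "Data minimisation applied": "red",
    "Data subject rights process exists": "green",
    "Cross-border transfer controls": "amber",
    "Privacy impact assessment conducted": "red",
}

def domain_score(items: dict[str, str]) -> float:
    """Fraction of the domain's controls that are met (amber counts half)."""
    return sum(STATUS_WEIGHT[s] for s in items.values()) / len(items)

def gaps(items: dict[str, str]) -> list[str]:
    """Controls needing remediation, reds before ambers."""
    order = {"red": 0, "amber": 1}
    flagged = [(name, s) for name, s in items.items() if s != "green"]
    return [name for name, s in sorted(flagged, key=lambda x: order[x[1]])]

print(f"Data privacy score: {domain_score(data_privacy):.0%}")
for item in gaps(data_privacy):
    print("remediate:", item)
```

Repeating this per domain gives a comparable score across all seven, which feeds directly into the prioritisation step described below.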

How to Conduct This Assessment

Follow these five steps to run a structured AI risk assessment that produces actionable outputs, not just a checklist artefact.

1
Assemble your cross-functional assessment team
Include IT security, legal/compliance, data privacy, business process owners, and an AI/ML representative. AI risk spans multiple disciplines and a siloed assessment will create blind spots.
2
Inventory all AI systems in scope
List every AI tool, model, and service in production or active piloting — including shadow AI tools used without IT approval. Use network traffic analysis, browser extension data, or employee surveys to surface unapproved tools.
3
Score each domain systematically
Work through each of the 7 domains for every AI system in scope. Mark items as fully met (green), partially met (amber), or not met (red). Document evidence for each assessment decision.
4
Prioritise gaps by risk severity
Rank identified gaps by the combination of likelihood and impact. Items that could cause immediate regulatory violation, data breach, or significant business disruption should be treated as critical and assigned immediate remediation owners.
5
Build a remediation roadmap and re-assessment schedule
Create a tracked remediation plan with named owners, deadlines, and success criteria. Schedule the next full assessment in 6–12 months, and define triggers that will initiate an interim re-assessment (new system deployment, significant model update, regulatory change).
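The prioritisation in step 4 can be sketched as a likelihood × impact ranking. The 1–5 scales, the example gaps, and the "critical" threshold below are illustrative assumptions, not values prescribed by the checklist:

```python
# Sketch: rank remediation gaps by likelihood x impact (step 4).
# Scales, example gaps, and the critical threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Gap:
    item: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    owner: str = "unassigned"

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact  # 1 .. 25

gaps = [
    Gap("No DPIA for high-risk AI processing", likelihood=4, impact=5),
    Gap("Shadow AI tools not inventoried", likelihood=5, impact=4),
    Gap("Cross-border transfer mechanism missing", likelihood=2, impact=4),
]

CRITICAL = 15  # at or above this, assign an immediate remediation owner

for gap in sorted(gaps, key=lambda g: g.severity, reverse=True):
    tier = "CRITICAL" if gap.severity >= CRITICAL else "scheduled"
    print(f"{gap.severity:>2}  {tier:9}  {gap.item}")
```

Keeping the ranked output in a tracked register, with an owner per gap, gives you the remediation roadmap of step 5 almost for free.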


Turn Your Risk Assessment into Continuous Monitoring

A point-in-time risk assessment is just the starting point. Aona provides continuous AI risk monitoring — automatically discovering shadow AI, detecting sensitive data in prompts, and maintaining a live risk register that stays current as your AI landscape evolves.

Book a Demo