
Third-Party AI Risk Assessment

A weighted assessment of third-party AI vendors across 7 categories — from data sovereignty and security to explainability, ethics, and contractual protection.

Updated April 2026 · 7 weighted categories · Produces an audit-ready vendor decision

7 categories of weighted risk coverage · Weighted overall score 0–100% · Approve / Conditional / Reject vendor decision output · Free to use and customise

Why Third-Party AI Needs Its Own Assessment

Most AI exposure inside enterprises today is third-party. Every SaaS tool you already use is racing to add AI features — and each one quietly reshapes your risk surface. A standard vendor security questionnaire will not catch whether the vendor trains on your data, whether model outputs can be explained to an auditor, or whether their responsible-AI posture survives scrutiny. This assessment covers what security-only reviews miss.

7 categories · AI-specific risk coverage
Data sovereignty, model transparency, security, compliance, resilience, ethics, and contracts, each weighted to reflect real exposure.

ISO 42001 · Annex A.10 evidence
Third-party AI assessments are expected evidence under ISO 42001 Annex A.10 (Third-party relationships) for the AI management system.

SaaS AI · embedded AI reaches further
AI features are now shipping inside tools your teams already use. Each material one needs a fresh assessment; the security review from 2023 does not cover them.

Decision · audit-ready approval output
Produces an overall weighted score, a classification band, and a documented approve / conditional / reject decision, not a qualitative memo.

The Assessment Categories

Seven categories, each weighted to reflect real-world risk. Score each criterion 1–5 with documented evidence, then apply the weights to produce an overall decision.

Data Sovereignty · 25% weight

The highest-weighted category: how the vendor handles, stores, and transfers your data, and, critically, whether they train on it.

Data Processing Agreement in place
An AI-specific DPA (or AI addendum) has been signed. Standard DPA language rarely covers training-data use, model-memorization risk, or vendor sub-processors.
Data is NOT used for model training
The vendor contractually commits that your data will not be used to train, fine-tune, or evaluate their foundation models. Verified in writing, not on a sales slide.
Data residency requirements met
Where the vendor processes data geographically aligns with your residency obligations (EU data stays in EU, etc.) and any cross-border transfer safeguards are documented.
Input/output logging policy documented
The vendor's logging of prompts and completions is documented: what is logged, for how long, who can access it, and whether it can be disabled for sensitive workflows.
Encryption at rest and in transit
Vendor encrypts stored data and all network communication to current standards (e.g., AES-256 at rest, TLS 1.2 or higher in transit). Key management is either vendor-managed with attestation, or customer-managed (BYOK).
DSAR process for AI-processed data
The vendor has a defined process for data-subject access and deletion requests that covers AI-processed data, including any memorized training data.
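A category score can be treated as the average of its 1–5 criterion scores. A minimal sketch, using shorthand names for the six data-sovereignty criteria above (the identifiers are illustrative, not from any template schema); per the scoring rule used in this assessment, an unanswered criterion defaults to 1:

```python
# Score one category: each criterion gets 1 (critical risk) to 5
# (minimal risk) with documented evidence; unanswered criteria
# default to 1, since absence of evidence is not evidence of control.

CRITERIA = [  # shorthand for the six criteria listed above
    "dpa_in_place",
    "no_training_on_customer_data",
    "data_residency_met",
    "io_logging_documented",
    "encryption_at_rest_and_in_transit",
    "dsar_covers_ai_data",
]

def category_score(scores: dict[str, int]) -> float:
    """Average the 1-5 criterion scores, defaulting missing ones to 1."""
    return sum(scores.get(c, 1) for c in CRITERIA) / len(CRITERIA)

# Example: two criteria still unanswered drag the category down.
partial = {
    "dpa_in_place": 4,
    "no_training_on_customer_data": 5,
    "data_residency_met": 4,
    "encryption_at_rest_and_in_transit": 4,
}
print(category_score(partial))  # (4+5+4+1+4+1)/6 ≈ 3.17
```

Defaulting to 1 rather than skipping the criterion keeps an incomplete evidence pack from inflating the score.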

How to Run the Assessment

Five steps to produce a scored vendor decision that survives the scrutiny of security, legal, procurement, and — if the day comes — a regulator.

1
Capture vendor and use-case context
Record vendor, product, AI capabilities used, data types processed, business owner, and contract dates. The same vendor can score very differently across use cases — assess per use case, not per vendor.
2
Gather evidence before scoring
Request the vendor's SOC 2 / ISO 27001 / ISO 42001 reports, DPA, model documentation, pen-test summary, and responsible-AI policy. Scoring without evidence produces a wish list, not an assessment.
3
Score each category 1–5
Work through all 7 categories. Score each criterion 1 (critical risk) to 5 (minimal risk) with documented evidence. Unanswered criteria should default to 1 — absence of evidence is not evidence of control.
4
Apply weights and calculate overall score
Apply the default weights (25% data / 20% security / 15% transparency / 15% compliance / 10% resilience / 10% ethics / 5% contracts) to produce an overall percentage. Adjust weights upward for regulated industries.
5
Issue recommendation and capture sign-off
Based on the overall score, recommend Approve (85%+), Conditionally Approve (70–84%), or Not Approved (below 70%). Capture sign-off from assessor, security, legal, business owner, and AI governance committee as required.
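The arithmetic in steps 4 and 5 can be sketched as follows. The weights and decision bands are the defaults stated above; the category keys and example scores are illustrative:

```python
# Convert 1-5 category scores into a weighted 0-100% overall score,
# then map it onto the three decision bands.

WEIGHTS = {  # default weights from step 4 (sum to 1.0)
    "data_sovereignty": 0.25,
    "security": 0.20,
    "transparency": 0.15,
    "compliance": 0.15,
    "resilience": 0.10,
    "ethics": 0.10,
    "contracts": 0.05,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of 1-5 category scores, normalised to 0-100%."""
    return sum(
        WEIGHTS[cat] * (score / 5) * 100
        for cat, score in category_scores.items()
    )

def recommendation(score: float) -> str:
    """Decision bands from step 5."""
    if score >= 85:
        return "Approve"
    if score >= 70:
        return "Conditionally Approve"
    return "Not Approved"

# Illustrative category scores for one vendor / use case:
scores = {
    "data_sovereignty": 4.2, "security": 4.0, "transparency": 3.5,
    "compliance": 4.0, "resilience": 3.0, "ethics": 3.5, "contracts": 5.0,
}
pct = overall_score(scores)
print(f"{pct:.1f}% -> {recommendation(pct)}")  # 77.5% -> Conditionally Approve
```

Raising the data-sovereignty or compliance weights for a regulated industry, as step 4 suggests, only requires editing the weight table, provided the weights still sum to 1.0.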


From Assessment to Continuous Vendor Visibility

Third-party AI risk is not a point-in-time problem. New AI features ship inside existing SaaS tools every week. Aona continuously discovers third-party AI in use across your organisation and flags when an unassessed vendor starts handling sensitive data.

Book a demo