A weighted assessment of third-party AI vendors across 7 categories — from data sovereignty and security to explainability, ethics, and contractual protection.
Updated April 2026 · 7 weighted categories · Produces an audit-ready vendor decision
Most AI exposure inside enterprises today is third-party. Every SaaS tool you already use is racing to add AI features — and each one quietly reshapes your risk surface. A standard vendor security questionnaire will not catch whether the vendor trains on your data, whether model outputs can be explained to an auditor, or whether their responsible-AI posture survives scrutiny. This assessment covers what security-only reviews miss.
Seven categories, each weighted to reflect real-world risk. Score each criterion 1–5 with documented evidence, then apply the weights to produce an overall decision.
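The scoring mechanics above can be sketched in a few lines. The category names, weights, and decision thresholds below are illustrative placeholders, not the assessment's actual values; the weights sum to 1 so a vendor scoring 5 on every criterion receives an overall 5.

```python
# Hypothetical category weights -- illustrative only, not the real assessment's.
WEIGHTS = {
    "data_sovereignty": 0.25,
    "security": 0.20,
    "explainability": 0.15,
    "ethics": 0.10,
    "model_governance": 0.10,
    "operational_resilience": 0.10,
    "contractual_protection": 0.10,
}


def weighted_score(scores: dict[str, list[int]]) -> float:
    """Average each category's 1-5 criterion scores, then apply the weights."""
    assert set(scores) == set(WEIGHTS), "score every category"
    total = 0.0
    for category, criteria in scores.items():
        assert criteria and all(1 <= s <= 5 for s in criteria)
        total += WEIGHTS[category] * (sum(criteria) / len(criteria))
    return total


def decision(overall: float) -> str:
    """Map the overall score to a decision; thresholds are hypothetical."""
    if overall >= 4.0:
        return "approve"
    if overall >= 3.0:
        return "approve with conditions"
    return "reject"
```

For example, a vendor scoring 5 on every criterion yields an overall score of 5.0 and an "approve" decision, while uniformly middling scores of 3 land in "approve with conditions".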
The highest-weighted category: how the vendor handles, stores, and transfers your data, and, critically, whether they train on it.
Five steps to produce a scored vendor decision that survives the scrutiny of security, legal, procurement, and — if the day comes — a regulator.
Third-party AI risk is not a point-in-time problem. New AI features ship inside existing SaaS tools every week. Aona continuously discovers third-party AI in use across your organisation and flags when an unassessed vendor starts handling sensitive data.
Book a demo