AI Governance [AI Governance]
The framework of policies, processes, and controls that guide the responsible development, deployment, and use of AI systems within an organization.

AI Acceptable Use Policy [Policy]
A formal organizational policy defining the rules, guidelines, and boundaries for employee use of AI tools and services.

AI Risk Management [AI Governance]
The systematic process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems.

AI Ethics [AI Governance]
The principles and values guiding the responsible development, deployment, and use of AI systems to ensure fairness, transparency, accountability, and human welfare.

AI Bias [AI Governance]
Systematic errors in AI systems that produce unfair outcomes, typically reflecting historical prejudices in training data or flawed algorithmic design.

AI Transparency [AI Governance]
The principle that AI systems should be understandable, with their decision-making processes, data usage, and limitations clearly communicated to stakeholders.

AI Security [Security]
The practice of protecting AI systems, models, and data from threats, vulnerabilities, and attacks throughout the AI lifecycle.

AI Red Teaming [Security]
The practice of adversarial testing in which security experts attempt to find vulnerabilities, biases, and failure modes in AI systems.

AI Hallucination [AI Concepts]
Output generated by an AI model that is factually incorrect, fabricated, or nonsensical, presented with the same confidence as accurate information.

AI Compliance [Compliance]
The practice of ensuring an organization's AI systems and usage adhere to applicable laws, regulations, industry standards, and internal policies.

AI Observability [AI Governance]
The ability to monitor, measure, and understand the behavior, performance, and usage of AI systems across an organization in real time.

AI Supply Chain Security [Security]
The practice of identifying, assessing, and mitigating security risks across the entire chain of components, services, and vendors that make up an organization's AI ecosystem.

AI Access Control [Security]
The policies and mechanisms that determine which users, roles, and systems can access, use, or manage AI tools and the data they process.

AI Audit Trail [Compliance]
A chronological record of all AI-related activities, decisions, and data flows within an organization, maintained for compliance, security, and accountability purposes.
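As a minimal sketch of how an audit trail can be kept tamper-evident, the snippet below chains each entry to the previous one with a hash. The class, field names such as "actor" and "tool", and the example events are illustrative assumptions, not a standard schema.

```python
import json
import hashlib
import datetime

class AuditTrail:
    """Append-only log of AI-related events; entries are hash-chained
    so that any later modification is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # sentinel hash for the first entry

    def record(self, actor, tool, action, detail):
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "tool": tool,
            "action": action,
            "detail": detail,
            "prev_hash": self._prev_hash,  # link to the previous entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("alice", "chat-assistant", "prompt_submitted", "summarize Q3 report")
trail.record("bob", "code-copilot", "suggestion_accepted", "auth module")
print(trail.verify())  # True while the chain is intact
```

Editing any recorded field after the fact breaks the chain and makes `verify()` return False, which is the property an audit trail needs for accountability.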
AI Privacy [Security]
The protection of personal and sensitive data throughout AI system lifecycles, from training data collection to inference and output generation.

AI Discovery [AI Governance]
The process of identifying and cataloging all AI tools, services, and models in use across an organization, including unauthorized or unknown usage.

AI Incident Response [Security]
The structured process for detecting, investigating, containing, and recovering from security incidents involving AI systems or AI-related data breaches.

AI Copilot [AI Concepts]
An AI-powered assistant embedded in software applications that helps users complete tasks by providing suggestions, automating workflows, and generating content.

AI Model Card [AI Governance]
A standardized document that provides essential information about an AI model, including its intended use, performance characteristics, limitations, and ethical considerations.

AI Data Residency [Compliance]
The requirement that data processed by AI systems remains within specific geographic or jurisdictional boundaries to comply with data sovereignty laws.

AI Token [AI Concepts]
The basic unit of text that AI language models process, typically representing a word, subword, or character; tokens are the unit used to measure input/output length and pricing.
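A toy example of how text becomes tokens and how token counts drive cost. Real models use learned subword vocabularies (such as BPE); the whitespace-plus-4-character split and the per-1K price below are assumptions made purely for illustration.

```python
def toy_tokenize(text):
    """Illustrative tokenizer: split on whitespace, then break long
    words into 4-character 'subwords'. NOT a real model vocabulary."""
    tokens = []
    for word in text.split():
        tokens.extend(word[i:i + 4] for i in range(0, len(word), 4))
    return tokens

def estimate_cost(text, price_per_1k_tokens=0.002):
    # Providers typically bill per 1,000 tokens; the rate here is invented.
    return len(toy_tokenize(text)) * price_per_1k_tokens / 1000

prompt = "Summarize the quarterly compliance report"
print(toy_tokenize(prompt))   # e.g. ['Summ', 'ariz', 'e', 'the', ...]
print(estimate_cost(prompt))
```

The point of the sketch is the pipeline, not the split: input text maps to a token sequence, and the sequence length, not the character count, is what quotas and pricing are measured against.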
AI Threat Modeling [Security]
A structured process for identifying, categorizing, and prioritizing potential security threats specific to AI systems and their deployment environments.

AI Penetration Testing [Security]
The practice of simulating real-world attacks against AI systems to identify exploitable vulnerabilities in models, APIs, and data pipelines.

AI Bill of Materials (AI BOM) [AI Governance]
A comprehensive inventory of all components, data sources, models, libraries, and dependencies that make up an AI system, enabling transparency and supply chain security.

AI Drift Detection [AI Governance]
The process of monitoring AI models for changes in data patterns, model performance, or output quality over time that may degrade accuracy or introduce new risks.
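A minimal sketch of input-drift monitoring: compare a live window of a numeric feature against its training-time baseline. The z-score threshold of 3 is an illustrative assumption; production systems typically use tests such as the Population Stability Index or Kolmogorov-Smirnov.

```python
import statistics

def detect_drift(baseline, live, threshold=3.0):
    """Flag drift when the live mean deviates from the baseline mean
    by more than `threshold` standard errors."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    live_mu = statistics.mean(live)
    z = abs(live_mu - mu) / (sigma / len(live) ** 0.5)
    return z > threshold

# Hypothetical feature values (e.g. a model input score)
baseline = [0.48, 0.50, 0.52, 0.49, 0.51, 0.50, 0.47, 0.53]
stable  = [0.50, 0.49, 0.51, 0.50]
shifted = [0.80, 0.82, 0.79, 0.81]

print(detect_drift(baseline, stable))   # False: live data matches training
print(detect_drift(baseline, shifted))  # True: mean has shifted, flag for review
```

The same pattern generalizes to model outputs and quality metrics: establish a baseline distribution, compare the live window against it on a schedule, and alert when the gap exceeds a tolerance.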
AI Sandboxing [Security]
The practice of isolating AI tools and experiments in controlled environments to test their behavior, security, and compliance before broader organizational deployment.

AI Ethics Board [AI Governance]
A cross-functional governance body responsible for overseeing the ethical development, deployment, and use of AI systems within an organization.

Adversarial Machine Learning [Security]
A field of study focused on understanding and defending against attacks that manipulate AI systems through malicious inputs, poisoned data, or model exploitation.

AI Model Theft [Security]
The unauthorized extraction, replication, or theft of proprietary AI models through API queries, insider access, or reverse engineering techniques.

Data Leakage (AI) [Security]
The unintentional exposure of sensitive, confidential, or regulated data through interactions with AI tools and services.

Data Classification [Security]
The process of categorizing data based on its sensitivity level to determine appropriate handling, protection, and AI usage rules.

Data Loss Prevention (DLP) [Security]
Security technology and processes that detect and prevent the unauthorized transfer of sensitive data, including through AI tools and services.
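A minimal sketch of a DLP check run before a prompt leaves the organization. The two patterns (a US-SSN-like format and a generic 16-digit card number) are illustrative assumptions, nowhere near a complete rule set.

```python
import re

# Illustrative sensitive-data patterns; real DLP uses far richer rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_prompt(text):
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def redact(text):
    """Replace every match with a placeholder before the text is sent on."""
    for pat in PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text

prompt = "Customer 123-45-6789 paid with 4111 1111 1111 1111"
print(scan_prompt(prompt))  # ['ssn', 'card']
print(redact(prompt))
```

In practice the scan-then-redact (or scan-then-block) step sits in a proxy or browser extension between the user and the AI service, so sensitive values never reach the external tool.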
Differential Privacy [Security]
A mathematical framework that provides measurable privacy guarantees by adding controlled noise to data or computations, preventing identification of individuals in datasets.
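A sketch of the Laplace mechanism, the classic way to make a count query differentially private. Sensitivity 1 (one person changes a count by at most 1) and epsilon = 0.5 are illustrative choices; a smaller epsilon means stronger privacy and more noise.

```python
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Return the count plus Laplace noise scaled to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # The difference of two independent exponential samples with rate
    # 1/scale is Laplace-distributed with that scale.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

random.seed(7)
true_count = 1042  # e.g. employees who used an AI tool this month
for _ in range(3):
    print(round(dp_count(true_count, epsilon=0.5)))
```

Each released value is close to the true count on average, but no single release reveals whether any one individual is in the data, which is the measurable guarantee the definition refers to.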
Data Poisoning [Security]
An attack on AI systems where adversaries deliberately corrupt training data to manipulate model behavior, introduce backdoors, or degrade performance.

EU AI Act [Compliance]
The European Union's comprehensive regulatory framework for artificial intelligence, establishing risk-based rules for AI development and deployment.

Explainable AI (XAI) [AI Concepts]
AI systems and techniques designed to make artificial intelligence decisions understandable and interpretable by humans, enabling trust and accountability.

Model Governance [AI Governance]
The set of policies, processes, and controls for managing the lifecycle of AI and machine learning models from development through deployment and retirement.

Model Watermarking [Security]
Techniques for embedding hidden, identifiable markers into AI models or their outputs to prove ownership, detect unauthorized use, or trace content provenance.

Responsible AI [AI Governance]
An approach to AI development and deployment that prioritizes ethical principles, societal benefit, and risk mitigation throughout the AI lifecycle.

Retrieval-Augmented Generation (RAG) [AI Concepts]
An AI architecture that enhances language model outputs by retrieving and incorporating relevant information from external knowledge sources before generating responses.
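The RAG flow in the definition above can be sketched in two steps: retrieve the most relevant documents, then build an augmented prompt for the language model. The word-overlap scoring and the policy snippets below are illustrative stand-ins for embedding search and a real knowledge base.

```python
# Hypothetical internal knowledge base
DOCS = [
    "The acceptable use policy forbids pasting customer data into public AI tools.",
    "Expense reports are due on the fifth business day of each month.",
    "All AI vendors must pass a security review before procurement.",
]

def retrieve(query, docs, k=2):
    """Rank documents by shared words with the query (a stand-in for
    embedding similarity) and return the top k."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query, docs):
    """Step 2: prepend the retrieved context to the user's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does the AI use policy say about customer data?", DOCS))
```

The augmented prompt, not the bare question, is what gets sent to the model, which is why RAG grounds answers in current organizational sources and reduces hallucination.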
Read moreBeyond Definitions — Take Action
Understanding AI governance terms is the first step. Aona AI helps you implement comprehensive AI governance with automated Shadow AI discovery, real-time policy enforcement, and continuous compliance monitoring.
