The State of AI in Insurance
The Australian insurance industry is undergoing a rapid AI-driven transformation. From underwriting automation that assesses risk in seconds rather than days, to claims processing AI that triages and settles straightforward claims without human intervention, AI is reshaping every stage of the insurance value chain.
Major Australian insurers — IAG (NRMA, CGU), Suncorp (AAMI, GIO), QBE, Allianz, and public sector insurer icare — are deploying AI across underwriting risk assessment and pricing, using machine learning models to evaluate risk factors and set premiums; claims triage and processing, with AI categorising claims by complexity and automating straightforward settlements; fraud detection, using pattern recognition across claims data to identify suspicious patterns; customer service automation, with AI chatbots handling policy enquiries, claims lodgement, and first notification of loss; actuarial modelling, where AI augments traditional actuarial methods with larger datasets and more variables; broker and intermediary tools, with AI-powered quoting and risk assessment in distribution channels; and natural catastrophe modelling, using AI to predict and price weather-related risks in Australia's increasingly volatile climate.
The commercial pressure to adopt AI in insurance is intense. Claims costs typically consume 60-70% of premium income, so AI-driven efficiency gains in claims handling flow directly through to combined ratios. Insurers that lag in AI adoption face competitive disadvantage in pricing accuracy, claims efficiency, and customer experience.
However, insurance AI carries exceptional governance risk. Insurance is one of the few industries where AI algorithms directly determine whether individuals receive coverage, what they pay, and whether their claims are honoured. These are life-impacting decisions. An AI underwriting model that discriminates against individuals based on protected attributes — disability, gender, age, race — doesn't just create legal liability; it causes real harm to real people. The EU AI Act explicitly classifies insurance pricing and underwriting AI as high-risk, and Australian regulators including APRA and ASIC are signalling increased scrutiny of algorithmic decision-making in insurance.
Key AI Security Risks in Insurance
Insurance organisations face AI security risks that span data privacy, algorithmic fairness, regulatory compliance, and operational integrity.
Algorithmic Bias in Underwriting and Pricing: The most consequential AI risk in insurance is discriminatory outcomes in underwriting and pricing decisions. Machine learning models trained on historical data can perpetuate and amplify existing biases. Proxy discrimination — where AI uses apparently neutral variables (postcode, occupation, vehicle type) that correlate with protected attributes (race, socioeconomic status, disability) — is particularly insidious because the discrimination is not explicit in the model's features. Under the Disability Discrimination Act 1992 and Sex Discrimination Act 1984, insurers have limited exemptions allowing actuarially justified discrimination, but these exemptions require genuine actuarial or statistical data supporting the differentiation. AI models that discriminate without transparent actuarial justification risk enforcement action from the Australian Human Rights Commission and ASIC.
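One practical screen for proxy discrimination is to measure how strongly each candidate rating factor correlates with a protected attribute. The sketch below is illustrative only: the function names, the Pearson correlation threshold, and the numeric encoding of the attribute are assumptions, not an established compliance test, and a flagged feature warrants actuarial review rather than automatic removal.

```python
def pearson_r(xs, ys):
    """Pearson correlation between a candidate rating factor and a
    numerically encoded protected attribute."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0  # a constant column carries no proxy signal
    return cov / (sx * sy)

def flag_proxy_features(features, protected, threshold=0.5):
    """Screen model features for strong correlation with a protected
    attribute. `threshold` is an illustrative screening level.

    features: dict of feature_name -> list of values
    protected: list of protected-attribute values, in the same row order
    """
    flagged = {}
    for name, values in features.items():
        r = pearson_r(values, protected)
        if abs(r) > threshold:
            flagged[name] = r
    return flagged
```

In practice the screen would be run per protected attribute and per model, with flagged features routed into the actuarial justification process described above.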
Claims Processing AI Errors: AI systems that triage, assess, or settle claims can make errors with significant financial and human impact. An AI that incorrectly denies a legitimate claim, undervalues a settlement, or fails to identify a complex claim requiring specialist assessment causes direct harm to policyholders. Under the Insurance Contracts Act 1984 and ASIC RG 271, insurers must handle claims fairly and maintain effective internal dispute resolution, and AI-driven claims decisions must meet these standards. The icare workers' compensation underpayment scandal highlighted the consequences of automated systems making errors in benefits calculations.
Fraud Detection False Positives and Discrimination: AI fraud detection models that disproportionately flag claims from particular demographic groups, geographic areas, or cultural backgrounds create discrimination risk. Legitimate claimants subjected to enhanced scrutiny based on AI profiling experience delays, intrusive investigations, and reputational harm. Insurers must test fraud detection AI for disparate impact and ensure false positive rates are consistent across demographic groups.
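Consistency of false positive rates can be checked directly from outcome data once claims are resolved. A minimal sketch follows; the record layout, group labels, and the two-percentage-point tolerance are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the fraud-flag false positive rate per demographic group.

    Each record is (group, flagged_by_model, actually_fraudulent).
    A false positive is a legitimate claim the model flagged as fraud.
    """
    fp = defaultdict(int)      # legitimate claims flagged as fraud
    legit = defaultdict(int)   # all legitimate claims
    for group, flagged, fraudulent in records:
        if not fraudulent:
            legit[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / legit[g] for g in legit if legit[g]}

def parity_gap(rates, tolerance=0.02):
    """Return groups whose false positive rate exceeds the best-treated
    group's rate by more than `tolerance` (an illustrative threshold)."""
    baseline = min(rates.values())
    return {g: r for g, r in rates.items() if r - baseline > tolerance}
```

Groups returned by `parity_gap` would trigger model review under the fairness framework described later in this section.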
Shadow AI in Claims and Broker Networks: Claims assessors, loss adjusters, and insurance brokers frequently use AI tools to draft correspondence, analyse claim documentation, and prepare reports. The insurance distribution model — with brokers, underwriting agencies, and claims management providers operating as intermediaries — creates extended Shadow AI exposure across the value chain. Processing client personal information, medical reports, financial records, and claims details through unapproved AI tools breaches Privacy Act obligations and can compromise legal professional privilege in disputed claims.
Customer Data Sensitivity: Insurance applications and claims files contain some of the most sensitive personal information in any industry — medical histories, financial records, criminal records, disability status, mental health information, and domestic violence disclosures. The Privacy Act's Australian Privacy Principles, particularly APP 6 (use and disclosure) and the enhanced protections for sensitive information under APP 3, impose strict obligations on how this data can be processed by AI.
Model Risk and Actuarial Integrity: APRA-regulated insurers must ensure AI models used in capital and reserving calculations, pricing, and risk management meet prudential standards. AI models that lack transparency, explainability, or independent validation create model risk that can affect prudential capital adequacy and regulatory compliance.
APRA Compliance and Regulatory Framework for Insurance AI
Australian insurers operate under a comprehensive prudential and conduct regulatory framework that directly affects AI governance.
APRA CPS 234 (Information Security): CPS 234 requires APRA-regulated entities to maintain information security commensurate with the threats to their information assets. For AI systems, this means classifying AI tools as information assets within the CPS 234 framework, assessing threats to AI systems including data poisoning, model manipulation, and unauthorised access, implementing security controls proportionate to AI-related risks, testing the effectiveness of AI security controls through regular assessment, notifying APRA of material AI-related information security incidents, and ensuring the board is informed of material AI security issues.
APRA CPG 234 (Information Security Guidelines): CPG 234 provides guidance on implementing CPS 234 and addresses technology risk management. For AI, the guidelines indicate that entities should assess AI vendor and third-party risks, implement security controls for AI data flows, maintain AI system security monitoring, include AI in business continuity and disaster recovery planning, and ensure AI-related incident response capabilities.
APRA CPS 230 (Operational Risk Management): CPS 230, effective from July 2025, strengthens operational risk management requirements. AI systems that support critical operations must be included in operational resilience planning. This includes identifying AI dependencies in critical business processes, establishing tolerance levels for AI system disruptions, testing AI resilience through scenario analysis, managing AI vendor concentration risk, and maintaining viable alternatives to AI-dependent processes.
Insurance Contracts Act and AI Decision-Making: The Insurance Contracts Act 1984 governs the insurance contract relationship and imposes specific obligations relevant to AI. Section 13 (duty of utmost good faith) requires insurers to act with utmost good faith in all dealings, and AI-driven decisions must meet this standard. Section 21 (the insured's duty of disclosure) is affected by AI that gathers or infers information beyond what the policyholder has disclosed. Sections 54 and 56 constrain when claims can be refused, and AI claims decisions must comply with these provisions.
ASIC RG 271 and AI in Dispute Resolution: ASIC's Regulatory Guide 271 sets standards for internal dispute resolution. When AI-driven claims decisions are disputed, insurers must provide substantive responses that explain the decision-making basis, identify the information relied upon (including AI model outputs), and demonstrate that the decision was fair and considered all relevant information. This effectively requires explainability for AI claims decisions.
EU AI Act High-Risk Classification: The EU AI Act classifies AI used for insurance pricing, underwriting, and claims assessment as high-risk. Australian insurers with EU exposure (through Lloyd's syndicates, global programs, or EU subsidiaries) must comply with conformity assessments, human oversight requirements, transparency obligations, accuracy and robustness testing, and risk management systems for their insurance AI.
Building an AI Governance Framework for Insurance Organisations
Insurance organisations need governance frameworks that address the unique intersection of prudential regulation, consumer protection, and algorithmic fairness.
Insurance AI Governance Committee: Establish a governance body reflecting the breadth of insurance AI risk. Include the Chief Risk Officer and risk management, Chief Actuary and actuarial team, Chief Information Security Officer, Chief Claims Officer, Head of Underwriting, Compliance and regulatory affairs, and Legal counsel. This committee should have authority to approve AI models for production use, mandate bias testing and fairness assessments, require human oversight for high-impact AI decisions, and escalate material AI risks to the board risk committee.
AI Model Risk Management: Implement a model risk management framework aligned with APRA expectations. Model development standards should require documentation of training data, feature selection rationale, model architecture, and performance metrics. Independent model validation should be performed by qualified reviewers (actuaries, data scientists) not involved in model development. Model monitoring should combine automated drift detection, performance degradation alerts, and regular revalidation schedules. A model inventory should maintain a comprehensive register of all AI models, including purpose, owner, validation status, risk rating, and data dependencies. Model change management should apply version control, testing requirements, and approval workflows to model updates.
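Drift monitoring of the kind described above is commonly implemented with a Population Stability Index over model score distributions. A minimal sketch follows; the bin count, the 1e-4 floor on empty bins, and the 0.2 rule of thumb are conventional but illustrative choices, not prudential requirements.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's training-time score distribution (`expected`)
    and its current production scores (`actual`). PSI above roughly 0.2
    is a common rule of thumb for material drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def histogram(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)
            counts[idx] += 1
        n = len(scores)
        # Floor each proportion to avoid log(0) for empty bins.
        return [max(c / n, 1e-4) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per model on a schedule and raise a revalidation ticket when the index crosses the agreed threshold.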
Algorithmic Fairness Framework: Develop a structured approach to identifying and mitigating AI bias in insurance decisions. Pre-deployment testing should include bias audits across protected attributes (age, gender, disability, ethnicity, and postcode as a proxy) before any model enters production. Ongoing monitoring should involve regular statistical analysis of model outcomes by demographic group. Actuarial justification should be documented wherever differential treatment relates to protected attributes, as the anti-discrimination legislation exemptions require actuarial or statistical data supporting the differentiation. Explainability requirements should ensure all AI decisions affecting policyholders can be explained in terms the policyholder can understand, particularly for adverse decisions. Remediation procedures should define processes for addressing identified bias, including model retraining, feature removal, and policyholder notification.
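The outcome monitoring described above can start with something as simple as favourable-outcome rate ratios by group. The sketch below uses the 0.8 "four-fifths" level, a screening heuristic borrowed from fairness auditing practice rather than any Australian legal test; the function name and data layout are assumptions.

```python
def disparate_impact_ratios(decisions):
    """Ratio of each group's favourable-outcome rate (e.g. cover offered
    at standard terms) to the highest group's rate. Ratios below 0.8
    mirror the 'four-fifths' screening rule used in fairness audits.

    decisions: iterable of (group, favourable: bool) pairs.
    """
    totals, favourable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + (1 if ok else 0)
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}
```

A ratio below the screening level does not establish unlawful discrimination; it marks the model for the actuarial justification and remediation steps above.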
Claims AI Governance: AI in claims processing requires specific governance given the direct impact on policyholders. Implement human review requirements for all AI-driven claim denials and significant settlement reductions. Establish accuracy thresholds — if claims AI accuracy falls below defined levels, escalate to human assessment. Monitor claims AI outcomes for patterns suggesting systemic errors or bias. Ensure AI claims decisions comply with Insurance Contracts Act obligations, including utmost good faith. Maintain audit trails linking AI assessment to supporting evidence for dispute resolution.
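Expressed concretely, the review rules above reduce to a triage function over each AI assessment. In this sketch the field names, the confidence floor, and the settlement limit are illustrative placeholders for an insurer's own thresholds.

```python
def route_claim(ai_assessment):
    """Decide whether an AI claims assessment can proceed automatically
    or must be escalated to a human assessor."""
    # Any AI-recommended denial always gets human review.
    if ai_assessment["recommendation"] == "deny":
        return "human_review"
    # Low model confidence falls below the accuracy tolerance.
    if ai_assessment["confidence"] < 0.9:
        return "human_review"
    # Large settlements exceed the straight-through-processing limit.
    if ai_assessment["settlement_amount"] > 20_000:
        return "human_review"
    return "auto_settle"
```

Each routing decision, with the inputs that drove it, belongs in the audit trail so it can be reproduced in dispute resolution.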
Broker and Distribution Channel AI Governance: Insurance distribution through brokers, underwriting agencies, and aggregators creates extended AI governance requirements. Establish AI acceptable use standards for authorised representatives and distribution partners. Include AI governance requirements in binding authority agreements and distribution contracts. Assess Shadow AI risks across the distribution chain. Require broker and intermediary compliance with insurer AI data handling policies. Monitor AI tool usage across distribution channels where feasible.
Shadow AI Prevention in Insurance
Shadow AI in insurance is widespread across claims, underwriting, and distribution functions, driven by the volume of documentation and correspondence these roles handle.
Common Shadow AI Scenarios in Insurance: Claims assessors pasting medical reports, police reports, and claimant statements into AI for summarisation and assessment. Loss adjusters using AI to analyse building damage photographs and generate repair estimates. Underwriters feeding application data and risk information into AI tools for quoting outside approved systems. Brokers using AI to draft advice documents, Statements of Advice, and client correspondence containing personal and financial information. Actuaries using AI to explore modelling approaches with sensitive claims and pricing datasets. Customer service staff pasting policy details and claims information into AI chatbots to draft responses.
The Broker and Intermediary Challenge: The insurance distribution model creates unique Shadow AI governance challenges. Brokers and authorised representatives operate as separate businesses with their own IT environments, but process insurer and policyholder data. Controlling AI usage across hundreds of broker practices — many of which are small businesses with limited IT governance — requires a combination of contractual obligations, technology controls where data is exchanged electronically, and education programs targeting broker principals and compliance officers.
Technical Controls for Insurance: Deploy DLP rules configured for insurance-specific data patterns — policy numbers, claim references, medical terminology, financial figures. Implement network monitoring for AI service traffic across claims, underwriting, and corporate networks. Use endpoint management to control AI application installation on corporate devices. Monitor API integrations between insurance platforms (policy administration, claims management) and external AI services. Deploy email gateway scanning for AI-processed content containing policyholder information.
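At its core, an insurance-specific DLP rule is pattern matching on outbound content. A minimal sketch follows; the policy and claim number formats, and the TFN- and Medicare-like digit patterns, are illustrative assumptions — real formats vary by insurer, and production rules would add checksum validation to cut false positives.

```python
import re

# Illustrative detection patterns; actual identifier formats vary by insurer.
PATTERNS = {
    "policy_number": re.compile(r"\bPOL-\d{8}\b"),
    "claim_reference": re.compile(r"\bCLM-\d{7}\b"),
    "tfn": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),       # TFN-like
    "medicare": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d\b"),      # Medicare-like
}

def scan_outbound_text(text):
    """Return the pattern names matched in text bound for an external AI
    service, so the gateway can block or redact before it leaves."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A DLP gateway would run this scan on prompts and uploads destined for AI endpoints, blocking or redacting on any match.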
Providing Approved Alternatives: Offer governed AI tools for common insurance workflows. Deploy an approved claims summarisation AI with medical information handling controls and data residency in Australia. Provide an approved underwriting analysis AI integrated with existing risk assessment systems. Supply approved correspondence drafting AI with policyholder data protection and tone-appropriate output. Create approved prompt libraries for common claims, underwriting, and customer service tasks that minimise sensitive data input.
Training and Awareness: Insurance-specific AI training should emphasise the impact of AI decisions on policyholders. Train claims staff on how AI misuse could harm claimants and trigger ASIC enforcement. Educate underwriters on algorithmic bias risks and anti-discrimination obligations. Brief brokers on Privacy Act obligations when using AI with client data. Conduct scenario-based exercises demonstrating how Shadow AI could lead to claims leakage, privacy breaches, or discriminatory outcomes.