AI Governance · Healthcare Australia · 2026

AI Governance for Australian Healthcare

A practical guide for healthcare CIOs and privacy officers navigating the My Health Records Act, Privacy Act, TGA medical device regulations, and the risks of clinical and administrative AI in hospitals, clinics, and aged care facilities.

My Health Records Act · health data obligations
TGA SaMD · clinical AI regulation
Privacy Act APPs · sensitive health info
NDB Scheme · 30-day breach assessment
Real-World Risk Scenario

The Nurse & ChatGPT: A Privacy Breach Waiting to Happen

A registered nurse on a busy hospital ward needs to complete a discharge summary before end of shift. Short on time, she opens ChatGPT and types:

Prompt: "Write a discharge summary for John Smith, 67, admitted 14 March with acute MI. Medications: aspirin 100mg, atorvastatin 40mg, metoprolol 25mg BD. Treating cardiologist Dr. A. Nguyen. Patient to follow up in 6 weeks. Home to wife, independent ADLs."

What just happened: The patient's full name, age, diagnosis (acute myocardial infarction), medications, treating physician, and personal circumstances were transmitted to OpenAI's servers — offshore, without patient consent, likely breaching APPs 6 and 8 of the Privacy Act.

Privacy Act Breach
Sensitive health info disclosed to third party without consent — likely APP 6 & APP 8 breach
NDB Obligation
Hospital must assess whether NDB notification to OAIC and patient is required within 30 days
No Policy = No Defence
Without an AI governance policy, the organisation has no documented standard of care to rely on

Clinical AI Risks

AI in clinical settings introduces patient safety, regulatory, and liability risks that administrative AI governance frameworks alone cannot address.

Diagnostic AI & Clinical Decision Support

AI tools that assist in radiology, pathology, and clinical decision-making are classified as Software as a Medical Device (SaMD) under the TGA. Deploying or integrating these without appropriate TGA registration, clinical validation, and post-market surveillance exposes healthcare organisations to significant regulatory and patient safety risk.

TGA non-compliance + patient harm liability if AI diagnostic errors are not governed.

AI Treatment Recommendations

Generative AI used to suggest treatment protocols, medication dosages, or care pathways without appropriate clinical oversight creates accountability gaps. When a clinician acts on an AI recommendation that results in patient harm, questions of responsibility — and insurance coverage — become complex without clear governance documentation.

Unvalidated AI treatment recommendations create uninsurable clinical liability gaps.

Autonomous AI in Care Coordination

AI agents increasingly handle appointment scheduling, patient triage, discharge planning, and referral coordination. When these systems make autonomous decisions — particularly in aged care and mental health settings — the Privacy Act's incoming automated decision-making (ADM) requirements and duty-of-care obligations require documented human oversight processes.

Agentic AI in care coordination triggers Privacy Act ADM obligations and duty-of-care exposure.

Administrative AI Risks

The greatest volume of AI-related health privacy breaches comes not from clinical AI but from staff using general-purpose AI tools in their daily workflows.

The nurse + ChatGPT scenario: High — mandatory NDB notification likely required

ChatGPT for Patient Notes & Discharge Summaries

A nurse in a busy ward uses ChatGPT to draft a discharge summary for a patient. To get a useful output, she pastes in the patient's name, date of birth, diagnosis, medications, and treating physician details. ChatGPT processes the prompt — sending identifiable health information to OpenAI's servers offshore. Under the My Health Records Act and the Privacy Act, this constitutes a disclosure of sensitive health information to a third party without patient consent. The hospital may face a mandatory Notifiable Data Breach report to the OAIC.

Ambient AI recording consultations: High — offshore data transfer without consent breaches APP 8

AI Medical Scribes & Clinical Documentation

Ambient AI scribes that listen to patient-clinician consultations and generate clinical notes are increasingly used to reduce administrative burden. However, without appropriate patient consent processes, data storage agreements, and My Health Records Act compliance, these tools may breach health privacy obligations. Many commercially available scribes route audio data through servers in the United States or United Kingdom.

Clinical research via ChatGPT/Claude: Medium — data minimisation obligations under APP 3 may be breached

Medical Staff Using Consumer AI for Research

Medical staff searching for treatment evidence, drug interactions, or clinical guidelines using general-purpose AI tools may inadvertently include patient-specific context in their queries — e.g., 'my patient has condition X and is on medication Y, what dose adjustment is appropriate?' This patient-identifiable query is processed by an external AI service not covered by any BAA or data processing agreement.

Patient billing and insurance claims: Medium — Medicare numbers are sensitive data under expanded Privacy Act

Administration Staff & Billing AI Tools

Administrative staff at hospitals and clinics are adopting AI tools to process insurance claims, manage Medicare billing, and draft correspondence. These tools often require patient health and financial data as input. Without data classification controls, staff may use unapproved AI tools that expose Medicare numbers, private health insurance details, and sensitive diagnosis codes.

Australian Healthcare AI Regulatory Framework

Healthcare AI in Australia sits at the intersection of multiple regulatory regimes. Understanding each obligation is the first step to effective governance.

My Health Records Act 2012 · All Healthcare Providers

Health Data Sovereignty

The My Health Records Act governs access to, and use of, health information in the My Health Record system. Healthcare providers accessing or using data from My Health Records must ensure AI tools do not process this data in ways that breach the Act's strict use limitation and disclosure provisions. Sharing My Health Record data with an AI tool — even internally — without appropriate authorisation may constitute an offence.

Privacy Act 1988 — Health Records · All Organisations

Sensitive Information & APP Compliance

Health information is 'sensitive information' under the Privacy Act, attracting the highest level of protection under the Australian Privacy Principles (APPs). Healthcare AI must comply with APP 3 (collection only as necessary), APP 6 (use only for primary purpose), APP 8 (offshore disclosure restrictions), and APP 11 (security safeguards). Amendments expected in late 2026 add automated decision-making transparency requirements directly relevant to clinical AI.

TGA — Software as a Medical Device · Clinical AI Tools

AI Medical Device Regulation

The Therapeutic Goods Administration (TGA) regulates AI/ML software used for clinical purposes as Software as a Medical Device (SaMD). AI tools that analyse medical images, support diagnosis, recommend treatment, or predict clinical outcomes may require TGA registration, pre-market conformity assessment, and ongoing post-market surveillance. The TGA's AI/ML SaMD guidance aligns with international frameworks including the FDA's AI/ML action plan.

Aged Care Act 1997 · Aged Care Providers

Aged Care AI Obligations

Aged care providers using AI in resident care — including falls prediction, care planning AI, and medication management tools — must ensure these comply with the Aged Care Quality Standards. The Royal Commission into Aged Care highlighted risks of dehumanising care; regulators are increasingly scrutinising AI use in aged care settings. Human oversight and resident consent processes are critical governance requirements.

Learn more about AI governance and compliance frameworks

Healthcare AI Governance Framework

A practical five-step framework for hospitals, clinics, and aged care providers building AI governance programs that satisfy the OAIC, TGA, and accreditation bodies.

01

AI Tool Inventory & Risk Classification

Map every AI tool in use across your health service — clinical, administrative, and research. Classify each by risk level: clinical AI (TGA obligations), tools processing health records (Privacy Act/My Health Records Act), and general productivity AI used by staff. Shadow AI discovery is essential — most hospitals significantly underestimate AI tool adoption.

Aona discovers all AI tools in use within 5 minutes of deployment
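The classification step above can be sketched in code. This is a minimal, hypothetical model — the tool attributes and tier names are illustrative assumptions, not a prescribed taxonomy — showing how an inventory entry might be mapped to the three risk tiers described:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    CLINICAL = "clinical"              # TGA SaMD obligations apply
    HEALTH_RECORDS = "health_records"  # Privacy Act / My Health Records Act
    GENERAL = "general"                # staff productivity AI

@dataclass
class AITool:
    # Illustrative attributes only; a real inventory would capture more
    # (data residency, vendor agreements, approval status, etc.)
    name: str
    vendor: str
    processes_patient_data: bool
    used_for_clinical_decisions: bool

def classify(tool: AITool) -> RiskTier:
    """Assign the highest applicable risk tier to an inventoried AI tool."""
    if tool.used_for_clinical_decisions:
        return RiskTier.CLINICAL
    if tool.processes_patient_data:
        return RiskTier.HEALTH_RECORDS
    return RiskTier.GENERAL
```

Ordering matters: a clinical decision-support tool that also touches health records is classified at the stricter clinical tier, so its TGA obligations are not missed.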
02

Patient Data Protection Policies

Implement controls that prevent patient-identifiable health information from entering unapproved AI tools. This includes technical controls (DLP policies that detect health identifiers, Medicare numbers, diagnosis codes), staff policies, and approved-tool lists. Prevention is far less costly than NDB notification and OAIC investigation.

Block patient data from reaching unapproved AI — automatically
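One concrete DLP control from the list above is detecting Medicare numbers in outbound AI prompts. A minimal sketch, using the publicly documented Medicare card number format (ten digits, first digit 2–6, with the ninth digit a checksum over the first eight using weights 1, 3, 7, 9, 1, 3, 7, 9); function names here are illustrative, and a production DLP policy would cover many more identifier types:

```python
import re

# Candidate pattern: 4 + 5 + 1 digits, optionally separated by space/hyphen,
# first digit restricted to 2-6 per the Medicare card format.
MEDICARE_RE = re.compile(r"\b([2-6]\d{3})[ -]?(\d{5})[ -]?(\d)\b")

def is_valid_medicare(number: str) -> bool:
    """Checksum-validate a candidate Medicare number (reduces false positives)."""
    digits = [int(c) for c in re.sub(r"\D", "", number)]
    if len(digits) != 10 or not 2 <= digits[0] <= 6:
        return False
    weights = [1, 3, 7, 9, 1, 3, 7, 9]
    checksum = sum(d * w for d, w in zip(digits[:8], weights)) % 10
    return checksum == digits[8]

def contains_medicare_number(text: str) -> bool:
    """Flag text (e.g. an outbound AI prompt) that contains a valid Medicare number."""
    return any(is_valid_medicare("".join(m.groups()))
               for m in MEDICARE_RE.finditer(text))
```

Pattern matching alone over-flags ordinary ten-digit strings; the checksum step is what makes the control usable without drowning staff in false positives.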
03

Clinical AI Governance Committee

Establish a governance committee including CMO/CNO, privacy officer, IT security, legal counsel, and clinical leads. This committee approves clinical AI tools, reviews TGA compliance, oversees incident response, and provides the board-level accountability that regulators increasingly expect from health services.

Documented governance satisfies OAIC, TGA, and accreditation body requirements
04

Staff Training & Consent Frameworks

Train all clinical and administrative staff on what health information can and cannot be shared with AI tools. Establish patient consent processes for any AI use that affects care decisions. Document consent in clinical records. Training logs and policy attestations are critical evidence in any regulatory investigation.

Training records + policy attestations reduce regulatory risk significantly
05

Incident Response & NDB Procedures

Document a clear AI-specific incident response procedure: how to identify an AI-related health data breach, when NDB notification to the OAIC and affected patients is required, and how to preserve evidence. Under the Privacy Act, a suspected eligible data breach must be assessed within 30 days of becoming aware of it, and the OAIC and affected patients notified as soon as practicable once an eligible breach is confirmed.

The 30-day NDB assessment clock starts when you become aware — have a plan ready
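The assessment clock above is simple enough to encode directly in an incident-tracking tool. An illustrative sketch (not legal advice; function names are assumptions) computing the deadline and days remaining from the date of awareness:

```python
from datetime import date, timedelta

# The NDB scheme requires a suspected eligible data breach to be assessed
# within 30 calendar days of the organisation becoming aware of it.
ASSESSMENT_WINDOW = timedelta(days=30)

def ndb_assessment_deadline(aware_on: date) -> date:
    """Last day to complete the NDB assessment."""
    return aware_on + ASSESSMENT_WINDOW

def days_remaining(aware_on: date, today: date) -> int:
    """Days left on the assessment clock (negative if overdue)."""
    return (ndb_assessment_deadline(aware_on) - today).days
```

Surfacing `days_remaining` on an incident dashboard keeps the privacy officer ahead of the deadline rather than discovering it during an OAIC inquiry.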

How Aona Protects Australian Healthcare Organisations

Purpose-built AI governance that addresses the specific privacy, clinical safety, and regulatory challenges of the Australian healthcare sector.

🔍

Shadow AI Discovery in Healthcare Settings

Detect every AI tool in use across your hospital, clinic, or aged care facility — including ChatGPT, AI scribes, and consumer AI used by clinical and admin staff without IT approval. Know your full AI exposure within minutes.

🛡️

Patient Data Protection Controls

Enforce policies that automatically detect and block patient-identifiable health information — names, Medicare numbers, diagnoses, medications — from being submitted to unapproved AI tools. Prevent breaches before they happen.

📋

Audit Trails for OAIC & Accreditation

Maintain a complete, tamper-evident audit trail of all AI interactions involving health data. Provide evidence to the OAIC in an NDB investigation, satisfy accreditation body requirements, and demonstrate due diligence to your board.

⚙️

Policy Enforcement Across Clinical Workflows

Deploy AI governance policies across every department — ED, ICU, outpatient clinics, administration — without disrupting clinical workflows. Approved tools remain accessible; unapproved tools are blocked with clear staff guidance.


Protect Patient Privacy Before the Next AI Breach

Aona gives Australian healthcare organisations full AI visibility, patient data protection controls, and audit-ready compliance documentation — deployed in under 5 minutes, no IT project required.

Questions? Read our AI Governance guide · AI Compliance framework · AI Security controls