A practical guide for healthcare CIOs and privacy officers navigating the My Health Records Act, Privacy Act, TGA medical device regulations, and the risks of clinical and administrative AI in hospitals, clinics, and aged care facilities.
A registered nurse on a busy hospital ward needs to complete a discharge summary before end of shift. Short on time, she opens ChatGPT and pastes the patient's details into a prompt asking for a draft.
What just happened: The patient's full name, age, diagnosis (acute myocardial infarction), medications, treating physician, and personal circumstances were transmitted to OpenAI's servers — offshore, without patient consent, in breach of APP 6 and APP 8 of the Privacy Act.
AI in clinical settings introduces patient safety, regulatory, and liability risks that administrative AI governance frameworks alone cannot address.
AI tools that assist in radiology, pathology, and clinical decision-making are regulated by the TGA as Software as a Medical Device (SaMD). Deploying or integrating these without appropriate TGA registration (inclusion in the Australian Register of Therapeutic Goods), clinical validation, and post-market surveillance exposes healthcare organisations to significant regulatory and patient safety risk.
TGA non-compliance + patient harm liability if AI diagnostic errors are not governed.
Generative AI used to suggest treatment protocols, medication dosages, or care pathways without appropriate clinical oversight creates accountability gaps. When a clinician acts on an AI recommendation that results in patient harm, questions of responsibility — and insurance coverage — become complex without clear governance documentation.
Unvalidated AI treatment recommendations create uninsurable clinical liability gaps.
AI agents increasingly handle appointment scheduling, patient triage, discharge planning, and referral coordination. When these systems make autonomous decisions — particularly in aged care and mental health settings — the Privacy Act's incoming automated decision-making (ADM) requirements and duty-of-care obligations require documented human oversight processes.
Agentic AI in care coordination triggers Privacy Act ADM obligations and duty-of-care exposure.
The greatest volume of AI-related health privacy breaches comes not from clinical AI but from staff using general-purpose AI tools in their daily workflows.
A nurse in a busy ward uses ChatGPT to draft a discharge summary for a patient. To get a useful output, she pastes in the patient's name, date of birth, diagnosis, medications, and treating physician details. ChatGPT processes the prompt — sending identifiable health information to OpenAI's servers offshore. Under the My Health Records Act and the Privacy Act, this constitutes a disclosure of sensitive health information to a third party without patient consent. The hospital may face mandatory notification to the OAIC under the Notifiable Data Breaches (NDB) scheme.
Ambient AI scribes that listen to patient-clinician consultations and generate clinical notes are increasingly used to reduce administrative burden. However, without appropriate patient consent processes, data storage agreements, and My Health Records Act compliance, these tools may breach health privacy obligations. Many commercially available scribes route audio data through servers in the United States or United Kingdom.
Medical staff searching for treatment evidence, drug interactions, or clinical guidelines using general-purpose AI tools may inadvertently include patient-specific context in their queries — e.g., 'my patient has condition X and is on medication Y, what dose adjustment is appropriate?' This patient-identifiable query is processed by an external AI service operating under no data processing or confidentiality agreement with the health service.
Administrative staff at hospitals and clinics are adopting AI tools to process insurance claims, manage Medicare billing, and draft correspondence. These tools often require patient health and financial data as input. Without data classification controls, staff may use unapproved AI tools that expose Medicare numbers, private health insurance details, and sensitive diagnosis codes.
Healthcare AI in Australia sits at the intersection of multiple regulatory regimes. Understanding each obligation is the first step to effective governance.
The My Health Records Act governs access to, and use of, health information in the My Health Record system. Healthcare providers accessing or using data from My Health Records must ensure AI tools do not process this data in ways that breach the Act's strict use limitation and disclosure provisions. Sharing My Health Record data with an AI tool — even internally — without appropriate authorisation may constitute an offence.
Health information is 'sensitive information' under the Privacy Act, attracting the highest level of protection under the Australian Privacy Principles (APPs). Healthcare AI must comply with APP 3 (collection only as necessary), APP 6 (use only for the primary purpose of collection), APP 8 (offshore disclosure restrictions), and APP 11 (security safeguards). Amendments commencing in December 2026 add automated decision-making transparency requirements directly relevant to clinical AI.
The Therapeutic Goods Administration (TGA) regulates AI/ML software used for clinical purposes as Software as a Medical Device (SaMD). AI tools that analyse medical images, support diagnosis, recommend treatment, or predict clinical outcomes may require TGA registration, pre-market conformity assessment, and ongoing post-market surveillance. The TGA's AI/ML SaMD guidance aligns with international frameworks including the FDA's AI/ML action plan.
Aged care providers using AI in resident care — including falls prediction, care planning AI, and medication management tools — must ensure these comply with the Aged Care Quality Standards. The Royal Commission into Aged Care highlighted risks of dehumanising care; regulators are increasingly scrutinising AI use in aged care settings. Human oversight and resident consent processes are critical governance requirements.
Learn more about AI governance and compliance frameworks
A practical five-step framework for hospitals, clinics, and aged care providers building AI governance programs that satisfy the OAIC, TGA, and accreditation bodies.
Map every AI tool in use across your health service — clinical, administrative, and research. Classify each by risk level: clinical AI (TGA obligations), tools processing health records (Privacy Act/My Health Records Act), and general productivity AI used by staff. Shadow AI discovery is essential — most hospitals significantly underestimate AI tool adoption.
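To make the classification step concrete, here is a minimal sketch of an AI tool register in Python. The tool names, vendors, and flag values are hypothetical illustrations, not real assessments or a prescribed taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    CLINICAL = "clinical"              # SaMD candidate: TGA obligations
    HEALTH_RECORDS = "health_records"  # touches health records: Privacy Act / My Health Records Act
    GENERAL = "general"                # general productivity AI used by staff

@dataclass
class AITool:
    name: str
    vendor: str
    tier: RiskTier
    approved: bool   # on the approved-tool list?
    offshore: bool   # vendor processes data outside Australia (APP 8 trigger)

# Hypothetical entries for illustration only.
register = [
    AITool("RadiologyAssist", "ExampleVendor", RiskTier.CLINICAL, approved=True, offshore=False),
    AITool("AmbientScribe", "ExampleVendor", RiskTier.HEALTH_RECORDS, approved=False, offshore=True),
    AITool("ChatGPT", "OpenAI", RiskTier.GENERAL, approved=False, offshore=True),
]

# Surface the highest-priority gaps: unapproved tools sending health data offshore.
for t in register:
    if not t.approved and t.offshore and t.tier is not RiskTier.GENERAL:
        print(f"Review urgently: {t.name} ({t.tier.value}): unapproved, offshore processing")
```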
Aona discovers all AI tools in use within 5 minutes of deployment
Implement controls that prevent patient-identifiable health information from entering unapproved AI tools. This includes technical controls (DLP policies that detect health identifiers, Medicare numbers, diagnosis codes), staff policies, and approved-tool lists. Prevention is far less costly than NDB notification and OAIC investigation.
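As one illustration of such a technical control, the sketch below uses a regular expression plus the commonly documented Medicare card checksum (weights 1, 3, 7, 9, 1, 3, 7, 9 over the first eight digits, modulo 10, matched against the ninth digit) to catch Medicare numbers in a prompt before it leaves the network. A production DLP layer would cover many more identifiers; this shows only the pattern.

```python
import re

# Candidate Medicare card numbers: 10 digits, first digit 2-6, optional spaces.
MEDICARE_RE = re.compile(r"\b([2-6]\d{3})[ ]?(\d{5})[ ]?(\d)\b")

def is_valid_medicare(number: str) -> bool:
    """Apply the commonly documented Medicare checksum: the weighted sum of
    the first 8 digits (weights 1,3,7,9,1,3,7,9) mod 10 must equal the 9th
    digit. The 10th digit is the card issue number."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    if len(digits) != 10 or not 2 <= digits[0] <= 6:
        return False
    weights = [1, 3, 7, 9, 1, 3, 7, 9]
    return sum(d * w for d, w in zip(digits[:8], weights)) % 10 == digits[8]

def scan_prompt(text: str) -> list[str]:
    """Return Medicare numbers found in text destined for an external AI tool."""
    hits = []
    for m in MEDICARE_RE.finditer(text):
        candidate = "".join(m.groups())
        if is_valid_medicare(candidate):
            hits.append(candidate)
    return hits

# Illustrative prompt with a synthetic (checksum-valid) Medicare number.
prompt = "Pt Medicare no 2123 45670 1, dx AMI, please draft discharge summary"
if scan_prompt(prompt):
    print("Blocked: prompt contains a Medicare number")
```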
Block patient data from reaching unapproved AI — automatically
Establish a governance committee including CMO/CNO, privacy officer, IT security, legal counsel, and clinical leads. This committee approves clinical AI tools, reviews TGA compliance, oversees incident response, and provides the board-level accountability that regulators increasingly expect from health services.
Documented governance satisfies OAIC, TGA, and accreditation body requirements
Train all clinical and administrative staff on what health information can and cannot be shared with AI tools. Establish patient consent processes for any AI use that affects care decisions. Document consent in clinical records. Training logs and policy attestations are critical evidence in any regulatory investigation.
Training records + policy attestations reduce regulatory risk significantly
Document a clear AI-specific incident response procedure: how to identify an AI-related health data breach, when notification to the OAIC and affected patients is required, and how to preserve evidence. Under the Privacy Act's NDB scheme, you have 30 days to assess a suspected breach, and must notify as soon as practicable once you conclude an eligible data breach has occurred.
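As a sketch of how a procedure might encode the statutory tests (the three-limb 'eligible data breach' test and the 30-day assessment window under Part IIIC of the Privacy Act), here is illustrative Python; the field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

ASSESSMENT_WINDOW = timedelta(days=30)  # Part IIIC: assess a suspected breach within 30 days

@dataclass
class SuspectedBreach:
    became_aware: date
    unauthorised_access_or_disclosure: bool  # limb 1
    serious_harm_likely: bool                # limb 2
    remediation_prevented_harm: bool         # limb 3: if True, not an eligible breach

def is_eligible(b: SuspectedBreach) -> bool:
    """Three-limb test for an 'eligible data breach' under the NDB scheme."""
    return (b.unauthorised_access_or_disclosure
            and b.serious_harm_likely
            and not b.remediation_prevented_harm)

def assessment_deadline(b: SuspectedBreach) -> date:
    return b.became_aware + ASSESSMENT_WINDOW

incident = SuspectedBreach(date(2025, 3, 1), True, True, False)
print("Assess by:", assessment_deadline(incident))
if is_eligible(incident):
    print("Eligible breach: notify OAIC and affected patients as soon as practicable")
```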
30-day NDB assessment clock starts when you become aware — have a plan ready
Purpose-built AI governance that addresses the specific privacy, clinical safety, and regulatory challenges of the Australian healthcare sector.
Detect every AI tool in use across your hospital, clinic, or aged care facility — including ChatGPT, AI scribes, and consumer AI used by clinical and admin staff without IT approval. Know your full AI exposure within minutes.
Enforce policies that automatically detect and block patient-identifiable health information — names, Medicare numbers, diagnoses, medications — from being submitted to unapproved AI tools. Prevent breaches before they happen.
Maintain a complete, tamper-evident audit trail of all AI interactions involving health data. Provide evidence to the OAIC in an NDB investigation, satisfy accreditation body requirements, and demonstrate due diligence to your board.
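'Tamper-evident' typically means each log entry commits to the entry before it, for example via a hash chain, so any retrospective edit is detectable. The generic Python sketch below illustrates the idea; it is not Aona's implementation, and the event fields are hypothetical.

```python
import hashlib, json, time

def append_entry(log: list[dict], event: dict) -> None:
    """Append an audit event whose hash covers the previous entry's hash,
    so altering any historical entry breaks every hash after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any tampering surfaces as a mismatch."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_entry(log, {"user": "jsmith", "tool": "ChatGPT", "action": "blocked"})
append_entry(log, {"user": "adoe", "tool": "ApprovedScribe", "action": "allowed"})
print(verify(log))  # True; flips to False if any past entry is edited
```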
Deploy AI governance policies across every department — ED, ICU, outpatient clinics, administration — without disrupting clinical workflows. Approved tools remain accessible; unapproved tools are blocked with clear staff guidance.
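One common way to express such policies is a per-department allow-list with a default-block action. The structure below is an illustrative sketch only, not Aona's actual configuration format; the department and tool names are hypothetical.

```python
# Hypothetical per-department AI policy: approved tools stay accessible,
# everything else is blocked with a pointer to staff guidance.
POLICIES = {
    "ED":         {"approved": {"ApprovedScribe"}, "default": "block"},
    "ICU":        {"approved": {"ApprovedScribe", "RadiologyAssist"}, "default": "block"},
    "Outpatient": {"approved": {"ApprovedScribe"}, "default": "block"},
    "Admin":      {"approved": {"ApprovedDraftAssist"}, "default": "block"},
}

def decide(department: str, tool: str) -> str:
    policy = POLICIES.get(department, {"approved": set(), "default": "block"})
    if tool in policy["approved"]:
        return "allow"
    return f"{policy['default']}: see approved-tool list and staff guidance"

print(decide("ED", "ChatGPT"))           # block: see approved-tool list and staff guidance
print(decide("ICU", "RadiologyAssist"))  # allow
```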
Aona gives Australian healthcare organisations full AI visibility, patient data protection controls, and audit-ready compliance documentation — deployed in under 5 minutes, no IT project required.
Questions? Read our AI Governance guide · AI Compliance framework · AI Security controls