While the EU passed its landmark AI Act and the US debates federal AI legislation, Australia is taking a quieter but equally significant path: embedding AI obligations directly into the Privacy Act 1988. The result is a compliance landscape that looks familiar on the surface but contains new AI-specific tripwires that most enterprises are not yet prepared for.
The key deadline: December 10, 2026. From that date, Australian organisations must disclose when automated systems — including AI — make decisions that significantly affect individuals. This is not theoretical future legislation. The Privacy and Other Legislation Amendment Act 2024 (POLA Act) is already law.
What Is Actually Changing in December 2026
The new Automated Decision-Making (ADM) transparency obligation requires organisations to update their privacy policies to clearly disclose:
- When and how automated processes (including AI) are used to make decisions with significant effects on individuals
- What types of personal information are used by those AI systems
- The kinds of decisions those systems facilitate or influence
The definition of ADM is intentionally broad — it covers AI-powered hiring tools, credit scoring models, fraud detection, customer service chatbots that route complaints, and any internal system that uses personal data to produce outcomes affecting employees or customers.
The Office of the Australian Information Commissioner (OAIC) is already conducting compliance sweeps in 2026, focusing on privacy policies. Non-compliance carries real financial penalties under the POLA Act — up to AU$50 million for serious or repeated breaches.
Why Shadow AI Makes This Exponentially Harder
Here is the compliance problem most enterprises have not yet confronted: you cannot disclose how AI systems use personal data if you do not know which AI systems are processing that data in the first place.
Shadow AI — the use of unauthorised or unmonitored AI tools by employees — is endemic in Australian organisations. Salesforce research found 55% of GenAI adopters use unapproved tools at work. When employees paste customer data, HR records, or financial information into ChatGPT, an AI transcription tool, or a productivity AI without IT approval, that data is being processed by an AI system that almost certainly is not disclosed in your privacy policy.
Under the new ADM transparency rules, this creates direct regulatory exposure. You are processing personal data through AI systems you cannot document.
The 4 Steps Australian Enterprises Must Take Before December 2026
1. Conduct an AI System Inventory
Before you can disclose AI usage, you need a complete inventory of every AI system processing personal data in your organisation — including approved tools, Shadow AI, and AI features embedded in SaaS platforms (the AI now built into Salesforce, Microsoft 365, Workday, etc.). This is not a one-time exercise; it requires continuous discovery.
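As a concrete starting point, a minimal inventory record might look like the Python sketch below. The field names and example values are illustrative assumptions, not a prescribed format; the point is that every AI system gets a structured record your privacy team can query and keep current.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative schema only)."""
    name: str                          # e.g. "CV screening model"
    vendor: str
    approval_status: str               # "approved", "shadow", or "embedded-saas"
    personal_data_categories: list[str] = field(default_factory=list)
    decision_types: list[str] = field(default_factory=list)
    last_reviewed: date | None = None  # stale records defeat the purpose

inventory = [
    AISystemRecord(
        name="CV screening model",
        vendor="ExampleVendor",        # hypothetical vendor name
        approval_status="approved",
        personal_data_categories=["employment history", "contact details"],
        decision_types=["shortlisting job candidates"],
        last_reviewed=date(2026, 3, 1),
    ),
]
```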
2. Assess Which Systems Meet the ADM Threshold
Not every AI system triggers the disclosure obligation — only those making decisions that could reasonably be expected to "significantly affect the rights or interests" of an individual. Work with legal counsel to apply this threshold to your AI inventory. High-risk categories include: automated performance management, AI-assisted hiring, automated credit decisions, and AI-powered fraud flags that restrict customer accounts. A simple triage pass, as sketched below, can surface the records that most need legal eyes.
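Legal judgment cannot be automated, but triage can. The keyword list below is an assumption drawn from the high-risk categories above; the sketch deliberately flags systems for review rather than attempting to decide whether the statutory threshold is actually met.

```python
# Heuristic triage only: flags systems for legal review; it does NOT
# determine whether a decision "significantly affects" an individual.
HIGH_RISK_KEYWORDS = {
    "hiring", "recruitment", "credit", "fraud",
    "performance", "termination", "eligibility",
}

def needs_legal_review(decision_types: list[str]) -> bool:
    """Return True if any decision description matches a high-risk keyword."""
    text = " ".join(decision_types).lower()
    return any(keyword in text for keyword in HIGH_RISK_KEYWORDS)

print(needs_legal_review(["shortlisting job candidates in hiring"]))  # True
print(needs_legal_review(["suggesting email subject lines"]))         # False
```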
3. Update Your Privacy Policy
Privacy policies need plain-English disclosure of: (a) which AI systems are used, (b) what personal data they process, (c) what types of decisions they influence, and (d) any human oversight mechanisms. Generic "we may use automated systems" language will not satisfy the OAIC's requirements.
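One way to keep disclosures accurate is to generate them from the inventory rather than hand-write them. The sketch below assumes a simple record shape and renders the four elements (a)–(d); the wording is a hypothetical starting point for your privacy team to refine, not OAIC-approved text.

```python
def disclosure_entry(system: dict) -> str:
    """Render one plain-English privacy policy entry covering (a)-(d)."""
    return (
        f"We use {system['name']} to help with {', '.join(system['decisions'])}. "  # (a) + (c)
        f"It processes: {', '.join(system['data'])}. "                              # (b)
        f"Human oversight: {system['oversight']}."                                  # (d)
    )

print(disclosure_entry({
    "name": "an automated CV screening tool",
    "decisions": ["shortlisting job candidates"],
    "data": ["employment history", "contact details"],
    "oversight": "a recruiter reviews every shortlist before candidates are contacted",
}))
```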
4. Implement Ongoing AI Governance
Compliance is not a one-time event. New AI tools will be adopted (and Shadow AI will emerge). Your governance framework needs continuous AI discovery, usage monitoring, and a mechanism to update privacy disclosures as your AI footprint changes.
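That update mechanism can be as simple as a scheduled diff between what discovery currently finds and what the privacy policy currently discloses. A minimal sketch, assuming both sides are reduced to sets of system names:

```python
def disclosure_drift(discovered: set[str], disclosed: set[str]) -> dict[str, set[str]]:
    """Flag gaps between the live AI inventory and the published privacy policy."""
    return {
        "undisclosed": discovered - disclosed,  # in use but not in the policy: update required
        "stale": disclosed - discovered,        # in the policy but retired: prune the disclosure
    }

drift = disclosure_drift(
    discovered={"cv-screening", "fraud-flagging", "meeting-transcription"},
    disclosed={"cv-screening", "fraud-flagging"},
)
print(drift["undisclosed"])  # {'meeting-transcription'} -> policy needs updating
```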
What About APRA-Regulated Entities?
For banks, insurers, and superannuation funds regulated by APRA, the Privacy Act changes layer on top of existing obligations under CPS 234 (Information Security) and the broader APRA data governance framework. APRA expects regulated entities to maintain risk management systems that cover all information assets — and AI systems processing customer data clearly fall within scope.
APRA has also signalled increasing scrutiny of AI governance through its Prudential Practice Guide CPG 234. APRA-regulated entities should treat the Privacy Act ADM obligation as a floor, not a ceiling — the expectation is that your AI governance maturity exceeds basic disclosure.
The Statutory Tort: A New Litigation Risk
Since June 2025, individuals can bring direct legal action against organisations for serious privacy invasions under the new statutory tort. An AI system that exposes personal data, makes a consequential automated decision without disclosure, or fails to implement reasonable safeguards is a potential litigation trigger. This shifts AI governance from a compliance checkbox to an active litigation risk management imperative.
The Timeline at a Glance
- December 2024 — POLA Act enacted: stronger penalties and expanded OAIC powers
- June 2025 — Statutory tort for serious privacy invasions takes effect
- 2026 — OAIC compliance sweeps underway, focusing on privacy policy currency
- December 10, 2026 — ADM transparency obligations in force: privacy policies must disclose AI use in automated decisions
- December 10, 2026 — Children's Online Privacy Code registration deadline
How Aona Helps Australian Enterprises Comply
Aona's AI Governance platform gives Australian enterprises the discovery and control layer they need to meet these obligations:
- AI Discovery: Automatically detects all AI tools in use across your organisation — approved, Shadow AI, and embedded SaaS AI features
- Usage Audit Trail: Maintains a complete log of what data flows through which AI systems — the evidence base your privacy team needs
- Shadow AI Guardrails: Blocks or redirects personal data from entering unauthorised AI tools before it creates a privacy exposure (a simplified illustration of this kind of gate follows this list)
- Compliance Reporting: Generates the inventory and data flow documentation your legal and privacy teams need to update privacy policies accurately
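For a sense of what a guardrail does mechanically, here is a deliberately simplified sketch: it gates an outbound prompt on whether likely personal data is headed to an unapproved AI endpoint. This is a generic illustration, not Aona's implementation; production guardrails rely on proper data-loss-prevention classifiers rather than a couple of regexes, and the host allow-list here is hypothetical.

```python
import re

# Generic illustration only; real guardrails use DLP classifiers, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "au_phone": re.compile(r"(?:\+61|0)[23478]\d{8}"),
}
APPROVED_AI_HOSTS = {"ai.internal.example.com"}  # hypothetical allow-list

def allow_outbound_prompt(destination_host: str, prompt: str) -> bool:
    """Permit the prompt unless it carries likely PII to an unapproved AI host."""
    if destination_host in APPROVED_AI_HOSTS:
        return True
    return not any(p.search(prompt) for p in PII_PATTERNS.values())

print(allow_outbound_prompt("chat.example.ai", "Jo owes $500, email jo@corp.example"))  # False
```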
The December 2026 deadline is closer than it appears. Start with an AI inventory — book a free 15-minute discovery call to see what AI systems are operating across your organisation right now.