AI Adoption in the Energy & Utilities Sector
The energy and utilities sector is undergoing a dual transformation — the energy transition from fossil fuels to renewables, and the digital transformation driven by AI and advanced analytics. These converging forces are creating unprecedented demand for AI capabilities across every aspect of energy operations.
Australian energy companies are deploying AI across a wide range of use cases:
- grid management and load balancing across increasingly complex networks with distributed generation from rooftop solar, battery storage, and wind farms;
- predictive maintenance of generation assets, transmission infrastructure, substations, and distribution networks;
- demand forecasting to optimise generation dispatch, energy trading, and network investment planning;
- vegetation management using satellite imagery and AI to identify encroachment risks on transmission corridors;
- customer-facing applications including smart energy management, chatbots, bill prediction, and hardship identification;
- renewable energy forecasting to predict solar and wind generation for grid stability; and
- asset management and capital expenditure optimisation across aging infrastructure portfolios.
The commercial imperative is compelling. Australian energy networks manage over $90 billion in regulated assets, and even marginal improvements in asset utilisation, outage prediction, or demand forecasting translate to hundreds of millions in value. AGL, Origin Energy, EnergyAustralia, and the network businesses (Ausgrid, Endeavour Energy, Transgrid, AusNet) are all investing heavily in AI capabilities.
However, the energy sector's critical infrastructure status fundamentally changes the AI governance calculus. Unlike retail or professional services, a cybersecurity incident in the energy sector can cascade into public safety emergencies — blackouts affecting hospitals, water treatment facilities, and emergency services. The Security of Critical Infrastructure Act 2018 (SOCI Act), as amended in 2022, imposes mandatory cybersecurity obligations on energy sector entities, and AI systems that touch operational technology introduce new risk vectors that fall squarely within SOCI's regulatory scope.
The challenge for energy CISOs and CIOs is to enable AI innovation while maintaining the operational resilience and security posture that critical infrastructure demands.
Key AI Security Risks in Energy & Utilities
Energy and utilities organisations face AI security risks that span both digital and physical domains, with potential consequences that extend far beyond the organisation to public safety and national security.
SCADA/ICS AI Integration Risks: The most critical risk in the energy sector is the integration of AI with supervisory control and data acquisition (SCADA) systems and industrial control systems (ICS). AI-powered predictive maintenance, anomaly detection, and grid optimisation systems increasingly consume data from — and in some cases send commands to — operational technology environments. A compromised AI system that interfaces with SCADA could manipulate load balancing, trip circuit breakers, alter generator dispatch instructions, or disable safety interlocks. The 2015 and 2016 attacks on Ukraine's power grid demonstrated that adversaries actively target energy OT systems, and AI integration expands the attack surface.
Adversarial Attacks on Grid Management AI: AI models managing electricity grid operations are vulnerable to adversarial manipulation. Data poisoning attacks could corrupt demand forecasting models, causing incorrect generation dispatch and potentially grid instability. Evasion attacks could cause anomaly detection AI to miss genuine equipment failures or security incidents. Model extraction attacks could allow adversaries to understand and circumvent grid management algorithms.
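One basic defensive step against data poisoning is to screen incoming observations before they are fed into retraining. The following is a minimal sketch, assuming univariate demand readings and using a median-absolute-deviation filter; the function name and threshold are illustrative, not a production defence against a determined adversary:

```python
import statistics

def screen_training_points(history, candidates, threshold=6.0):
    """Flag candidate demand readings that deviate wildly from recent history.

    A crude robust filter: points more than `threshold` scaled MADs from the
    median of recent history are quarantined for human review rather than
    silently fed into model retraining.
    """
    med = statistics.median(history)
    # Median absolute deviation; guard against a zero MAD on flat data.
    mad = statistics.median(abs(x - med) for x in history) or 1e-9
    accepted, quarantined = [], []
    for x in candidates:
        # 1.4826 scales MAD to be comparable to a standard deviation.
        if abs(x - med) / (1.4826 * mad) > threshold:
            quarantined.append(x)
        else:
            accepted.append(x)
    return accepted, quarantined
```

A filter like this does not stop slow, subtle poisoning, but it forces gross manipulation of forecast inputs through a human review step.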
Operational Technology Data Sensitivity: OT data from energy infrastructure — load profiles, generation capacity, network topology, protection relay settings, and SCADA configurations — is highly sensitive from a national security perspective. When engineering teams use AI tools to analyse this data, they risk exposing critical infrastructure information. Australia's critical infrastructure risk management program (CIRMP) rules require energy entities to identify and mitigate such risks.
Shadow AI in Engineering Teams: Engineers, network planners, and operational staff in energy companies often work with specialised datasets that they may feed into AI tools for analysis. Transmission planning data, network augmentation proposals, generator performance data, and fault analysis reports processed through unapproved AI tools create both security and regulatory compliance risks.
Customer Data and Metadata Sensitivity: Energy retailers collect granular consumption data that reveals detailed patterns of daily life — when occupants are home, sleep patterns, appliance usage, and even health-related equipment use. Smart meter data processed by AI creates significant privacy risks under the Privacy Act and under the Consumer Data Right, which has been extended to the energy sector.
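One practical mitigation is to coarsen meter data before it reaches analytics or AI tooling, since daily totals reveal far less about occupancy patterns than half-hourly intervals. A minimal sketch, assuming interval readings arrive as (ISO timestamp, kWh) pairs; the function name and input format are illustrative:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_daily(readings):
    """Collapse interval meter readings to daily totals before analysis.

    readings: iterable of (iso_timestamp, kwh) tuples.
    Coarser aggregates reduce the re-identification risk that comes with
    fine-grained usage patterns (sleep times, appliance signatures).
    """
    daily = defaultdict(float)
    for ts, kwh in readings:
        day = datetime.fromisoformat(ts).date().isoformat()
        daily[day] += kwh
    return dict(daily)
```

The right granularity depends on the use case: billing reconciliation may need intervals, while churn or hardship modelling often works on daily or weekly aggregates.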
Third-Party and Supply Chain AI Risks: Energy companies rely on extensive vendor ecosystems — equipment manufacturers, software providers, managed service providers, and consultants — many of whom are introducing AI into their products and services. AI embedded in vendor solutions that connect to energy OT networks creates supply chain risk that must be assessed under SOCI Act obligations.
SOCI Act and AESCSF Compliance for AI Systems
The regulatory framework for energy sector cybersecurity in Australia imposes specific obligations that directly affect AI governance.
Security of Critical Infrastructure Act 2018 (SOCI Act): The SOCI Act, significantly strengthened by the 2022 amendments, applies to critical infrastructure assets in the energy sector including electricity generation, transmission, and distribution networks, gas processing and distribution, and energy market operators. SOCI obligations relevant to AI include the Critical Infrastructure Risk Management Program (CIRMP), which requires responsible entities to identify, manage, and mitigate material risks to critical infrastructure. AI systems that interface with or influence OT operations represent material cyber risk that must be addressed in the CIRMP. The Act also provides government assistance powers — in extreme circumstances, the government can direct entities to take actions to address cyber incidents, including shutting down AI systems that pose a risk to critical infrastructure.
AESCSF (Australian Energy Sector Cyber Security Framework): The AESCSF, developed by AEMO in collaboration with the energy sector, provides a maturity model for energy sector cybersecurity. AI governance should align with AESCSF capability areas including asset management (inventorying AI systems as cyber assets), access control (managing who and what can interact with AI systems), situational awareness (monitoring AI system behaviour for anomalies), and incident response (responding to AI-related security events). The AESCSF assessment process, conducted annually by most energy businesses, should now explicitly evaluate AI-related cybersecurity maturity.
NERC CIP for Companies with US Exposure: Australian energy companies with US operations or subsidiaries must comply with NERC Critical Infrastructure Protection (CIP) standards. NERC CIP-013 (supply chain risk management) is particularly relevant for AI vendor relationships. CIP-005 (electronic security perimeters) affects AI systems that cross security boundaries. CIP-007 (system security management) governs AI system patching, access, and monitoring.
AI-Specific Regulatory Obligations: While there is no AI-specific regulation for the energy sector yet, several regulatory developments are shaping expectations. The Australian Government's voluntary AI Ethics Principles provide a framework that energy regulators reference. The Australian Energy Regulator (AER) is developing expectations around AI use in regulated network businesses, particularly for capital expenditure proposals and revenue determinations that rely on AI modelling. AEMO's operational procedures increasingly reference AI and automated decision-making in market and network operations.
OT Security Standards for AI: AI systems that interact with OT environments should comply with IEC 62443 (industrial cybersecurity), which provides a framework for securing industrial automation and control systems. This includes security levels for AI components in OT architectures, zone and conduit models for AI data flows, and security lifecycle management for AI systems in industrial environments.
Building an AI Governance Framework for Energy Organisations
Energy organisations need AI governance frameworks that address the unique intersection of critical infrastructure protection, OT security, and commercial AI innovation.
Critical Infrastructure AI Governance Committee: Establish a governance body that reflects the critical infrastructure context. Include the CISO and IT security leadership, the OT security manager and control systems engineering, the Chief Operating Officer or head of operations, network planning and asset management leadership, energy market and trading operations, regulatory affairs and compliance, and work health and safety. This committee must have authority to approve, restrict, or prohibit AI deployments based on critical infrastructure risk assessment.
AI System Classification for Energy: Implement a classification system that reflects OT safety and security implications.
- Zone 1 (Critical OT): AI systems directly interfacing with SCADA, protection systems, or generation control. These require the highest governance, including independent security assessment, safety case analysis, and board-level risk acceptance.
- Zone 2 (OT-Adjacent): AI systems consuming OT data for analytics but with no write access to control systems. These require rigorous data flow controls, network segmentation validation, and OT security review.
- Zone 3 (Business Critical): AI systems processing sensitive business data (trading positions, network plans, customer data). These require standard enterprise security review plus energy sector-specific data handling assessment.
- Zone 4 (General Business): AI systems for general business functions with no OT or sensitive data exposure. These require baseline security review and acceptable use compliance.
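The zone model lends itself to a simple triage rule. A minimal sketch, reducing the assessment to three questions purely for illustration; a real classification would also weigh safety cases, network placement, and vendor posture:

```python
def classify_ai_system(writes_to_ot: bool, reads_ot_data: bool,
                       handles_sensitive_business_data: bool) -> str:
    """Map an AI system's data and control footprint to a governance zone.

    Illustrative triage only: write access to OT dominates everything else,
    then OT data consumption, then sensitive business data.
    """
    if writes_to_ot:
        return "Zone 1 (Critical OT)"
    if reads_ot_data:
        return "Zone 2 (OT-Adjacent)"
    if handles_sensitive_business_data:
        return "Zone 3 (Business Critical)"
    return "Zone 4 (General Business)"
```

Encoding the triage, even crudely, makes the governance gate auditable: every AI intake request gets a recorded zone and the review obligations that follow from it.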
OT/IT Boundary Controls for AI: Implement defence-in-depth controls at the boundary between IT and OT for AI systems. Deploy AI edge computing within the OT perimeter for time-critical operational analytics, reducing the need to transmit OT data to cloud AI services. Use data diodes or one-way gateways for AI systems that need OT data feeds. Never permit cloud-based AI services to have direct network connectivity to OT environments. Implement protocol-aware monitoring at IT/OT boundaries that can detect AI-related anomalous traffic. Maintain air-gapped environments for the most critical control systems, with manual data transfer procedures for any AI analysis.
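The "never permit cloud AI connectivity from OT" rule can be expressed as an explicit boundary policy. A minimal sketch, where the OT subnet and the cloud AI endpoint list are hypothetical examples, not a recommended configuration:

```python
import ipaddress

# Hypothetical OT address space and cloud AI endpoints, for illustration only.
OT_SUBNETS = [ipaddress.ip_network("10.20.0.0/16")]
CLOUD_AI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}

def boundary_decision(src_ip: str, dest_domain: str) -> str:
    """Deny any flow from OT address space to a known cloud AI endpoint.

    Everything else is passed to protocol-aware inspection rather than
    blanket-allowed, consistent with defence-in-depth at the IT/OT boundary.
    """
    src = ipaddress.ip_address(src_ip)
    from_ot = any(src in net for net in OT_SUBNETS)
    if from_ot and dest_domain in CLOUD_AI_DOMAINS:
        return "DENY"
    return "INSPECT"
```

In practice this logic lives in firewall and DNS-filtering policy rather than application code, but stating it this explicitly makes the control testable in boundary audits.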
Vendor Assessment for Energy AI: Energy AI vendors — providers of grid analytics, predictive maintenance, demand forecasting, and market optimisation solutions — require assessment against critical infrastructure standards. Evaluate vendor AESCSF maturity alignment, OT security certifications and experience, data sovereignty (Australian data hosting for critical infrastructure data), security clearance capability for sensitive infrastructure work, incident response and business continuity provisions, and contractual obligations around SOCI Act compliance and government assistance cooperation.
Change Management for Operational AI: AI systems affecting energy operations require rigorous change management aligned with AEMO procedures and network operator protocols. Test AI changes in isolated environments that replicate operational conditions. Conduct staged deployments with monitoring and immediate rollback capability. Maintain manual override and bypass procedures for all AI-assisted operational decisions. Document AI model versions, training data lineage, and performance baselines. Schedule AI system changes during low-risk operational periods with appropriate operational coordination.
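The documentation requirements above (model versions, training data lineage, performance baselines, rollback) can be captured in a structured change record. A minimal sketch with illustrative field names; align them with your actual change-management system and AEMO coordination procedures:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIChangeRecord:
    """Minimal change-management record for an operational AI deployment."""
    system: str
    model_version: str
    training_data_ref: str       # lineage pointer, e.g. a dataset snapshot ID
    performance_baseline: dict   # metrics captured before deployment
    rollback_version: str        # version restored if monitoring trips
    change_window: date          # scheduled low-risk operational period
    approvals: list = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        # Require a documented rollback path and at least one recorded approval.
        return bool(self.rollback_version and self.approvals)
```

A record like this gives incident responders the two things they need most after a misbehaving AI change: what exactly was deployed, and what to roll back to.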
Shadow AI Prevention and Workforce AI Governance in Energy
Shadow AI in the energy sector poses risks that extend beyond data privacy to critical infrastructure security and operational safety.
High-Risk Shadow AI Scenarios in Energy: Network engineers pasting transmission line ratings, protection settings, or network topology data into AI tools for analysis. Generation operations staff using AI to interpret turbine performance data or SCADA alarm patterns. Energy traders inputting market position data, bidding strategies, or contract terms into AI for analysis. Field technicians using AI to troubleshoot equipment faults by uploading equipment manuals, diagnostic data, or site photographs. Customer service teams feeding customer complaint data, consumption patterns, or hardship information into AI for response drafting. Network planners using AI to analyse demand forecasts and augmentation proposals containing sensitive network planning data.
The Distributed Workforce Challenge: Energy utilities operate large, geographically distributed workforces — field technicians, line workers, substation operators, generation plant staff — many of whom work remotely or at distributed sites. Traditional enterprise security controls may not extend effectively to these workers, who increasingly use mobile devices and may have limited corporate network connectivity. Shadow AI adoption among field workers — using smartphone AI apps to identify equipment, interpret fault codes, or draft reports — is particularly difficult to detect and control.
Technical Controls for Energy: Implement network segmentation ensuring OT networks have no path to external AI services, even through employee devices. Deploy mobile device management (MDM) with AI application controls for all field workforce devices. Use DLP rules configured for energy-specific data patterns — equipment identifiers, network node references, generation capacity figures, and SCADA system parameters. Implement DNS filtering and web proxy controls blocking known AI service endpoints on both corporate and OT-adjacent networks. Monitor for AI API traffic from engineering workstations and operational systems.
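Energy-specific DLP rules amount to pattern matching on outbound content, such as prompts headed for an external AI service. A minimal sketch; the identifier formats below are hypothetical, and real rules would match your actual asset-naming and SCADA tag conventions:

```python
import re

# Hypothetical energy-sector data patterns, for illustration only.
ENERGY_DLP_PATTERNS = {
    "equipment_id": re.compile(r"\bTX-\d{4}-[A-Z]{2}\b"),
    "scada_tag":    re.compile(r"\bSCADA[._][A-Z0-9_.]+\b"),
    "feeder_ref":   re.compile(r"\bFDR[- ]?\d{3,5}\b"),
}

def scan_outbound_text(text: str) -> list:
    """Return the names of energy-specific patterns found in outbound text.

    A hit would typically block the transfer or escalate it for review,
    depending on the destination and the user's role.
    """
    return [name for name, pat in ENERGY_DLP_PATTERNS.items()
            if pat.search(text)]
```

Regex-based rules catch the obvious leaks; pairing them with destination-aware controls (AI endpoint deny-lists, proxy inspection) covers the cases where the data itself is unstructured.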
Providing Safe AI Alternatives: Deploy approved AI tools tailored to energy sector needs. Provide an on-premise or private cloud AI platform for engineering analysis that keeps OT data within controlled environments. Offer an approved predictive maintenance AI with proper OT security controls and data handling. Supply a vetted customer analytics AI platform with Privacy Act-compliant data handling for retail operations. Create approved prompt templates and AI workflows for common engineering and operational tasks.
Training and Culture: Energy sector AI training must emphasise the critical infrastructure context. Conduct role-specific training that connects AI security to operational safety and public safety outcomes. Use scenario-based exercises — "what happens if an adversary poisons the demand forecasting model?" — to make risks tangible. Include AI security in operational readiness assessments and safety briefings. Engage with AESCSF training and industry forums (such as the Australian Cyber Security Centre's energy sector programs) to stay current on emerging AI threats to energy infrastructure.