AI in the Public Sector: Opportunities and Obligations
Government agencies at every level are exploring and deploying AI to improve service delivery, enhance decision-making, and increase operational efficiency. From the federal AI strategy to state and local digital transformation initiatives, the public sector is embracing AI across a wide range of applications.
Common government AI use cases include citizen service automation through chatbots and virtual assistants, fraud detection in benefits programs, document processing and classification for FOIA requests, predictive analytics for resource allocation, cybersecurity threat detection, intelligence analysis and national security applications, and regulatory compliance monitoring.
However, government AI deployment carries unique obligations. Public agencies must maintain transparency and accountability to citizens, protect classified and controlled unclassified information (CUI), comply with acquisition and procurement regulations, ensure equitable service delivery across all populations, and preserve civil liberties and constitutional rights. Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence established new requirements for federal agencies, including AI risk management, testing requirements, and transparency standards.
Security Classification and AI
Government agencies must carefully manage the intersection of AI tools and information classification levels.
Classified Information and AI: AI tools must never be used with classified information unless they operate within appropriately accredited classified environments. In practice, this means no commercial AI services for any classified data processing; air-gapped AI environments for classified workloads; cleared personnel for classified AI operations; and SCIFs or other cleared facilities for classified AI discussions.
Controlled Unclassified Information (CUI): CUI represents a significant challenge for government AI adoption. Many agencies want to use AI for processing CUI — contracting data, personally identifiable information, law enforcement sensitive data — but must ensure AI tools meet CUI protection requirements under NIST SP 800-171.
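As a minimal sketch of how such a requirement might be enforced at the tool boundary, the following Python check blocks marked CUI from reaching unapproved AI services. The marking pattern, tool identifier, and function name are illustrative assumptions, not an official scheme; a real deployment would use the agency's CUI program markings and assessed-tool list.

    import re

    # Illustrative CUI banner markings and tool allowlist; an agency's real
    # markings and approved tools come from its CUI program office.
    CUI_MARKING = re.compile(r"\bCUI\b|\bCONTROLLED\b")
    APPROVED_CUI_TOOLS = {"agency-internal-llm"}  # assessed against SP 800-171

    def may_submit_to_ai(text: str, tool_id: str) -> bool:
        """Allow submission only if the text carries no CUI marking or the
        destination tool is approved for CUI handling."""
        if CUI_MARKING.search(text):
            return tool_id in APPROVED_CUI_TOOLS
        return True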
FedRAMP Requirements: Cloud-based AI services used by federal agencies must achieve FedRAMP authorization. This requires a rigorous security assessment covering over 300 security controls at the Moderate baseline, continuous monitoring and regular reassessment, incident response capabilities, and supply chain risk management.
Impact Level Considerations: AI services must be authorized at the appropriate impact level as defined in the DoD Cloud Computing Security Requirements Guide: IL2 for public and non-CUI data, IL4 for CUI, IL5 for higher-sensitivity CUI and national security systems, and IL6 for classified information up to Secret. Most commercial AI services currently max out at IL2 or IL4, limiting their applicability for sensitive government workloads.
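A simple policy check makes the mapping concrete. The numeric tiers below paraphrase the SRG levels just described (the SRG itself is authoritative), and the function and dictionary names are illustrative:

    # Minimum impact level required per data sensitivity (illustrative
    # mapping of the DoD SRG tiers described above).
    MIN_IMPACT_LEVEL = {"public": 2, "cui": 4, "cui_high": 5, "secret": 6}

    def service_may_process(service_il: int, data_sensitivity: str) -> bool:
        """A service can handle data only if its authorized level meets or
        exceeds the minimum for that sensitivity."""
        return service_il >= MIN_IMPACT_LEVEL[data_sensitivity]

    # An IL4-authorized commercial AI service can process CUI, not Secret data.
    assert service_may_process(4, "cui") and not service_may_process(4, "secret")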
NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF) provides a voluntary framework for managing AI risks that is becoming the de facto standard for government AI governance.
Core Functions: The AI RMF organizes AI risk management into four core functions. Govern establishes and maintains the organizational structures, policies, and processes for AI risk management. Map identifies and catalogs AI systems, their contexts, and potential impacts. Measure assesses and monitors AI risks through quantitative and qualitative methods. Manage implements risk treatment strategies and response actions.
Implementing AI RMF in Government: Government agencies should map AI RMF functions to existing risk management processes. Integrate AI risk management with agency Enterprise Risk Management (ERM), align AI controls with NIST Cybersecurity Framework and SP 800-53, leverage existing authorization processes (ATO) for AI system assessment, and incorporate AI risk into agency FISMA reporting.
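One lightweight way to start that mapping is a documented crosswalk from each AI RMF function to the agency processes that already exist. The entries below are illustrative examples of such a crosswalk, not an official NIST or OMB mapping:

    # Illustrative crosswalk: AI RMF core function -> existing agency processes.
    AI_RMF_CROSSWALK = {
        "Govern":  ["ERM risk register", "agency AI policy", "CAIO reporting"],
        "Map":     ["AI use case inventory", "FIPS 199 system categorization"],
        "Measure": ["SP 800-53 control assessment", "bias and accuracy testing"],
        "Manage":  ["POA&M tracking", "ATO decisions", "FISMA reporting"],
    }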
Trustworthy AI Characteristics: The AI RMF identifies seven characteristics of trustworthy AI that government agencies should evaluate: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed.
AI Governance for Federal Agencies
Federal agencies face specific governance requirements for AI deployment.
Chief AI Officer Requirements: EO 14110 requires federal agencies to designate Chief AI Officers responsible for coordinating AI use across the agency, managing AI risks, promoting AI innovation, ensuring compliance with AI policies, and reporting on AI usage and governance.
AI Use Case Inventory: Federal agencies must maintain and publish inventories of their AI use cases. Each inventory entry should include the AI system's purpose and function, data inputs and outputs, impact assessment results, risk mitigation measures, and human oversight mechanisms.
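A minimal sketch of one inventory record as structured data follows; the field names mirror the elements just listed but are assumptions, not an official federal schema:

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCaseEntry:
        """One record in a published AI use case inventory."""
        system_name: str
        purpose: str
        data_inputs: list[str]
        data_outputs: list[str]
        impact_assessment: str                  # link to or summary of results
        risk_mitigations: list[str] = field(default_factory=list)
        human_oversight: str = "human review of consequential decisions"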
Rights-Impacting AI: AI systems that impact individual rights — such as benefit eligibility determinations, law enforcement decisions, or immigration processing — require heightened scrutiny including impact assessments evaluating effects on civil liberties, notice to affected individuals that AI is being used, appeal mechanisms for AI-assisted decisions, regular auditing for bias and accuracy, and human review of consequential decisions.
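The oversight pattern can be sketched as a gate in code: a rights-impacting AI recommendation never takes effect without notice to the individual and human sign-off. The class and field names here are hypothetical illustrations of that pattern:

    from dataclasses import dataclass

    @dataclass
    class RightsImpactingRecommendation:
        suggested_outcome: str   # e.g., "eligible" / "ineligible"
        rationale: str           # explanation surfaced to the human reviewer
        notice_given: bool       # individual was told AI was involved
        appeal_channel: str      # how the individual can contest the result

    def finalize(rec: RightsImpactingRecommendation, reviewer_approves: bool) -> str:
        """Only a human reviewer can turn an AI recommendation into a decision."""
        if not rec.notice_given:
            raise ValueError("AI-use notice must precede any decision")
        return rec.suggested_outcome if reviewer_approves else "returned for manual adjudication"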
Procurement and Acquisition: Government AI procurement must comply with FAR and agency-specific acquisition regulations. Include AI-specific requirements in contracts regarding data ownership and handling, model transparency and explainability, performance monitoring and reporting, bias testing and fairness requirements, and security and privacy controls.
Securing AI in Government Operations
The following are practical security measures for government AI deployment across common use cases.
Citizen-Facing AI Services: Government chatbots and virtual assistants interacting with the public must clearly identify as AI systems (no deceptive design), comply with Section 508 accessibility requirements, protect personally identifiable information (PII), provide accurate information based on authoritative sources, offer pathways to human assistance, and maintain logs for accountability and FOIA compliance.
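A condensed sketch of those duties at the chat boundary follows; reply_fn and redact_fn are placeholders for the agency's chatbot backend and PII-redaction component, and the handoff keywords are illustrative:

    import json, time

    AI_NOTICE = "You are chatting with an automated assistant, not a person."

    def handle_message(user_msg: str, reply_fn, redact_fn,
                       log_path: str = "chat_log.jsonl") -> str:
        """Wrap a chatbot backend with AI disclosure, a human-assistance
        path, and audit logging."""
        if any(w in user_msg.lower() for w in ("human", "agent", "representative")):
            reply = "Connecting you with a staff member."
        else:
            reply = reply_fn(user_msg)
        # Redact PII before the record enters the retained audit log.
        record = {"ts": time.time(), "user": redact_fn(user_msg), "bot": reply}
        with open(log_path, "a") as f:   # retained for accountability and FOIA
            f.write(json.dumps(record) + "\n")
        return f"{AI_NOTICE}\n{reply}"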
Internal AI Tools: Agency staff using AI for document drafting, analysis, or research should be provided with approved AI tools that meet security requirements, clear guidance on what data can be used with AI, training on AI limitations and verification requirements, and reporting mechanisms for AI errors or concerns.
AI in Law Enforcement and National Security: AI used in law enforcement and national security contexts carries exceptional risk and requires the highest level of governance. Facial recognition and biometric AI must comply with agency privacy policies, predictive policing tools must be evaluated for bias and civil liberties impact, intelligence analysis AI must protect sources and methods, and all law enforcement AI should include human decision-making authority.
AI for Cybersecurity: Government agencies increasingly use AI for cyber defense. These tools must be integrated with existing security operations, validated against agency threat models, monitored for adversarial manipulation, and compliant with CDM (Continuous Diagnostics and Mitigation) program requirements.
State and Local Government Considerations
State and local governments face AI security challenges that differ from those of federal agencies.
Resource Constraints: Many state and local agencies lack dedicated AI security expertise and budget. Start with foundational steps: adopt existing frameworks (NIST AI RMF, MS-ISAC guidance), leverage state-level shared services and cooperative agreements, participate in information sharing through MS-ISAC and state CISOs, and prioritize high-risk AI use cases for governance attention.
State AI Legislation: A growing number of states are enacting AI-specific legislation. Colorado, California, Illinois, and others have passed or proposed laws affecting government AI use. Track relevant state legislation, assess compliance requirements, update policies as new laws take effect, and coordinate with state attorneys general on AI compliance.
Intergovernmental Data Sharing: State and local agencies often share data across jurisdictions. AI tools processing shared data must comply with all parties' data use agreements, maintain data sovereignty requirements, implement access controls reflecting multi-jurisdictional access, and document AI processing in data sharing agreements.
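A minimal sketch of that gating logic, assuming each jurisdiction's data use agreement is recorded as a simple permission flag (a real system would consult the agreements themselves):

    def ai_processing_permitted(contributing_jurisdictions: set[str],
                                dua_permits_ai: dict[str, bool]) -> bool:
        """Shared data may be sent to an AI tool only if every contributing
        jurisdiction's data use agreement allows AI processing."""
        return all(dua_permits_ai.get(j, False) for j in contributing_jurisdictions)

    # Example: one missing or negative agreement blocks AI processing entirely.
    ok = ai_processing_permitted({"state_x", "county_y"},
                                 {"state_x": True, "county_y": False})  # False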
Public Transparency: State and local governments often face heightened public transparency requirements. Publish AI use policies and inventories, provide public notice of AI use in government services, create mechanisms for public input on AI deployment decisions, and report on AI system performance and outcomes.
Building Public Trust in Government AI
Government AI governance ultimately serves the goal of maintaining public trust.
Transparency Measures: Publish clear policies about how and where the agency uses AI, provide plain-language explanations of AI systems affecting the public, make AI impact assessments available for review, report on AI system performance metrics and error rates, and engage with civil society organizations on AI governance.
Accountability Mechanisms: Establish clear lines of accountability for AI decisions, implement appeal processes for AI-assisted determinations, conduct regular audits of AI systems for bias and accuracy, maintain detailed logs for oversight and investigation, and provide inspector general access to AI systems and data.
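Detailed logs support oversight only if they are tamper-evident. A minimal hash-chaining sketch of that property follows; it is an illustration of the technique, not a substitute for an agency records management system:

    import hashlib, json, time

    def append_audit_event(log: list, event: dict) -> dict:
        """Append a hash-chained audit record; altering any earlier record
        breaks every later hash, making tampering detectable."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {"ts": time.time(), "event": event, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        log.append(record)
        return record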
Equity and Fairness: Government has a special obligation to ensure equitable service delivery. Test AI systems for disparate impact across demographic groups, monitor outcomes for bias on an ongoing basis, provide alternative non-AI service channels for citizens, ensure AI accessibility for people with disabilities, and engage underserved communities in AI deployment decisions.
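As one concrete screening method for disparate impact, agencies can compute the widely used "four-fifths" ratio across demographic groups; a ratio below 0.8 is a screening convention that flags a system for closer review, not a legal determination:

    def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
        """Ratio of the lowest to highest favorable-outcome rate across
        groups; outcomes maps group -> (favorable_count, total_count)."""
        rates = [fav / total for fav, total in outcomes.values() if total > 0]
        return min(rates) / max(rates)

    # Example: 60% vs 80% approval rates give 0.75, below the 0.8 screening
    # threshold, so the system would be flagged for deeper bias review.
    ratio = disparate_impact_ratio({"group_a": (60, 100), "group_b": (80, 100)})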
Government agencies that prioritize transparent, accountable, and equitable AI governance will build the public trust necessary to realize AI's potential for improved public services.
