
Seven Phases of AI Governance for Local Government

A Practical Framework for Agencies Beginning Their AI Journey

Government Technology  |  AI Governance  |  Public Sector

AI is already inside your agency. Vendors are embedding it into the software you use every day — permitting systems, HR platforms, records management tools, and public safety software. In most cases, no one asked. It arrived as a feature update, buried in a release note, with no governance framework in place to receive it.

At the same time, regulations are moving fast. California’s Civil Rights Council implemented binding AI employment regulations in October 2025. Dozens of additional state and local AI laws are in various stages of development. The agencies that will navigate this landscape successfully are not the ones that adopt AI fastest — they are the ones that build the right governance foundations first.

What follows is a condensed version of a practical, phased governance framework developed from real public-sector cybersecurity and advisory experience. It is designed for the city manager, IT director, HR leader, or department head who needs a clear, actionable, easy-to-understand path — not an academic exercise.

The Core Principle: Governance Before Tools

The most common mistake organizations make is adopting AI tools before establishing the governance infrastructure to use them responsibly. Policies written after deployment are reactive. Data classified after AI ingestion is too late. Vendor contracts negotiated without AI governance language leave agencies legally exposed.

This framework is sequenced deliberately. Each phase builds on the one before it. The goal is not to slow down AI adoption — it is to make adoption sustainable, defensible, and aligned with the communities these agencies serve.

Phase 1: Establish Governance Authority First

Before any technical work begins, designate who is responsible for AI governance in your organization. This may be a named AI Governance Officer, an AI Steering Committee, or a hybrid model. The specific structure matters less than the fact that accountability exists. Without it, governance activities have no center of gravity, decisions stall, and there is no clear point of contact when regulators come asking. Establish a recurring executive reporting cadence from day one — even if early reports are brief. AI governance that lacks executive visibility, and that fails to keep elected officials informed, is governance in name only.

Phase 2: Data Governance and Classification Must Come First

AI cannot operate safely or compliantly without governed, classified data. Before activating any AI feature, conduct a thorough assessment of where your data lives, how it flows, who owns it, what regulatory requirements apply to it, and which data might be inadvertently exposed to AI systems. Define sensitivity labels — public, internal, confidential, restricted — and educate every employee on how to handle data accordingly. This is the foundation. Everything else depends on it.
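The sensitivity labels described above can be made operational in software. The following is a minimal sketch, assuming a four-tier labeling scheme and a hypothetical rule that only public and internal data may be exposed to AI features; the label names come from the text, but the eligibility rule and function names are illustrative, not a recommended policy.

```python
from enum import Enum

class Sensitivity(Enum):
    """The four sensitivity tiers named in the framework."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical handling rule: which tiers an AI feature may ingest.
# Each agency would set this based on its own data governance policy.
AI_ELIGIBLE = {Sensitivity.PUBLIC, Sensitivity.INTERNAL}

def may_ingest(label: Sensitivity) -> bool:
    """Return True if data carrying this label may be sent to an AI system."""
    return label in AI_ELIGIBLE

print(may_ingest(Sensitivity.INTERNAL))    # True
print(may_ingest(Sensitivity.RESTRICTED))  # False
```

The point of a sketch like this is that the classification decision is made once, in policy, and then enforced consistently everywhere an AI feature touches data — rather than being re-litigated by each employee at each moment.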

Phase 3: Start with Problems, Not Tools

Conduct structured interviews with department leaders to identify real operational challenges that AI might address. For each candidate use case, assess feasibility, operational impact, risks, staffing implications, and full cost — including ongoing expenses for in-house staff or MSP consulting support. Compile the results into an AI Use Case Roadmap, have leadership formally prioritize it, and use it to drive every subsequent resource and tooling decision. Agencies that skip this step end up with duplicate tools, misaligned spending, and AI deployments that solve problems no one had. Without the roadmap, funding and staff time become wasted effort.
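A use case roadmap entry can be as simple as a structured record scoring the factors listed above. The sketch below is illustrative only: the 1-to-5 scales, the scoring formula, and the example use cases are hypothetical placeholders, not part of the framework itself — the prioritization should ultimately be a leadership decision, with any score serving only as a conversation starter.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One entry in a hypothetical AI Use Case Roadmap."""
    name: str
    department: str
    feasibility: int   # 1 (hard) to 5 (easy) -- illustrative scale
    impact: int        # 1 (low) to 5 (high)
    risk: int          # 1 (low) to 5 (high)
    annual_cost: float # full cost, including staff or MSP support

    @property
    def priority(self) -> float:
        # Illustrative score: reward impact and feasibility, penalize risk.
        return (self.impact * self.feasibility) / self.risk

# Hypothetical examples for demonstration only.
roadmap = [
    UseCase("Permit triage assistant", "Planning", 4, 5, 2, 40_000),
    UseCase("HR resume screening", "HR", 3, 3, 5, 25_000),
]
roadmap.sort(key=lambda u: u.priority, reverse=True)
for u in roadmap:
    print(f"{u.name}: priority {u.priority:.1f}")
```

Note how a high-risk use case (resume screening, which triggers employment regulations) falls to the bottom even at similar cost — which is exactly the conversation the roadmap exists to force.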

Phase 4: Build Policy and Vendor Governance Around Your Use Cases

Only after your data governance is established and your use cases are defined should you develop formal AI governance policies. Policies written in the abstract tend to be ignored. Policies built around real, approved use cases get used. Alongside internal policy, embed AI governance requirements directly into vendor contracts — including mandatory disclosure of sub-processors and data residency and sovereignty commitments; notification obligations if the vendor's AI systems are attacked or compromised; resiliency and recovery terms; and liability terms for discriminatory or harmful AI outputs. Agencies that assume vendor contracts already handle these issues are almost always wrong.

Phase 5: Organizational Change Management Is Not Optional

The most technically sound AI governance framework will fail if employees do not understand it, trust it, or know what it requires of them. Develop a layered internal communications strategy tailored to different audiences — frontline staff, supervisors, and executives each need different messages. Pair it with role-appropriate training that covers data handling requirements, how to recognize and report unexpected AI behavior, and what the governance policies actually require in practice. Change management is not soft. In local government, it is often the difference between a governance program that lives on paper and one that actually changes behavior.

Phase 6: Regulatory Compliance Requires a Monitoring Function, Not a One-Time Review

A single legal review at the time of AI deployment is not sufficient. California’s AI regulatory landscape alone — covering employment discrimination, automated decision systems, data privacy, and civil rights — is evolving continuously. Assign ongoing responsibility for regulatory monitoring, maintain an internal AI regulatory tracker, conduct periodic reviews with qualified legal counsel, and build civil rights and algorithmic fairness impact assessments into your use case evaluation process. The cost of falling behind is not theoretical: it includes legal exposure, audit findings, loss of public trust, and — in systems that affect critical services — real harm to real people.

Phase 7: Build AI-Specific Incident Response Before You Need It

Traditional cybersecurity incident response plans were not designed for AI-specific failure modes: model poisoning, data corruption through adversarial inputs, rogue agentic AI behavior, or model collapse. Develop AI-specific runbooks that define what normal AI behavior looks like, how anomalies are detected, what triggers a shutdown or rollback of AI agent access and actions, and what the communication obligations are internally and externally. Complement this with red-team and blue-team exercises and annual tabletop simulations that involve IT, legal, HR, and leadership together. The organizations that respond well to AI incidents are the ones that have rehearsed them.
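A runbook trigger of the kind described above can be expressed as a simple tiered response rule. This is a minimal sketch assuming the agency logs a daily count of flagged (anomalous) AI outputs; the thresholds and response-tier names are illustrative placeholders that each agency's runbook would define for itself.

```python
# Hypothetical thresholds -- each agency sets its own in its runbook.
ESCALATION_THRESHOLD = 10  # flagged outputs/day before human escalation
SHUTDOWN_THRESHOLD = 50    # flagged outputs/day before rollback of AI access

def runbook_action(flagged_today: int) -> str:
    """Map a daily anomaly count to the runbook's response tier."""
    if flagged_today >= SHUTDOWN_THRESHOLD:
        return "disable-agent-and-rollback"
    if flagged_today >= ESCALATION_THRESHOLD:
        return "escalate-to-governance-officer"
    return "normal-operations"

print(runbook_action(3))   # normal-operations
print(runbook_action(12))  # escalate-to-governance-officer
print(runbook_action(75))  # disable-agent-and-rollback
```

The value of writing the trigger down in advance is that "what counts as abnormal" and "who decides to pull the plug" are settled before the incident, when judgment is calm — not during it.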

This Is a Governance Model, Not a Checklist

What distinguishes this framework from a deployment checklist is its cyclical nature. Each phase requires a scheduled review cadence, named accountability, and ongoing adaptation. Monthly executive reporting, quarterly roadmap reviews, annual red team exercises, and full governance audits — these are not nice-to-haves. They are what separates a governance program that holds up under regulatory scrutiny from one that looks good on paper until something goes wrong.

Local government agencies are under real pressure to demonstrate responsible AI stewardship to their constituents, their oversight bodies, and their regulators. The agencies that build this foundation now — before the pressure becomes a crisis — will be far better positioned to capture the genuine operational benefits that AI can deliver, while protecting the public trust that makes those agencies effective in the first place.

Responsible AI adoption is not primarily a technology challenge. It is a governance, accountability, and culture challenge. Technology is the easy part.

About the Author

Eudora Fleischman is Managing Director of Artemis Technology Advisors and a retired infrastructure and cybersecurity manager and CISO. She is a government cybersecurity and AI governance advisor with 31 years of technical and leadership experience, including 21 years working with public-sector organizations on cybersecurity GRC, data security, regulatory compliance, organizational resilience, disaster recovery, and responsible technology adoption.

Announcing the Local Government Officials Guide to Cybersecurity

We are thrilled to announce the official publication of a critical new resource: the Local Government Officials Guide to Cybersecurity (LGOGC)!

This project was developed by the Local Government Cybersecurity Alliance (LGCA) specifically to empower elected and appointed officials—from supervisors and council members to city managers and agency heads—to effectively navigate the increasingly complex world of cyber risk.

Moving Beyond the Technical Jargon

Cybersecurity is not just an IT department problem; it is an enterprise-wide, whole-of-government issue that impacts finance, legal compliance, emergency services, and public trust.

The LGOGC cuts through technical jargon to focus on what matters most to community leaders: governance, accountability, and resilience. This guide was truly built by and for local government professionals, ensuring every concept is practical and immediately relevant to your fiduciary duty to protect the systems that serve your communities.


What the Guide Will Help You Achieve

The LGOGC provides a clear, actionable framework to help local leaders translate responsibility into practical action. Inside, you’ll find guidance to:

  • Integrate cybersecurity into your strategic and budget planning.
  • Strengthen oversight and reporting mechanisms.
  • Align your efforts with nationally recognized frameworks, such as NIST CSF 2.0.
  • Build a culture of cyber resilience that spans all departments and elected offices.

Download and Share Your Feedback

We believe that making cybersecurity governance as natural and necessary as financial oversight is achievable in every county, city, town, village, and district. This guide is a huge step toward that goal.

Download the Local Government Officials Guide to Cybersecurity (LGOGC) now.

We invite your feedback! Tell us, in our community forum or through a white paper, how your jurisdiction is addressing these challenges and what resources would be most valuable to you next.


Governing AI: Ethical Use and Oversight in Local Government

Artificial intelligence (AI) is rapidly transforming how local governments operate—from automating administrative tasks to enhancing public safety and improving service delivery. But as these technologies become more embedded in public systems, so too does the need for thoughtful governance.

AI offers tremendous promise, but it also raises important questions about fairness, accountability, transparency, and privacy. Without clear ethical guidelines and oversight, even well-intentioned AI applications can lead to unintended consequences, such as biased decision-making or erosion of public trust.


Why AI Governance Matters

AI systems often make decisions that affect people’s lives—whether approving permits, prioritizing maintenance, or analyzing public data. Local governments must ensure these systems are used responsibly and align with community values.

Good governance helps:

  • Prevent misuse or overreach.
  • Ensure transparency in how decisions are made.
  • Protect civil liberties and privacy.
  • Build public confidence in digital services.

Key Elements of an AI Governance Framework

1. Ethical Principles

Start with a clear set of guiding values—such as fairness, accountability, transparency, and respect for individual rights. These principles should inform every stage of AI development and deployment.

2. Oversight and Accountability

Establish internal oversight bodies or designate responsible officials to review AI projects. Oversight should include legal, technical, and community perspectives to ensure balanced decision-making.

3. Risk Assessment

Before deploying AI, assess potential risks—such as bias, data privacy concerns, or unintended consequences. Consider how the system might impact different populations and whether safeguards are in place.

4. Transparency and Explainability

Residents should understand how AI systems work and how decisions are made. Use plain language to explain what data is collected, how it’s used, and what rights individuals have.

5. Public Engagement

Involve the community in discussions about AI use. Public input can help shape policies, identify concerns, and ensure that technology serves the public interest.

6. Training and Capacity Building

Ensure staff and leadership understand AI capabilities and limitations. Provide training on ethical considerations, data stewardship, and responsible procurement.


Tools and Frameworks to Guide Implementation

Local governments can draw from established frameworks to guide their AI governance efforts, including:

  • NIST AI Risk Management Framework: Offers a structured approach to identifying and managing AI risks.
  • OECD AI Principles: Promote inclusive growth, human-centered values, and transparency.
  • State and local AI task forces: Some jurisdictions have developed their own guidelines tailored to municipal needs.

These resources can help governments build policies that are both practical and principled.


AI is not just a technical tool—it’s a governance issue. As local governments adopt AI to improve services and efficiency, they must also ensure that these technologies are used ethically and transparently. By establishing clear frameworks, engaging the public, and investing in oversight, municipalities can harness the benefits of AI while safeguarding public trust.