
You Paid For The Lock — Now USE IT!

The Gap Between Owning Cyber Tooling and Fully Implementing It

You fought for the budget. You built the business case, presented the risk landscape to leadership, justified every line item, and won cybersecurity funding. New tools were purchased — Identity and Access Management, advanced EDR/XDR, SIEM and more. Boxes checked. Audit requirements are satisfied. A genuine win.

But here is the uncomfortable question nobody asks in the post-purchase debrief: did you actually fully implement them?

Not install. Not license. Implement — fully configured, integrated into your architecture, with every feature activated, monitored and tested. Because there is a dangerous gap between owning a security tool and deriving security from it. And that gap is exactly where attackers live.

Only 14% of organizations are confident they have the people and skills required to meet their cybersecurity needs today — WEF Global Cybersecurity Outlook 2025

The Stryker Wake-Up Call

On March 11, 2026, medical technology giant Stryker suffered a devastating cyberattack that wiped data from thousands of employee and personal devices across 79 offices worldwide. The attackers — an Iran-linked group — did not deploy malware. They did not exploit a zero-day vulnerability. They simply obtained high-privilege administrative credentials and weaponized Microsoft Intune’s Remote Wipe feature, a legitimate IT management tool built for lost or stolen device recovery, to factory-reset tens of thousands of enrolled devices simultaneously.

The lesson is not that Intune is dangerous. The lesson is that privileged access was not properly governed, identity boundaries between on-premises and cloud environments were not enforced, and monitoring either did not exist or did not trigger fast enough. All of these are configuration failures in tools organizations already owned.

The attackers did not break in through a sophisticated exploit. They walked through a door left open by an incomplete implementation.

The Preparedness Gap Is Real — and Growing

The Stryker attack is not an anomaly. It is a symptom of an industry-wide crisis that the World Economic Forum’s (WEF) Global Cybersecurity Outlook 2025 has quantified, and the numbers are sobering.

72% of organizations report that cyber risks increased in the past year — WEF Global Cybersecurity Outlook 2025
2 in 3 organizations report moderate-to-critical cybersecurity skills gaps, lacking the talent needed to meet their security requirements — WEF Global Cybersecurity Outlook 2025
54% of large organizations cite third-party and supply chain risk management as their biggest barrier to achieving cyber resilience — WEF Global Cybersecurity Outlook 2025
35% of small organizations believe their cyber resilience is inadequate — a proportion that has increased sevenfold since 2022 — WEF Global Cybersecurity Outlook 2025

These numbers describe an industry buying security and not implementing it. Organizations are acquiring the tools, but the talent, architecture, and operational discipline needed to extract full value from those investments are not keeping pace. The result is a fleet of half-deployed, partially configured tools that create a false sense of security while leaving real gaps wide open.

The Agentic AI Threat Multiplier

Attackers are not waiting for organizations to catch up. Generative AI is reshaping the cybercrime landscape at an accelerating pace, and the gap between offense and defense is widening.

47% of organizations cite the advance of adversarial AI capabilities — including AI-enhanced phishing, malware development and deepfakes — as their primary GenAI cybersecurity concern — WEF 2025
66% of organizations believe AI will have the most significant impact on cybersecurity in the next 12 months, yet only 37% have processes in place to assess the security of AI tools before deployment — WEF 2025

In an Agentic AI attack scenario, where AI autonomously chains together reconnaissance, credential harvesting, lateral movement, and execution, a monolithic, single-vendor security stack is a structural liability. If the attacker understands your provider’s architecture better than you do, and your tools are not fully configured, they will find the path of least resistance.

This is not hypothetical. It is the architecture of the Stryker attack, translated into the AI era.

Do You Have the Talent to Use What You Bought?

Before the next purchase order is signed, every security leader, technical and executive alike, should answer these questions honestly:

  • Do we have in-house expertise to fully configure and operationalize the features in our existing tools?
  • Was our tool selection driven by a holistic architecture strategy, or were tools purchased reactively to satisfy an audit requirement?
  • Are all features within our EDR/XDR, IAM, and SIEM platforms fully activated, integrated, and effectively monitored?
  • Do we have unified, normalized logging across every layer of our technology stack feeding a well-configured and monitored dashboard?
  • Is every vendor connection to our environment governed by Zero Trust principles — remote browser isolation, VPN-less access, and Just-In-Time privileged access with approval notification chains configured?

If the honest answer to any of these is ‘no’ or ‘I’m not sure,’ you are not alone — but you are exposed.

A Layered, Heterogeneous Defense: The Architecture That Holds

A monolithic, single-vendor solution may be cost-effective and operationally convenient. But in an Agentic AI threat environment, it is a single point of architectural failure. A breach that understands one vendor’s toolset can traverse your entire environment.

A heterogeneous, layered defense, built intentionally, implemented fully, and integrated across every layer of your stack, is a fundamentally different proposition for an attacker. When one protective layer is compromised, the next one holds. The following architecture has proven itself in real-world attack scenarios:

External Perimeter

  • SASE and next-generation firewall with full north-south traffic decryption and inspection, and integrated real-time defense
  • Advanced API gateways for all internet-facing applications with bot detection and agentic AI defense capabilities
  • All vendor and third-party remote access governed exclusively through remote browser isolation and VPN-less Zero Trust Network Access (ZTNA)

Internal Network

  • Switch-to-switch encryption across internal network segments
  • Micro-segmentation with east-west firewall inspection, full traffic decryption, and XDR/API integration with network admission control policies
  • Patch panel and port-level monitoring via MAC device admission control with logging and firewall integration feeding XDR

Endpoint

  • EDR/XDR deployed with all features fully activated
  • Consider stacking heterogeneous endpoint agents from different vendors — if one provider’s agent is compromised or bypassed, a second independent layer remains active

Identity and Privileged Access

  • Isolate privileged identities: on-premises admins must not carry high-privilege roles in Microsoft 365 or Entra ID — a critical lesson from the Stryker attack
  • Deploy Entra Private Access for Domain Controllers, extending Conditional Access and MFA requirements to sensitive Active Directory operations including LDAP and Kerberos
  • Implement Just-In-Time (JIT) access with approval workflows for all privileged identity management (PIM) accounts
  • Replace manual service account passwords with Active Directory Group Managed Service Accounts (gMSAs)
  • Rotate the KRBTGT password at minimum twice per year; in a breach scenario, rotate immediately — do not wait
  • Restrict all Domain Controller network access; ensure DCs cannot directly reach the internet
  • Audit and enforce strict anomaly monitoring across all security logs
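The identity-isolation and KRBTGT-rotation checks above lend themselves to a simple recurring audit script. A minimal sketch in Python, assuming you have exported lists of privileged accounts from Active Directory and from your cloud tenant; the account names and the 182-day threshold are illustrative, not real data:

```python
# Illustrative audit sketch: account names and thresholds are invented
# examples. In practice the inputs would come from directory exports.
from datetime import date

def find_boundary_violations(onprem_admins, cloud_admins):
    """Accounts privileged in BOTH realms violate identity isolation."""
    return sorted(set(onprem_admins) & set(cloud_admins))

def krbtgt_rotation_overdue(last_rotated, today, max_days=182):
    """Twice-yearly rotation means at most ~182 days between resets."""
    return (today - last_rotated).days > max_days

# Accounts appearing in both lists must be split into separate identities.
print(find_boundary_violations(
    onprem_admins={"svc-backup", "jdoe-da", "helpdesk-admin"},
    cloud_admins={"jdoe-da", "ciso-cloud", "helpdesk-admin"},
))  # → ['helpdesk-admin', 'jdoe-da']

print(krbtgt_rotation_overdue(date(2025, 1, 10), date(2025, 9, 1)))  # → True
```

Running a check like this on a schedule turns the Stryker lesson into an enforced control rather than a one-time cleanup.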

Cloud Security

  • Conduct cloud security posture reviews frequently — cloud providers release new security features continuously; newly available controls should be assessed and implemented as a priority, not deferred
  • Consider disabling Password Hash Sync to keep credential validation on-premises through pass-through authentication or federation
  • All SaaS tenant entry points should be restricted to your agency’s IP blocks, with remote-user access permitted only via browser-based isolation and ZTNA solutions, fronted by advanced application gateways or proxies

A layered, heterogeneous defense does not require unlimited budget. It requires deliberate architecture and full implementation of the tools you already own.
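The tenant entry-point restriction is, at its core, an allow-list check. A minimal sketch using Python’s standard `ipaddress` module; the CIDR blocks below are documentation-reserved placeholder ranges, not real agency addresses:

```python
# Sketch of a conditional-access style source-IP check. The CIDR ranges
# are placeholders (RFC 5737 documentation addresses).
from ipaddress import ip_address, ip_network

AGENCY_BLOCKS = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

def allowed_source(ip: str) -> bool:
    """True if the sign-in source falls inside an approved agency block."""
    addr = ip_address(ip)
    return any(addr in block for block in AGENCY_BLOCKS)

print(allowed_source("203.0.113.40"))  # → True: inside the agency block
print(allowed_source("192.0.2.17"))    # → False: must come via ZTNA instead
```

In production this logic lives in your identity provider’s conditional access policies or your application gateway, not in a script, but the rule being enforced is exactly this simple.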

How to Close the Configuration Gap Without Starting Over

1. Request a Free Implementation Assessment From Your Vendors

Most enterprise security vendors will conduct a complimentary implementation health check if asked directly. They will identify misconfigured features, unused capabilities, and integration gaps. Many will also provide staff education sessions at no additional cost. This is one of the highest-ROI actions available to any security team and it costs nothing but time.

2. Consider an MSSP With Cybersecurity Depth

If in-house talent is the constraint — and the WEF data confirms it is for the majority of organizations — a Managed Security Service Provider (MSSP) with genuine cybersecurity staff, 24/7 monitoring capabilities, and a contractual cyber retainer for incident response is not a cost; it is a force multiplier. The right MSSP partner helps you operationalize the tools you already own and ensures that someone is watching when your team cannot be.

3. Build a Unified Visibility Layer

Every device, every endpoint, every cloud workload, every network segment should feed normalized logs into a centralized, well-configured SIEM or XDR dashboard. Visibility gaps are where attackers operate undetected. Unified logging is not glamorous, but it is foundational.
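Normalizing logs from heterogeneous sources is largely a field-mapping exercise performed before SIEM ingestion. A minimal sketch, with two hypothetical sources whose field names are invented for illustration:

```python
# Sketch: map vendor-specific log fields onto one canonical schema so the
# SIEM sees uniform events. Source names and field names are illustrative.
FIELD_MAPS = {
    "edr":      {"ts": "timestamp", "host": "device", "sev": "severity"},
    "firewall": {"time": "timestamp", "src_host": "device", "level": "severity"},
}

def normalize(source: str, event: dict) -> dict:
    """Rename raw vendor fields to canonical names and tag the source."""
    mapping = FIELD_MAPS[source]
    out = {canonical: event[raw] for raw, canonical in mapping.items() if raw in event}
    out["source"] = source
    return out

print(normalize("edr", {"ts": "2025-03-11T09:14:00Z", "host": "wks-104", "sev": "high"}))
```

Real pipelines add parsing, enrichment, and schema validation on top, but without this canonical layer, cross-source correlation rules are impossible to write.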

4. Prioritize Identity Above All Else

The Stryker attack was an identity attack. The WEF report confirms that identity theft has become the top personal cyber risk for both CISOs and CEOs in 2025. If you can only harden one area this quarter, harden identity: implement JIT access, enforce MFA everywhere without exception, isolate privileged accounts, and audit every administrative role in both your on-premises and cloud environments.

5. Review Attack Anatomy Regularly

A simple way to stay ahead of attackers is to review the anatomy of real-world attacks on a regular basis. This is a free, low-effort way to identify gaps in your architecture and alerting: implement custom alerts based on the indicators of compromise each write-up surfaces, address configuration updates and hardening, and review the findings with your team or your managed service provider. Attack review should be part of day-to-day operations. You cannot protect against what you do not understand, and you cannot harden your architecture if architecture review is not an operational habit.
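Turning reviewed indicators of compromise into a custom detection can start very simply. A sketch with invented IOC values; a real deployment would express this as detection rules in your SIEM rather than a standalone script:

```python
# Sketch: scan log lines for known indicators of compromise pulled from an
# attack write-up. All IOC values here are invented examples.
IOCS = {"badcdn.example.net", "a1b2c3d4e5f6", "192.0.2.99"}

def scan_logs(lines):
    """Return (line_number, indicator) pairs for every IOC hit."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for ioc in IOCS:
            if ioc in line:
                hits.append((lineno, ioc))
    return hits

logs = [
    "GET https://cdn.example.com/app.js 200",
    "DNS query badcdn.example.net from wks-104",
]
print(scan_logs(logs))  # → [(2, 'badcdn.example.net')]
```

The value is the habit, not the script: every attack anatomy you study should leave behind at least one new alert or hardening change.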

The Bottom Line

The cybersecurity industry has a spending problem masquerading as a security problem. Organizations are acquiring tools at scale while the skills gap required to effectively implement them grows faster than the workforce can fill it. The result is a fleet of expensive, partially deployed technology that creates compliance confidence without creating actual resilience.

The WEF Global Cybersecurity Outlook 2025 reports that 49% of public-sector organizations lack the talent to meet their cybersecurity goals — an increase of 33% in a single year. The private sector is not immune.

The answer is not more tools. It is full implementation of the tools you already own, a deliberate layered heterogeneous architecture designed to survive a breach of any single component, and the operational talent — whether in-house or through a trusted partner — to run it.

You paid for the lock.

Now use it.

About the Author

Eudora Fleischman  |  Infrastructure Architect & Retired CISO

Eudora Fleischman is an infrastructure architect and retired CISO with over 31 years of experience in infrastructure architecture, cybersecurity, governance, risk, and disaster recovery management, and serves as an Advising Member of the Local Government Cybersecurity Alliance.

Sources

World Economic Forum — Global Cybersecurity Outlook 2025 (January 2025, in collaboration with Accenture)

Stryker SEC Filing & Incident Reports, March 2026


Seven Phases of AI Governance for Local Government

A Practical Framework for Agencies Beginning Their AI Journey

Government Technology  |  AI Governance  |  Public Sector

AI is already inside your agency. Vendors are embedding it into the software you use every day — permitting systems, HR platforms, records management tools, and public safety software. In most cases, no one asked. It arrived as a feature update, buried in a release note, with no governance framework in place to receive it.

At the same time, regulations are moving fast. California’s Civil Rights Council implemented binding AI employment regulations in October 2025. Dozens of additional state and local AI laws are in various stages of development. The agencies that will navigate this landscape successfully are not the ones that adopt AI fastest — they are the ones that build the right governance foundations first.

What follows is a condensed version of a practical, phased governance framework developed from real public-sector cybersecurity and advisory experience. It is designed for the city manager, IT director, HR leader, or department head who needs a clear, easy-to-understand, actionable path — not an academic exercise.

The Core Principle: Governance Before Tools

The most common mistake organizations make is adopting AI tools before establishing the governance infrastructure to use them responsibly. Policies written after deployment are reactive. Data classified after AI ingestion is too late. Vendor contracts negotiated without AI governance language leave agencies legally exposed.

This framework is sequenced deliberately. Each phase builds on the one before it. The goal is not to slow down AI adoption — it is to make adoption sustainable, defensible, and aligned with the communities these agencies serve.

Phase 1: Establish Governance Authority First

Before any technical work begins, designate who is responsible for AI governance in your organization. This may be a named AI Governance Officer, an AI Steering Committee, or a hybrid model. The specific structure matters less than the fact that accountability exists. Without it, governance activities have no center of gravity, decisions stall, and there is no clear point of contact when regulators come asking. Establish a recurring executive reporting cadence from day one, even if early reports are brief. AI governance without executive visibility, and without informed elected officials, is governance in name only.

Phase 2: Data Governance and Classification Must Come First

AI cannot operate safely or compliantly without governed, classified data. Before activating any AI feature, conduct a thorough assessment of where your data lives, how it flows, who owns it, what regulatory requirements apply to it, and which data might be inadvertently exposed to AI systems. Define sensitivity labels — public, internal, confidential, restricted — and educate every employee on how to handle data accordingly. This is the foundation. Everything else depends on it.
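The classification rule can be enforced in code before any AI ingestion takes place. A minimal sketch assuming the four-tier labeling scheme above, with a default-deny stance for unlabeled records; the record structure is illustrative:

```python
# Sketch: gate records before they reach an external AI service. Only the
# lower-sensitivity tiers pass; unlabeled data is blocked by default.
AI_ALLOWED = {"public", "internal"}

def partition_for_ai(records):
    """Split records into AI-eligible and blocked sets."""
    safe, blocked = [], []
    for rec in records:
        (safe if rec.get("label") in AI_ALLOWED else blocked).append(rec)
    return safe, blocked

records = [
    {"id": 1, "label": "public"},
    {"id": 2, "label": "restricted"},
    {"id": 3, "label": None},  # unlabeled: blocked until classified
]
safe, blocked = partition_for_ai(records)
print([r["id"] for r in safe], [r["id"] for r in blocked])  # → [1] [2, 3]
```

The default-deny posture is the key design choice: data that has never been classified must be treated as sensitive until someone says otherwise.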

Phase 3: Start with Problems, Not Tools

Conduct structured interviews with department leaders to identify real operational challenges that AI might address. For each candidate use case, assess feasibility, operational impact, risks, staffing implications, and full cost, including ongoing expenses for in-house staff or MSP consulting support. Compile the results into an AI Use Case Roadmap, have leadership formally prioritize it, and use it to drive every subsequent resource and tooling decision. Agencies that skip this step end up with duplicate tools, misaligned spending, and AI deployments that solve problems no one had. Without a clear goal, funding and resources become wasted effort.

Phase 4: Build Policy and Vendor Governance Around Your Use Cases

Only after your data governance is established and your use cases are defined should you develop formal AI governance policies. Policies written in the abstract tend to be ignored. Policies built around real, approved use cases get used. Alongside internal policy, embed AI governance requirements directly into vendor contracts, including mandatory disclosure of sub-processors and data residency/sovereignty; notification obligations if a vendor’s AI systems are attacked or compromised; and resiliency, recovery, and liability terms for discriminatory or harmful AI outputs. Agencies that assume standard vendor contracts already handle these issues are almost always wrong.

Phase 5: Organizational Change Management Is Not Optional

The most technically sound AI governance framework will fail if employees do not understand it, trust it, or know what it requires of them. Develop a layered internal communications strategy tailored to different audiences — frontline staff, supervisors, and executives each need different messages. Pair it with role-appropriate training that covers data handling requirements, how to recognize and report unexpected AI behavior, and what the governance policies actually require in practice. Change management is not soft. In local government, it is often the difference between a governance program that lives on paper and one that actually changes behavior.

Phase 6: Regulatory Compliance Requires a Monitoring Function, Not a One-Time Review

A single legal review at the time of AI deployment is not sufficient. California’s AI regulatory landscape alone — covering employment discrimination, automated decision systems, data privacy, and civil rights — is evolving continuously. Assign ongoing responsibility for regulatory monitoring, maintain an internal AI regulatory tracker, conduct periodic reviews with qualified legal counsel, and build civil rights and algorithmic fairness impact assessments into your use case evaluation process. The cost of falling behind is not theoretical: it includes legal exposure, audit findings, loss of public trust, and — in systems that affect critical services — real harm to real people.

Phase 7: Build AI-Specific Incident Response Before You Need It

Traditional cybersecurity incident response plans were not designed for AI-specific failure modes: model poisoning, data corruption through adversarial inputs, rogue agentic AI behavior, or model collapse. Develop AI-specific runbooks that define what normal AI behavior looks like, how anomalies are detected, what triggers a shutdown or rollback of AI agent access and actions, and what the communication obligations are internally and externally. Complement this with red-team and blue-team exercises and annual tabletop simulations that involve IT, legal, HR, and leadership together. The organizations that respond well to AI incidents are the ones that have rehearsed them.
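A shutdown trigger from such a runbook can be as simple as a rate guard: if an agent performs more actions inside a time window than its defined baseline allows, it is halted for human review. A minimal sketch; the thresholds are illustrative assumptions, not any specific platform’s API:

```python
# Sketch: halt an AI agent whose action rate exceeds its baseline. The
# max_actions/window values are invented; real deployments would tune them
# against observed normal behavior and wire the guard to the audit stream.
from collections import deque

class ActionRateGuard:
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = deque()

    def record(self, timestamp: float) -> bool:
        """Register one agent action; return True if the agent should halt."""
        self.events.append(timestamp)
        # Drop actions that have aged out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_actions

guard = ActionRateGuard(max_actions=5, window_seconds=60.0)
halted = [guard.record(t) for t in (0, 5, 10, 15, 20, 25, 30)]
print(halted)  # the sixth and seventh actions trip the guard
```

This is the kind of control that would have mattered in the Stryker scenario: a legitimate management feature invoked at an abnormal rate is exactly the anomaly a runbook should define in advance.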

This Is a Governance Model, Not a Checklist

What distinguishes this framework from a deployment checklist is its cyclical nature. Each phase requires a scheduled review cadence, named accountability, and ongoing adaptation. Monthly executive reporting, quarterly roadmap reviews, annual red team exercises, and full governance audits — these are not nice-to-haves. They are what separates a governance program that holds up under regulatory scrutiny from one that looks good on paper until something goes wrong.

Local government agencies are under real pressure to demonstrate responsible AI stewardship to their constituents, their oversight bodies, and their regulators. The agencies that build this foundation now — before the pressure becomes a crisis — will be far better positioned to capture the genuine operational benefits that AI can deliver, while protecting the public trust that makes those agencies effective in the first place.

Responsible AI adoption is not primarily a technology challenge. It is a governance, accountability, and culture challenge. Technology is the easy part.

About the Author

Eudora Fleischman  |  Managing Director of Artemis Technology Advisors and Retired Infrastructure & Cybersecurity Manager and CISO. Eudora is a government cybersecurity and AI governance advisor with 31 years of technical and leadership experience, 21 of them spent working with public-sector organizations on cybersecurity GRC, data security, regulatory compliance, organizational resilience, disaster recovery, and responsible technology adoption.