Executive Summary
RSA Conference 2026 dominated the week, delivering an industry consensus that agentic AI governance is the defining security challenge of 2026 while simultaneously revealing that most organizations lack even basic agent containment controls. The conference’s urgency was underscored by a real-world supply chain attack on LiteLLM, a widely used AI proxy gateway, which demonstrated that the open-source AI infrastructure layer has become critical enterprise attack surface defended with an inadequate security posture. On the regulatory front, multiple compliance deadlines converged: NIS2 enforcement went live in Poland, Finland’s essential entity compliance deadline passed on March 31, and the DORA Register of Information submission window closed. Meanwhile, NIST released new CSF 2.0 Quick-Start Guides and the Cloud Security Alliance launched a dedicated foundation for securing the agentic control plane.
This report covers strategic IT security topics for executive leadership. For tactical CPS/ICS vulnerabilities, see the CPS Threat Intelligence report. For ransomware incidents, see the Ransomware Intelligence report.
Week of March 20 - March 27, 2026
Regulatory and Compliance
The week’s most significant regulatory development was the convergence of multiple enforcement deadlines across jurisdictions. Poland’s amendment to its National Cybersecurity System Act, signed by the President on February 19, became applicable in late March following its one-month vacatio legis period. The law replaces previous classifications with “essential entities” and “important entities” under NIS2, with a registration deadline of October 3, 2026 and full compliance required by April 3, 2027. Enhanced management liability provisions mean that boards must now ensure implementation and oversight of cybersecurity risk management, with penalties reaching 10 million EUR or 2 percent of annual revenue for essential entities. Finland’s March 31 deadline for essential entities to achieve full NIS2 compliance under its Cybersecurity Act 124/2025 brought approximately 5,500 organizations into scope, making Finland one of the earliest EU member states to reach the active enforcement phase.
The DORA Register of Information submission deadline on March 31 represented the first operational test of the regulation’s third-party risk management provisions. Under Article 28, every financial entity must maintain and submit a comprehensive register documenting all contractual arrangements with ICT third-party service providers. National competent authorities consolidate these submissions and forward them to the European Supervisory Authorities. The submission requirement shifts DORA from theoretical compliance to active regulatory engagement, and the patchwork of enforcement approaches across member states creates additional complexity for multinational financial institutions.
In the United States, the General Services Administration’s IT Security Procedural Guide for Protecting Controlled Unclassified Information in Nonfederal Systems continued to draw analysis. Published in January with minimal notice or comment period, the guide creates CMMC-like requirements for civilian government contractors, including implementation of NIST SP 800-171 Revision 3 security controls, nine “showstopper” controls that must be fully implemented before authorization, and a one-hour cyber incident reporting requirement that is far more aggressive than the DoD’s 72-hour window. The flow-down of CUI security requirements to subcontractors amplifies the compliance burden across the federal civilian supply chain.
CISA’s final CIRCIA incident reporting rule, originally expected in October 2025 and subsequently delayed to May 2026, faces further uncertainty after the agency postponed planned town hall meetings due to a lapse in DHS appropriations. Once finalized, CIRCIA will require 72-hour reporting of covered cyber incidents and 24-hour reporting of ransomware payments, but the compounding delays have left critical infrastructure operators in a prolonged state of regulatory ambiguity.
NIST released two new CSF 2.0 Quick-Start Guides on March 23: SP 1308, addressing the intersection of cybersecurity, enterprise risk management, and workforce management, and a CSF 2.0 Informative References Quick-Start Guide open for public comment until May 6. NIST also published reflections from its Second Cyber AI Profile Workshop, signaling ongoing work to align cybersecurity frameworks with AI governance requirements. The forthcoming NIST Privacy Framework Version 1.1, aligning with CSF 2.0, elevates governance to a cross-cutting standalone function with a new section on AI and privacy risk management.
The UK’s Cyber Security and Resilience Bill passed its Report Stage in the House of Commons and advanced to the House of Lords, with Royal Assent expected later in 2026. The bill expands the scope of NIS Regulations 2018, introduces supply chain cyber risk duties for operators of essential services, and establishes penalties of up to 17 million GBP or 4 percent of global turnover for serious breaches. For organizations already navigating NIS2 and DORA, the UK bill adds another jurisdiction-specific framework that demands harmonized compliance processes.
On the privacy enforcement front, the California Privacy Protection Agency continued its aggressive enforcement posture with a $1.1 million fine against PlayOn Sports for tracking without proper opt-out mechanisms, notably the first action involving school-facing digital platforms, and a $375,000 decision against Ford Motor Company for creating unnecessary friction in opt-out processing. The CPPA’s enforcement pattern signals a shift from paper compliance to technical adequacy, examining whether back-end systems actually honor user preferences rather than merely displaying the required interface elements.
AI Governance and Agentic AI
RSAC 2026, held March 23–26 in San Francisco, was overwhelmingly focused on agentic AI, with approximately 40 percent of the entire agenda AI-weighted and AI no longer relegated to a separate track but pervading every session. The conference produced the clearest picture yet of the gap between agentic AI deployment velocity and governance maturity. Research presented at the event revealed that 63 percent of organizations cannot enforce purpose limitations on their AI agents, 60 percent cannot terminate a misbehaving agent, 55 percent cannot isolate AI systems from their broader networks, and 43 percent use shared service accounts for agents with no individual identity. No single organizational function has claimed clear ownership of AI agent access, creating an accountability vacuum that attackers can exploit.
The Cloud Security Alliance responded to this governance gap by launching CSAI, a new 501(c)(3) non-profit foundation dedicated exclusively to AI security and safety, with a strategic mission of “Securing the Agentic Control Plane.” CSA’s research found that 85 percent of organizations use AI agents in production environments, spanning task automation (67 percent), research (52 percent), developer assistance (50 percent), and security monitoring (50 percent), yet 68 percent cannot clearly distinguish between human and AI agent activity in their environments. The non-human identity perimeter, where service principals, secrets, and autonomous agents outnumber human users by 100 to 1, has become the defining security challenge for identity governance teams.
Multiple vendors launched agentic AI governance solutions at the conference. Astrix Security expanded its Agent Control Plane with a real-time policy engine scoped by user, department, agent platform, and resource type. Entro Security introduced Agentic Governance and Administration, extending identity governance to AI agents. Cisco announced security tools reimagined for the “agentic workforce,” including a Zero Trust for AI Agents framework that focuses on action control rather than just access control, monitoring what agents actually do after gaining access. Microsoft brought its Security Store directly into Microsoft Entra, offering more than 15 identity agents powered by Security Copilot. The industry is converging on “just-in-time governance” where every agent action is authorized in real time based on live context rather than static permissions.
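The just-in-time model described above can be sketched in a few lines of Python. This is an illustrative example only; the `AgentContext` and `Policy` types and the `authorize` function are invented for this sketch and do not correspond to any vendor’s API. The point it demonstrates is that the same standing grant yields different decisions as live context (here, a behavioral risk score) changes:

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    agent_id: str          # individual agent identity, not a shared service account
    department: str
    resource: str
    action: str
    risk_score: float      # live signal, e.g. from behavioral monitoring

@dataclass
class Policy:
    department: str
    resource: str
    allowed_actions: frozenset
    max_risk: float

def authorize(ctx: AgentContext, policies: list[Policy]) -> bool:
    """Allow the action only if a matching policy exists AND the live
    risk score is within bounds at the moment of the call."""
    return any(
        p.department == ctx.department
        and p.resource == ctx.resource
        and ctx.action in p.allowed_actions
        and ctx.risk_score <= p.max_risk
        for p in policies
    )

policies = [Policy("finance", "ledger-db", frozenset({"read"}), 0.5)]

# Same agent, same grant: the decision flips as live context changes.
calm = AgentContext("agent-7", "finance", "ledger-db", "read", 0.2)
spiky = AgentContext("agent-7", "finance", "ledger-db", "read", 0.9)
print(authorize(calm, policies))   # True
print(authorize(spiky, policies))  # False
```

Contrast this with a static-permission model, where the grant would be evaluated once at provisioning time and both calls would succeed.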
McKinsey’s 2026 AI Trust Maturity Survey, published in March, found that average Responsible AI maturity scores increased to 2.3 out of 5, up from 2.0 in 2025, but only one-third of organizations report maturity level 3 or above in strategy, governance, and agentic AI governance specifically. The survey identified a fundamental shift in the nature of AI risk: organizations must now worry not just about AI “saying the wrong thing” but “doing the wrong thing” — taking unintended actions, misusing tools, or operating beyond defined guardrails. This distinction between generative risk and agentic risk requires fundamentally different governance architectures.
The federal-state AI regulation standoff continued to intensify. The Commerce Department’s evaluation of “onerous” state AI laws, due March 11 under Trump’s December 2025 executive order, has not been published. Meanwhile, 45 states have introduced 1,561 AI-related bills in 2026, already surpassing the full-year 2024 total. Bruce Schneier published an essay arguing that AI regulation will emerge as a key voter concern in the US midterms, noting that over 70 percent of likely voters favor state and federal regulators having a role in AI policy, a finding that complicates the administration’s push for industry self-governance.
Board-Level Risk and CISO Strategy
The EY Cybersecurity Roadmap Study findings, released ahead of RSAC, gained additional context during the conference as 97 percent of surveyed senior security leaders agreed that competitive advantage will be directly tied to agentic AI cybersecurity maturity. The study projects that the share of organizations dedicating at least a quarter of their cybersecurity budget to AI security will rise from 9 percent to 48 percent, a strategic bet that boards are increasingly willing to make even as governance frameworks lag behind spending commitments. CSO Online’s pre-conference analysis identified five trends that should top CISO agendas at RSAC: agentic AI identity management, platform consolidation, post-quantum readiness, supply chain resilience, and the convergence of security operations and risk management into unified platforms.
The cyber insurance market’s current buyer-favorable dynamics create a narrow strategic window. US pricing remains essentially flat, with early indicators of deceleration in the rate of market softening, while healthcare represents a notable exception with carriers implementing single-digit rate increases. However, carriers are tightening requirements around AI governance controls as a condition of favorable pricing. Organizations that can demonstrate mature AI security programs, including agent identity management, shadow AI governance, and automated threat detection, are securing improved terms, while those without AI-specific controls face growing premium pressure. The global cyber insurance market is projected to reach $118.97 billion by 2032, driven by ransomware risk, regulatory pressure, and AI-driven underwriting innovation.
The Fortinet 2026 Cloud Security Report, produced with Cybersecurity Insiders and based on a survey of 1,163 senior cybersecurity leaders, quantified the “complexity gap” between cloud adoption velocity and security team capacity. The report found that 88 percent of organizations now operate across hybrid or multi-cloud environments, yet 74 percent report an active shortage of qualified cybersecurity professionals and 59 percent remain in the early stages of cloud security maturity. Two-thirds of experts surveyed lack strong confidence in their ability to detect and respond to cloud threats in real time. The report found that if starting from scratch, 64 percent of respondents would design their cybersecurity strategy around a single-vendor platform uniting network, cloud, and application security, a finding that validates the ongoing platform consolidation trend.
Cloud Security Posture
The most consequential cloud security development of the week was Google’s formal completion of its $32 billion acquisition of Wiz, with the Wiz team joining Google Cloud while retaining the Wiz brand. For the multi-cloud security market, the acquisition creates both opportunity and risk. Wiz’s graph-powered platform, which provides visibility across AWS, Azure, Google Cloud, and other environments, was valued precisely for its cloud-agnostic posture. Its absorption into a single hyperscaler’s ecosystem raises questions about whether that neutrality will endure. Organizations relying on Wiz for cross-cloud security monitoring should evaluate the trajectory carefully and maintain contingency options for critical CSPM and CNAPP capabilities.
Wiz announced the AI Application Protection Platform (AI-APP) at RSAC, designed to provide full visibility into models, agents, and data flows across AWS Bedrock, Azure AI, and Vertex AI, with protections against prompt injection and shadow AI by mapping complex attack paths and correlating cloud-native risks in a single graph-powered platform. The launch reflects the convergence of cloud security posture management and AI security into a unified discipline.
Infosecurity Magazine reported a continued shift in cloud attack methodology. For the first time in Google Cloud’s Threat Horizons series, threat actors are exploiting third-party software vulnerabilities more frequently than weak or missing credentials as their initial access vector into cloud environments. The exploitation window has compressed from weeks to days, demanding that organizations move from periodic patching cadences to continuous exposure management. The finding that 97 percent of organizations experienced an identity or network access incident in the past year, with 70 percent reporting incidents tied to AI-related activity, underscores that cloud security posture can no longer be evaluated separately from identity governance and AI agent management.
Identity, Access Management and Zero Trust
RSAC 2026 cemented identity as the primary attack surface for modern enterprises. Microsoft’s Entra innovations, announced at the conference, reflect the platform’s evolution toward an identity fabric that spans human users, AI agents, and machine identities within a single governance model. The introduction of identity agents powered by Security Copilot and the integration of a Security Store directly into the Entra portal signal Microsoft’s intent to make identity governance the operational center of enterprise security.
The conference highlighted a critical gap in identity management: while 83 percent of organizations now use centralized identity providers to enforce conditional access, the governance of non-human identities remains fragmented. The CSA’s finding that organizations manage a non-human identity perimeter where service principals, secrets, and autonomous agents outnumber human users by 100 to 1 was reinforced by multiple vendor announcements targeting this gap. AccuKnox launched AI-Security 2.0, positioning its platform as an identity-powered zero trust framework built specifically for securing AI models, agents, and data. Keeper Security celebrated 18 industry award wins at the conference, reflecting the company’s focus on unified, cloud-native identity security and privileged access protection.
Cisco’s Zero Trust for AI Agents framework, announced at RSAC, represents a conceptual advance beyond traditional access control. The framework focuses on action control — monitoring what AI agents actually do after gaining access — rather than simply gating initial authentication. This distinction matters because agentic AI systems routinely perform sequences of actions across multiple services that resemble legitimate administrative activity, making traditional binary access decisions insufficient. The framework’s emphasis on continuous behavioral monitoring aligns with the broader industry movement toward real-time risk evaluation for every interaction, whether initiated by a human or an autonomous agent.
Vendor and Supply Chain Risk
The LiteLLM supply chain attack on March 24 was the week’s most significant operational security incident with strategic implications. LiteLLM, a widely used Python LLM proxy gateway downloaded approximately 3.4 million times per day, was compromised by threat actor TeamPCP through a sophisticated multi-hop attack chain. The attackers first compromised Aqua Security’s Trivy vulnerability scanner on March 19, then pivoted to Checkmarx’s code analysis platform, and used that foothold to obtain PyPI publishing credentials of a LiteLLM maintainer. Malicious versions 1.82.7 and 1.82.8 contained a multi-stage credential stealer targeting SSH keys, cloud provider tokens, Kubernetes secrets, cryptocurrency wallets, and environment files. The exposure window was approximately three hours before removal from PyPI.
The strategic significance of this attack extends beyond its immediate impact. LiteLLM sits at the center of the AI stack as a gateway between applications and multiple LLM providers, representing exactly the kind of AI infrastructure dependency that most organizations have not inventoried, let alone secured. As Trend Micro’s analysis noted, the attack demonstrated that the open-source AI supply chain is now critical enterprise infrastructure “defended with the security posture of a side project.” For CISOs, the LiteLLM incident demands an immediate inventory of AI infrastructure dependencies, including LLM gateways, model proxies, orchestration frameworks, and agent platforms, with the same rigor applied to traditional software supply chain management.
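As a starting point for such an inventory, a minimal sketch like the following can flag known-compromised releases already installed in a Python environment. The `KNOWN_BAD` map and `audit_environment` helper are hypothetical names invented here; the litellm versions come from the incident described above, and a real program would pull advisory data from a vulnerability feed rather than a hardcoded dict:

```python
# Illustrative sketch only: flag known-compromised AI-stack package versions
# installed in the current Python environment.
from importlib import metadata

KNOWN_BAD = {
    # Malicious releases from the March 24 LiteLLM incident
    "litellm": {"1.82.7", "1.82.8"},
}

def audit_environment() -> list[str]:
    """Return 'name==version' strings for installed packages that match a
    known-compromised release."""
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_BAD.get(name, set()):
            findings.append(f"{name}=={dist.version}")
    return findings

if __name__ == "__main__":
    hits = audit_environment()
    print(hits or "no known-compromised AI dependencies found")
```

A scan like this only catches packages already known to be bad; pairing it with hash-pinned requirements files and dependency provenance checks addresses the window before an advisory exists.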
The attack also revealed the interconnected nature of modern supply chain compromises. TeamPCP’s multi-hop approach, moving from a vulnerability scanner to a code analysis platform to a machine learning proxy, illustrates how adversaries exploit trust relationships between security and development tools rather than targeting application code directly. The compromised tools were themselves security products, creating a recursive vulnerability that undermines the integrity of the very systems organizations use to validate their supply chains.
Industry Surveys and Research
The McKinsey 2026 AI Trust Maturity Survey provides the most granular assessment of enterprise AI governance readiness this cycle. The finding that only one-third of organizations achieve maturity level 3 or above in agentic AI governance, despite 85 percent deploying agents in production, quantifies the governance debt that the industry is accumulating. McKinsey’s distinction between generative AI risk (outputs that are wrong or harmful) and agentic AI risk (actions that are unintended or exceed guardrails) offers a useful framework for boards seeking to understand why their existing AI governance programs may be inadequate for the agents they are deploying.
The Fortinet cloud complexity gap data deserves board-level attention for a specific reason: the 64 percent of respondents who would choose a single-vendor platform if starting fresh represents a market signal that CISOs should interpret carefully. Platform consolidation offers operational simplicity but concentrates risk, and the Google-Wiz acquisition illustrates how hyperscaler absorption of independent security vendors can shift that equation rapidly. Boards should ensure that platform consolidation decisions include explicit vendor concentration risk assessment and contingency planning.
The CSA’s finding that the prevalence of workloads that are simultaneously publicly accessible, critically vulnerable, and highly privileged has declined from 38 percent in early 2024 to 29 percent by mid-2025 offers a rare positive data point. The improvement is driven largely by widespread adoption of identity best practices, with 83 percent of organizations now using centralized identity providers. However, this progress predates the explosion of agentic AI deployments, and the 68 percent of organizations that cannot distinguish human from AI agent activity may find that their improved baseline is eroded by a new category of identity risk they are not yet measuring.
Strategic Recommendations
Inventory and secure AI infrastructure dependencies immediately. The LiteLLM supply chain attack demonstrated that AI-layer dependencies including LLM gateways, model proxies, orchestration frameworks, and agent platforms represent critical infrastructure that most organizations have not mapped. Conduct an AI-specific supply chain audit, identify single points of failure in the AI stack, and implement monitoring for dependency integrity with the same rigor applied to traditional software supply chains.
Close the agentic AI governance gap revealed at RSAC. With 63 percent of organizations unable to enforce purpose limitations on AI agents and 60 percent unable to terminate misbehaving agents, the governance deficit is no longer theoretical. Evaluate solutions announced at RSAC from vendors like Astrix, Entro, Cisco, and Microsoft for agent discovery, identity governance, and runtime behavioral monitoring. At minimum, establish the ability to identify all AI agents operating in the environment, enforce individual agent identities rather than shared service accounts, and terminate agent activity when anomalous behavior is detected.
Prepare for the Q2 2026 compliance compression. CIRCIA final rules are expected in May, SEC Regulation S-P cybersecurity amendments for smaller entities take effect June 3, NIST CSF 2.0 comment periods close May 6, and the EU CRA vulnerability reporting obligations begin September 11. Map each deadline against your organization’s current compliance posture and allocate resources now rather than addressing each obligation serially.
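A simple way to operationalize that mapping is a "days remaining" view over the dated obligations above, sketched here in Python. The `DEADLINES` map and `days_remaining` helper are illustrative names only; the dates come from this report, and the CIRCIA rule is omitted because its timing remains uncertain:

```python
from datetime import date

# Dated Q2-Q3 2026 obligations named in this report (CIRCIA omitted:
# no firm date has been published).
DEADLINES = {
    "NIST CSF 2.0 comment period closes": date(2026, 5, 6),
    "SEC Reg S-P (smaller entities)": date(2026, 6, 3),
    "EU CRA vulnerability reporting begins": date(2026, 9, 11),
}

def days_remaining(today: date) -> dict[str, int]:
    """Map each obligation to the number of days until its deadline
    (negative once the deadline has passed)."""
    return {name: (d - today).days for name, d in DEADLINES.items()}

print(days_remaining(date(2026, 3, 27)))
```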
Evaluate vendor concentration risk in cloud security. The Google-Wiz acquisition fundamentally changes the multi-cloud security landscape. Organizations using Wiz for cloud-agnostic security should assess whether the platform’s cross-cloud neutrality will endure under Google ownership and identify alternative CSPM and CNAPP providers for critical capabilities. More broadly, the Fortinet finding that 64 percent would choose single-vendor platforms validates consolidation but demands explicit risk assessment of the concentration it creates.
Engage with NIST’s evolving AI-cybersecurity alignment. The CSF 2.0 Quick-Start Guides released March 23 and the Cyber AI Profile Workshop reflections signal that NIST is actively building bridges between cybersecurity frameworks and AI governance. Organizations should submit comments on the CSF 2.0 Informative References Guide before May 6 and monitor the Privacy Framework Version 1.1 finalization, particularly its new AI and privacy risk management provisions.
Sources Referenced
RSS/Primary Sources
- CSO Online — Five trends for CISO RSAC 2026 agendas
- Infosecurity Magazine — Cloud attack methodology shift, RSAC coverage
- Schneier on Security — AI as 2026 midterm election issue
- GovInfoSecurity — CISA CIRCIA delays, GSA CUI requirements
- Lawfare Blog — Trump cyber strategy analysis
- IAPP — State breach notification updates
Web Search Discoveries
- SiliconANGLE — Agentic AI governance gaps at RSAC 2026
- SiliconANGLE — AI agent identity as enterprise priority
- TechRepublic — RSAC 2026 agentic AI governance insights
- Microsoft Security Blog — Secure agentic AI end-to-end
- Microsoft Entra Blog — Entra innovations at RSAC 2026
- Cloud Security Alliance — CSAI Foundation launch
- Cloud Security Alliance — AI agent vs. human activity distinction
- McKinsey — State of AI Trust in 2026
- EY Newsroom — Cybersecurity Roadmap Study
- Fortinet Blog — 2026 Cloud Security Complexity Gap Report
- Trend Micro — LiteLLM Supply Chain Compromise analysis
- LiteLLM — Official security update
- Help Net Security — Astrix AI agent security platform
- NIST — CSF 2.0 Quick-Start Guides
- Addleshaw Goddard — Poland NIS2 implementation
- Thomas Murray — DORA Register of Information guidance
- Holland & Knight — GSA CUI security requirements
- CyberScoop — CIRCIA rule delays
- Security Boulevard — UK Cyber Security and Resilience Bill
- Troutman Pepper Locke — State AI law tracker
- Ropes & Gray — Federal push to override state AI regulation