CIO/CISO ITsec Summary, Week 06, 2026

Week 6 saw Gartner name agentic AI oversight and post-quantum cryptography among its top six cybersecurity trends for 2026, OMB rescind Biden-era software attestation requirements in favor of a risk-based model, and the EU unconditionally approve Google’s $32B Wiz acquisition — while the SEC’s Regulation S-P compliance deadline arrived for large firms and CISA launched a new insider threat framework amid its own workforce reductions.
itsec
Published February 11, 2026

Executive Summary

The week of January 30 – February 6, 2026 was shaped by three converging forces: a landmark Gartner report naming agentic AI oversight and post-quantum cryptography among the top six cybersecurity trends for 2026, the Trump administration’s OMB Memorandum M-26-05 rescinding Biden-era software attestation mandates in favor of agency-led risk-based approaches, and the EU’s unconditional approval of Google’s $32 billion Wiz acquisition. The SEC’s Regulation S-P compliance deadline arrived for large firms on February 2, while CISA released a new POEM insider threat framework — even as the agency itself operates with one-third fewer staff. Gartner’s survey data revealed that 57% of employees use personal GenAI accounts for work, with 33% inputting sensitive data into unapproved tools, reinforcing the urgency of shadow AI governance. Meanwhile, Sweden activated its NIS2 transposition, DORA’s first Register of Information submissions came due, and the UK’s ICO published landmark guidance on agentic AI data-protection obligations — signaling that regulators worldwide are closing in on autonomous-system accountability.

This report covers strategic IT security topics for executive leadership. For tactical CPS/ICS vulnerabilities, see the CPS Threat Intelligence report. For ransomware incidents, see the Ransomware Intelligence report.


Week of January 30 – February 6, 2026

Regulatory and Compliance

US Cybersecurity Information Sharing Act expires. The temporary reauthorization of CISA 2015 lapsed on January 30, 2026. The law provided liability protections, FOIA exemptions, and privilege safeguards for organizations sharing cyber-threat indicators with the federal government and private-sector peers. Congress had extended it in November 2025 as a stopgap, but Senator Rand Paul’s opposition — demanding anti-censorship amendments — blocked a permanent renewal. Organizations that relied on CISA 2015’s safe harbor for sharing threat intelligence should consult legal counsel on their continued exposure. (Data Protection Report, Goodwin)

White House previews six-pillar national cyber strategy. National Cyber Director Sean Cairncross outlined six pillars of the forthcoming national cybersecurity strategy: (1) shaping adversary behavior through deterrence, (2) securing and modernizing the federal government, (3) securing critical infrastructure, (4) streamlining the regulatory environment with industry, (5) maintaining dominance in emerging technologies, and (6) closing the cyber workforce gap. The strategy — described as approximately five pages — emphasizes offense, industry partnership, and regulation that “follows function rather than compliance checklists.” A parallel AI security policy framework is being developed jointly with the Office of Science and Technology Policy. (MeriTalk, Nextgov, Federal News Network)

DHS prepares ANCHOR to replace CIPAC. The Department of Homeland Security is finalizing the Alliance of National Councils for Homeland Operational Resilience (ANCHOR), replacing the disbanded Critical Infrastructure Partnership Advisory Council. ANCHOR will provide a more flexible, transparent framework for government-industry collaboration on infrastructure protection, including broader operational-technology (OT) discussions. (CyberScoop, Cybersecurity Dive)

Sweden activates NIS2 Cybersecurity Act. Sweden’s new Cybersecurity Act (2025:1506), transposing the EU NIS2 Directive, entered into force on January 15, 2026. It imposes a whole-entity approach — security obligations extend across entire organizations in 18 covered sectors — with immediate notification requirements for covered operators. Organizations operating in Sweden with 50 or more employees or €10M+ in annual turnover should verify their compliance status. (Deloitte Sweden, Mannheimer Swartling)

DORA Register of Information submissions due. The first annual Register of Information (ROI) reporting window under the EU’s Digital Operational Resilience Act (DORA) opened on January 1, with national regulator deadlines falling through March 21, 2026. Financial institutions must submit detailed documentation of all ICT third-party arrangements, including subcontractors and evidence of ongoing risk mitigation, to their respective national competent authorities. Non-compliant entities face potential fines of up to 2% of total annual worldwide turnover. (Thomas Murray, Lexology)

NYDFS strengthens third-party risk expectations. New York’s Department of Financial Services issued updated Industry Letter guidance clarifying lifecycle-based third-party cybersecurity risk management requirements under 23 NYCRR Part 500. The guidance signals enhanced enforcement scrutiny of vendor relationships, with compliance required in the April 15, 2026 Annual Certification. (Ropes & Gray, Greenberg Traurig)

UK Data Use and Access Act provisions take effect. Section 138 of the UK Data (Use and Access) Act 2025 became active on February 6, creating new criminal offences for creating or requesting AI-generated intimate images without consent, including deprivation-of-device orders. While primarily a personal-harm provision, it establishes precedent for criminal liability tied to AI-generated content. (Clifford Chance, Hunton Andrews Kurth)

CIRCIA rulemaking delayed to May 2026. CISA postponed publication of the final Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) rule, now expected in May 2026. Once effective, it will mandate 72-hour incident notification and 24-hour ransomware-payment disclosure for critical-infrastructure entities. (SecurityWeek)

OMB rescinds Biden-era software attestation mandate. On January 23, OMB Director Russell T. Vought issued Memorandum M-26-05, rescinding prior memoranda (M-22-18 and M-23-16) that required federal agencies to collect the Secure Software Development Attestation Form from software vendors. The new memo characterizes the prior approach as imposing “unproven and burdensome software accounting processes that prioritized compliance over genuine security investments.” Agencies may now choose from a menu of risk-based assurance options — including the Attestation Form, SBOMs, or other mechanisms aligned with NIST SP 800-218 — at their discretion. While reducing mandatory compliance burden, the shift introduces uncertainty: vendors who invested in attestation processes face unclear expectations, and agencies with limited cybersecurity expertise may lack the capacity to design effective risk-based alternatives. Organizations selling software to the federal government should engage with individual agency requirements as they evolve. (Wiley, Dark Reading, SC Media)

SEC Regulation S-P deadline arrives for large firms. The February 2, 2026 compliance deadline for the 2024 amendments to SEC Regulation S-P took effect for larger entities, including registered investment advisors (RIAs) with $1.5 billion or more in AUM. The amendments require enhanced incident response programs, customer notification procedures, and strengthened safeguards for customer information. The SEC’s 2026 examination priorities emphasize cybersecurity governance, identity theft prevention, vendor oversight, and preparedness for AI-driven threats. Examiners will review data-loss prevention, access controls, account management, and incident response — including ransomware preparedness. (SEC, National Law Review)

CISA launches POEM insider threat framework. CISA released a new infographic titled “Assembling a Multi-Disciplinary Insider Threat Management Team,” introducing the Plan, Organize, Execute and Maintain (POEM) framework for critical infrastructure entities and state/local governments. The framework emphasizes cross-functional team composition (HR, legal, security, IT, threat analysts) and coordination with external partners including law enforcement. The timing is notable given CISA’s own workforce reductions and broader concerns about insider threat risks amplified by AI tools. (Infosecurity Magazine, Industrial Cyber)

EU proposes revised Cybersecurity Act with supply chain security framework. On January 20, the European Commission published a proposal for a revised Cybersecurity Act (CSA 2) as part of a broader cybersecurity package. The most significant addition is the EU’s first horizontal framework for ICT supply chain security, introducing mechanisms to identify, restrict, or exclude suppliers considered “high-risk” across 18 critical sectors based on both technical and non-technical factors — including potential influence of third states over certain suppliers. The revised certification framework will allow entities to certify their overall cyber posture (not just individual products) and use certificates to demonstrate NIS2 compliance. The proposal now enters European Parliament and Council negotiations expected throughout 2026. (Help Net Security, Global Policy Watch, Bird & Bird)

US state AI regulation accelerates. New York’s RAISE Act, signed by the governor, requires AI companies to publish protocols for safe AI development and report critical safety incidents. California’s SB 53, the nation’s first frontier AI safety law, targets catastrophic harms such as biological-weapon development and large-scale cyberattacks. Meanwhile, Colorado’s SB24-205 — originally set for February 1, 2026 enforcement — was delayed to June 30, 2026 by a special-session bill, giving organizations more time to implement high-risk AI risk management and impact assessment requirements. (MIT Technology Review, Colorado General Assembly, Clark Hill)

New York AI transparency bill advances. The New York Assembly passed A3411 on January 28, requiring generative AI systems to display conspicuous warnings that outputs may be inaccurate. The bill moves to the Senate Internet and Technology Committee. (IAPP, NY State Legislature)

IAPP: Global privacy coverage reaches 80% of world population. The IAPP updated its Global Privacy Law and DPA Directory, finding that 179 of 240 analyzed jurisdictions now have data protection frameworks in place, covering over 6.6 billion people (approximately 80% of the world’s population). The European Commission has proposed significant GDPR reforms nearly 10 years after passage, while US state privacy enforcement is poised for an uptick as new California privacy measures and comprehensive privacy laws in Indiana and Kentucky took effect January 1. (IAPP)

AI Governance and Agentic AI

Gartner names six top cybersecurity trends for 2026. On February 5, Gartner published its annual top cybersecurity trends, identifying six forces reshaping the field: (1) Agentic AI demands cybersecurity oversight — no-code/low-code platforms and “vibe coding” are driving unmanaged AI agent proliferation and unsecured code; (2) Global regulatory volatility drives cyber resilience — regulators are increasingly holding boards and executives personally liable for compliance failures; (3) Post-quantum computing moves into action plans — organizations must adopt post-quantum cryptography now to counter “harvest now, decrypt later” attacks, with asymmetric cryptography projected unsafe by 2030; (4) IAM must adapt to AI agents — identity registration, credential automation, and policy-driven authorization for machine actors require fundamental rethinking; (5) AI-driven SOC solutions destabilize operational norms — creating staffing pressures and upskilling demands; (6) GenAI will break traditional cybersecurity awareness — a Gartner survey of 175 employees found 57% use personal GenAI accounts for work and 33% input sensitive information into unapproved tools. (Gartner, Digit, ARN)
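A first practical step toward the post-quantum action plans Gartner describes is a cryptographic inventory triaged by data lifetime, since long-lived secrets are the ones exposed to “harvest now, decrypt later” capture. A minimal sketch, in which the inventory format and prioritization rules are illustrative assumptions rather than Gartner’s methodology:

```python
# Sketch: triage a cryptographic inventory for "harvest now, decrypt later"
# exposure. Inventory schema and risk rules are illustrative assumptions.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}  # classical asymmetric schemes

def triage(inventory: list[dict]) -> list[dict]:
    """Flag assets still relying on quantum-vulnerable asymmetric cryptography."""
    findings = []
    for asset in inventory:
        if asset["algorithm"] in QUANTUM_VULNERABLE:
            findings.append({
                "asset": asset["name"],
                "algorithm": asset["algorithm"],
                # Long-lived confidential data is the priority: it can be
                # captured today and decrypted once a cryptanalytically
                # relevant quantum computer exists.
                "priority": "high" if asset.get("data_lifetime_years", 0) >= 5 else "medium",
            })
    return findings

inventory = [
    {"name": "vpn-gw",     "algorithm": "RSA",    "data_lifetime_years": 10},
    {"name": "ci-signing", "algorithm": "ML-DSA", "data_lifetime_years": 1},
    {"name": "tls-edge",   "algorithm": "ECDH",   "data_lifetime_years": 2},
]
for finding in triage(inventory):
    print(finding)
```

The point of the `data_lifetime_years` field is that migration ordering should follow how long the protected data must stay confidential, not which system is easiest to touch.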

Microsoft Cyber Pulse: 80% of Fortune 500 use active AI agents. Microsoft’s February 2026 Cyber Pulse report found that more than 80% of Fortune 500 companies now deploy active AI agents built with low-code/no-code platforms — but 29% of employees have turned to unsanctioned agents for work tasks. The report identifies five core governance capabilities organizations need: a centralized agent registry, identity-driven access controls, real-time visualization dashboards, data-protection enforcement, and compliance monitoring. (Microsoft Security Blog)
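The first of the five capabilities, a centralized agent registry, is essentially an inventory keyed to an accountable owner and a workload identity. A minimal sketch under assumed field names (the report does not prescribe a schema):

```python
# Sketch of a centralized AI agent registry. Field names are illustrative
# assumptions, not a schema from the Microsoft report.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str            # accountable human or team
    identity: str         # service principal / workload identity it runs as
    scopes: list          # data and API permissions granted
    sanctioned: bool = True

class AgentRegistry:
    def __init__(self):
        self._agents: dict = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def unsanctioned(self) -> list:
        """Surface shadow agents discovered outside the approval process."""
        return [a for a in self._agents.values() if not a.sanctioned]

reg = AgentRegistry()
reg.register(AgentRecord("expense-bot", "finance-ops", "sp-expense", ["erp:read"]))
reg.register(AgentRecord("my-helper", "unknown", "user-token", ["mail:read"], sanctioned=False))
print([a.agent_id for a in reg.unsanctioned()])  # ['my-helper']
```

Tying every agent to a distinct identity (rather than a borrowed user token) is what makes the other four capabilities, access control, dashboards, data protection, and compliance monitoring, enforceable.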

White House developing AI security policy framework. The Office of the National Cyber Director is crafting an AI security policy framework to integrate security measures into US-led AI technology stacks, described by Cairncross as a “hand-in-glove” effort with the Office of Science and Technology Policy. The framework aligns with the emerging technologies pillar of the forthcoming national cyber strategy and aims to ensure security is built into AI systems foundationally rather than treated as a friction point for innovation. (Nextgov, Washington Technology)

CSO Online: Agentic AI security will get far worse before it improves. A February 3 analysis warns that enterprises currently house 8–10 million non-human identities (NHIs), with projections reaching 20–50 million by year-end. CISO visibility into NHIs stands at approximately 25% and is projected to drop to 12% by December 2026. The article emphasizes that decades of identity-governance neglect for machine identities compound the challenge of managing autonomous AI agents. (CSO Online)

UK ICO publishes agentic AI data-protection guidance. The UK Information Commissioner’s Office released a Tech Futures report on agentic AI in January, warning that organizations remain fully responsible for data-protection compliance even as agents operate with greater autonomy. Key risks include: cascading hallucinations across agent ecosystems, difficulty determining controller/processor roles in multi-vendor environments, and the need for human intervention mechanisms when agent decisions have “legal or similarly significant” impact on individuals. (ICO, Data Protection Report)

AI coding assistants raise data sovereignty concerns. Security researchers highlighted that certain AI-powered coding assistants have been transmitting developers’ source code to servers in China, often without adequate disclosure. Unlike traditional code completion tools, modern AI assistants require substantial context — entire files or project structures — to function, creating significant intellectual property exposure. Under Chinese law, companies face obligations to cooperate with government intelligence gathering, meaning source code processed through Chinese servers could be accessed by state actors regardless of provider privacy policies. (WebProNews)

Shadow AI overtakes external threats as top data-loss vector. Multiple sources this week highlighted that enterprise employees increasingly adopt personal or unvetted AI tools, with ChatGPT alone triggering 410 million data-loss-prevention violations. Industry analysts warn that AI governance has transitioned from a policy discussion to an immediate operational necessity. Organizations are advised to embed AI governance controls into existing data-protection programs, treating model access, prompt integrity, and data lineage as core exposure-management priorities. (Infosecurity Magazine)
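One concrete embedding of AI governance into an existing data-protection program is a pre-submission DLP gate on outbound prompts. A minimal sketch, assuming a handful of regex rules; production DLP relies on classifiers and context, not pattern lists:

```python
# Minimal sketch of a pre-submission DLP gate for GenAI prompts.
# Patterns are illustrative; real DLP engines go far beyond regexes.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key":     re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of DLP rules the prompt trips; empty list if clean."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

violations = scan_prompt("Summarise: customer SSN 123-45-6789, key AKIA1234567890ABCDEF")
print(violations)  # ['aws_key', 'ssn']
```

A gate like this blocks or redacts before data leaves the boundary, which is the difference between counting violations and preventing them.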

Singapore launches world’s first agentic AI governance framework. On January 22, Singapore’s Infocomm Media Development Authority (IMDA) released the Model AI Governance Framework for Agentic AI at the World Economic Forum 2026 — the world’s first governance framework specifically designed for autonomous AI agents. The framework provides guidance across four core dimensions: assessing and bounding risks upfront, making humans meaningfully accountable, implementing technical controls and processes, and enabling end-user responsibility. Compliance is voluntary, but organizations remain legally accountable for their agents’ behaviors. IMDA describes the framework as a “living document” and is seeking industry feedback and case studies. (IMDA, Baker McKenzie, Computer Weekly)

EU AI Act “high-risk” enforcement pending until August 2026. While much of the EU AI Act is already in force, the complex “high-risk AI” classification provisions will not become active until August 2, 2026. The Commission is expected to publish guidance on high-risk AI systems in February 2026, providing clarity on borderline use cases. The delayed EU Code of Practice and expected divergent member-state interpretations will create a compliance divide across Europe, presenting particular challenges for organizations operating in both EU and US jurisdictions. (SecurityWeek, Wilson Sonsini)

Board-Level Risk and CISO Strategy

CISA workforce reduced by one-third. Axios reported on January 27 that more than a third of CISA’s staff departed through layoffs, voluntary buyouts, and reassignments in 2025. The reduced capacity in the nation’s top cyber agency has prompted growing debate over what the federal cyber mission should look like, with direct implications for private-sector organizations that relied on CISA threat-hunting support and advisory services. (Axios)

Personal liability for CISOs enters new phase. SecurityWeek’s “Cyber Insights 2026” series warns that cybersecurity is entering an era where consequences increasingly fall on individuals — CISOs, affirming officials, compliance leaders, and board members — through personal fines, career-ending bans, and potential criminal charges for failures that were historically institutional. Organizations should review D&O insurance coverage and indemnification provisions for security leadership. (SecurityWeek)

The CISO-DPO hybrid model gains traction. GovInfoSecurity highlighted the emerging hybrid CISO-DPO role, driven by converging security and privacy functions, AI tools requiring both technical and legal oversight, and talent shortages pushing organizations to streamline leadership. The model offers unified risk management and faster decision-making but requires careful attention to potential conflicts of interest between security operations and data-protection advocacy. (GovInfoSecurity)

A cybersecurity turning point: defining CISO priorities. Enterprise Times framed 2026 as a “cybersecurity turning point” driven by two interconnected challenges reshaping CISO priorities: supply chain trust verification (organizations are being judged on continuous, verifiable supplier risk visibility beyond point-in-time assessments) and AI-enabled impersonation at scale (deepfake vendor calls, auto-generated procurement documents, and synthetic onboarding requests are becoming recurring challenges). CISOs must fundamentally rethink how they verify trust in extended ecosystems. (Enterprise Times)

Forrester predicts agentic AI will cause a public breach in 2026. Forrester’s 2026 cybersecurity predictions warn that agentic AI will be the root cause of a publicly disclosed breach this year, alongside predictions that governments will tightly regulate critical communication infrastructure and the EU will establish its own known exploited vulnerability database. Forrester forecasts worldwide information security spending to reach $200 billion in 2026, while Gartner projects $240 billion — a significant divergence that reflects different methodological scopes but confirms strong double-digit growth. (Forrester)

Aon: Cyber risk is now a board-level priority, not a CISO silo. Aon’s “Cyber 2026” report emphasizes that AI-driven threats and regulatory pressures have elevated cyber risk to a board agenda item. Key market signals: cyber insurance premiums are stabilizing in some regions but insurers demand stronger governance and controls; coverage quality matters more than price; and organizations should track and report resilience metrics as a business imperative. (Aon)

Global cyber premiums projected at $16.4B in 2026. Swiss Re estimates global cyber insurance premiums will reach approximately $16.4 billion in 2026, up from $15.6 billion in 2025, though S&P Global Ratings forecasts a 15–20% premium increase that could push the total higher. Market profitability remains stable with average loss ratios between 40% and 50%, but growth has slowed to roughly 5% year-over-year since 2022. Underinsured SMEs represent the key growth opportunity as insurers raise scrutiny of enterprise security controls. (Insurance Business, AJG, Heimdal)

Cyber insurance coverage migration underway — AI security riders emerge. ISO has filed absolute AI exclusions for general commercial liability policies effective January 2026, pushing AI-related exposures onto cyber and Tech E&O policies. In parallel, carriers are introducing “AI Security Riders” that condition coverage on documented adversarial red-teaming, model-level risk assessments, and alignment with recognized AI risk management frameworks. More than four in five survey respondents report insurers offered premium reductions for using AI in defense, but AI-related vulnerabilities are simultaneously creating new exclusions and claims disputes. Organizations without phishing-resistant MFA, XDR, and immutable backups may face difficulty obtaining coverage. (Resilience, Delinea, Help Net Security)

Cybersecurity workforce gap persists at 4.8 million. ISC2 data shows the global cybersecurity workforce gap remains at 4.8 million professionals, with AI skills cited as the top need by 41% of respondents, followed by cloud security at 36%. While budget cuts and layoffs have leveled off, nearly 500,000 cybersecurity positions remain unfilled in the US alone. The forthcoming national cyber strategy includes a dedicated workforce pillar modeled on venture capital and incubator approaches. (ISC2, Viva IT)

Cloud Security Posture

Wiz launches WIN Partner Index for cloud security ecosystems. Wiz released its inaugural Integration Network (WIN) Partner Index in January, analyzing real-world adoption of cloud security integrations. The report reinforces that cloud security no longer lives in a single tool — organizations require integrated visibility across their expanding attack surfaces, with attack-surface management (ASM) integrations seeing the highest adoption growth. (Wiz Blog)

Microsoft SDL evolves for AI-powered development. Microsoft published updated Security Development Lifecycle (SDL) practices on February 3, adapting its foundational secure-development framework for AI-powered application development. The guidance addresses runtime security for AI agents, model integrity, and threat modeling for agentic architectures — relevant for organizations building or consuming AI-integrated cloud services. (Microsoft Security Blog)

EU unconditionally approves Google’s $32B Wiz acquisition. On February 10, the European Commission unconditionally approved Google’s $32 billion acquisition of cloud security platform Wiz — the largest transaction in Alphabet’s history. EU antitrust chief Teresa Ribera stated the deal raises no competition concerns, noting Google ranks behind Amazon and Microsoft in cloud infrastructure market share. The deal had already received US clearance in November 2025. The acquisition absorbs a leading multi-cloud security platform — used by governments and critical infrastructure operators — into a single hyperscaler’s ecosystem, with European advocacy groups expressing concerns about potential “soft degradation” of Wiz’s cloud-agnostic monitoring independence. (SecurityWeek, European Commission, TechPolicy.Press)

CSPM market projected to reach $14.5B by 2031. The global Cloud Security Posture Management market is projected to grow from $6.29 billion (2025) to $14.48 billion by 2031 (14.91% CAGR), driven by enterprise adoption of hybrid and multi-cloud architectures. Modern CSPM solutions are evolving beyond simple detection, embedding large language models to automatically produce remediation code, translate natural-language queries into governance policies, and explain security findings in real time. The top five vendors (Wiz, Microsoft, Palo Alto Networks, CrowdStrike, Orca Security) captured approximately 71.8% of revenue in 2025. (GlobeNewsWire, IT Digest)

GovInfoSecurity: Cloud security predictions for CISOs in 2026. Cloud security priorities include pre-wiring “golden” landing zones with identity, encryption, logging, and egress guardrails; using policy-as-code for compliant-by-default configurations; and extending security posture management to cover AI workloads and agent infrastructure alongside traditional cloud assets. (GovInfoSecurity)
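The policy-as-code idea above can be sketched as plain functions evaluated over a declarative configuration before anything deploys. The config schema here is an illustrative assumption, not any vendor's format:

```python
# Sketch of policy-as-code for a "compliant-by-default" landing zone: each
# policy is a function over a declarative config, evaluated before deploy.

def require_encryption(cfg: dict) -> list:
    return [] if cfg.get("storage", {}).get("encrypted") else ["storage must be encrypted at rest"]

def deny_open_egress(cfg: dict) -> list:
    rules = cfg.get("network", {}).get("egress", [])
    return [f"open egress to {r['cidr']}" for r in rules if r.get("cidr") == "0.0.0.0/0"]

def require_logging(cfg: dict) -> list:
    return [] if cfg.get("logging", {}).get("enabled") else ["audit logging must be enabled"]

POLICIES = [require_encryption, deny_open_egress, require_logging]

def evaluate(cfg: dict) -> list:
    """Collect every violation; an empty list means the config may ship."""
    return [v for policy in POLICIES for v in policy(cfg)]

cfg = {
    "storage": {"encrypted": True},
    "network": {"egress": [{"cidr": "0.0.0.0/0"}]},
    "logging": {"enabled": False},
}
print(evaluate(cfg))  # ['open egress to 0.0.0.0/0', 'audit logging must be enabled']
```

Real deployments typically express the same idea in a dedicated policy engine (e.g. OPA/Rego or cloud-native guardrails) wired into the CI/CD pipeline.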

Identity, Access Management and Zero Trust

Identity becomes the 2026 security battleground. Multiple industry analyses converge on a single message: identity has replaced the network perimeter as the primary attack vector. Microsoft’s security blog outlines four priorities for 2026 — using AI to automate identity protection at scale, securing AI agents themselves, extending zero trust to non-human identities, and strengthening identity baselines. The consensus is that breaches are no longer about “getting in” through firewalls — they are about “logging in.” (Microsoft Security Blog)

Non-human identity explosion demands new governance. Identity security planning is now shaped by three forces: explosive growth in non-human identities (machine credentials, API keys, service accounts, AI agent tokens), uneven adoption of AI in identity operations, and sustained momentum toward vendor consolidation. Organizations that extend identity security, least privilege, and lifecycle management to every NHI — without slowing development — will have significant competitive advantage. (Help Net Security, The Hacker News)

Omada: State of Identity Governance 2026 reveals executive visibility crisis. Released February 10, Omada’s annual report — based on research with 577 identity, access management, and cybersecurity leaders — found that while organizations report high confidence in their identity security programs, executive reporting remains focused on operational metrics (provisioning speed, audit readiness, incident counts) rather than leading indicators of identity risk. Non-human identities now vastly outnumber human accounts and AI agents are being deployed at scale, yet executive teams lack visibility into the metrics that matter most. The report characterizes this as “not just a reporting gap — it’s a governance crisis with real breach, compliance, and reputational consequences.” (PR Newswire, AI Journal)


Okta launches Identity Threat Protection in early access. Okta released its Identity Threat Protection (ITP) capability in early access during the first week of February, providing a unified suite that automatically detects and blocks identity-based threats before they compromise users or organizations. Okta notes that over 80% of all data breaches are linked to attacks on identity, underscoring the shift to identity as the primary attack vector. The release also addresses “assurance decay” — the growing risk during active sessions where valid tokens become valuable targets for attackers after initial authentication succeeds. (Okta)

Adaptive identity emerges beyond zero trust. ISACA analysis argues that zero trust alone is insufficient, advocating for a shift from fixed rules toward adaptive, risk-aware identity management that changes in real time. This “adaptive identity” model responds dynamically to user behavior, contextual signals, and risk scoring rather than applying static policies. Gartner’s February 5 trends report reinforces this direction, noting that IAM must fundamentally adapt to manage AI agents — not just human users — requiring new approaches to identity registration, credential automation, and policy-driven authorization for machine actors. (ISACA, Gartner)
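The adaptive model described above amounts to re-scoring every request from live signals and mapping the score to allow, step-up, or deny. A minimal sketch in which the signals, weights, and thresholds are all illustrative assumptions:

```python
# Sketch of an adaptive-identity decision: access is re-scored per request
# from contextual signals rather than a static allow/deny rule.
# Signals, weights, and thresholds are illustrative assumptions.

def risk_score(signals: dict) -> int:
    score = 0
    if signals.get("new_device"):        score += 30
    if signals.get("impossible_travel"): score += 40
    if signals.get("off_hours"):         score += 10
    if signals.get("actor") == "agent" and not signals.get("delegation_token"):
        score += 40   # machine actor without an auditable delegation chain
    return score

def decide(signals: dict) -> str:
    """Map the live score to an action instead of applying a fixed policy."""
    score = risk_score(signals)
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up"   # e.g. phishing-resistant MFA or human approval
    return "allow"

print(decide({"off_hours": True}))                            # allow
print(decide({"new_device": True, "off_hours": True}))        # step-up
print(decide({"actor": "agent", "impossible_travel": True}))  # deny
```

Note the agent-specific signal: treating a machine actor without a delegation chain as high risk is one way IAM "adapts to AI agents" in the Gartner sense.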

SecurityWeek: Zero trust moves from architecture to operational enforcement. The “Cyber Insights 2026” zero trust analysis highlights that 2026 marks the transition from zero-trust planning and architecture to operational enforcement. Organizations are expected to move beyond strategic frameworks into measurable, continuously verified access controls across hybrid environments. True zero trust in 2026 means cryptographically binding identity to every request across its entire lifecycle, not just at entry points. However, interoperability challenges across identity and security platforms continue to limit unified visibility, with comprehensive zero trust adoption realistically projected for 2027–2029. (SecurityWeek, Aembit)

Vendor and Supply Chain Risk

454,000+ malicious open-source packages discovered in 2025. Sonatype’s 2026 State of the Software Supply Chain report found 454,648 new malicious packages across major registries (npm, PyPI, Maven Central, NuGet), up from previous years. Threats have evolved from “spam and stunts” into sustained, industrialized campaigns — many state-sponsored. Over half (56%) were classified as repository abuse, but analysts warn that malicious packages increasingly serve as the first step in larger supply chain intrusions rather than standalone attacks. (Infosecurity Magazine, Sonatype)
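A baseline control against tampered or typosquatted packages is hash-pinning: every approved artifact's digest is recorded in a lockfile and verified before install. A minimal sketch, with the lockfile format as an illustrative assumption (package managers such as pip support this natively via `--require-hashes`):

```python
# Sketch: verify a downloaded artifact against a hash-pinned lockfile before
# install. The lockfile format here is an illustrative assumption.
import hashlib

LOCKFILE = {
    # artifact name -> expected SHA-256 of the exact approved bytes
    "leftpad-1.0.0.tar.gz": hashlib.sha256(b"approved contents").hexdigest(),
}

def verify(artifact_name: str, payload: bytes) -> bool:
    """Refuse anything not pinned, and anything whose digest has drifted."""
    expected = LOCKFILE.get(artifact_name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

print(verify("leftpad-1.0.0.tar.gz", b"approved contents"))   # True
print(verify("leftpad-1.0.0.tar.gz", b"trojaned contents"))   # False
print(verify("left-pad-1.0.0.tar.gz", b"approved contents"))  # False: not pinned
```

The third case is the typosquatting check: a near-miss name is simply absent from the lockfile, so it never installs.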

Geopolitical vendor risk reshapes procurement. Gartner data shows 50% of non-US CIOs are changing vendor engagement strategies based on regional geopolitical factors. The semiconductor supply chain centered on Taiwan, rare-earth material dependencies, and China’s push for chip self-sufficiency represent active fault lines. Forward-looking organizations are embedding geopolitical intelligence into vendor risk assessments. (Infosecurity Magazine)

Aon partners with SecurityScorecard on vendor risk. Aon announced on February 4 an expansion of its cyber risk capabilities through a partnership with SecurityScorecard, enhancing continuous monitoring and third-party risk assessment for insurance underwriting and enterprise risk programs. (Aon/PR Newswire)

AI-enabled impersonation threatens vendor trust. Enterprise Times warned that deepfake vendor calls, auto-generated procurement documents, and synthetic onboarding requests have become recurring challenges for global enterprises. AI has made impersonation cheaper, faster, and harder to detect — attackers do not need to penetrate networks if they can convincingly pose as a trusted partner. Organizations must move beyond point-in-time assessments toward continuous, verifiable supplier trust verification. (Enterprise Times)

OMB shift fragments federal software supply chain requirements. The rescission of standardized attestation requirements under M-26-05 (see Regulatory section) creates fragmented compliance expectations across federal agencies. While the prior approach imposed uniform requirements, agencies must now independently determine assurance mechanisms — potentially creating a patchwork of SBOM requests, attestation forms, or alternative evidence requirements across different contracts. Industry groups warn this could paradoxically increase compliance costs for vendors selling to multiple agencies, and critics contend the memo “walks back important Secure-by-Design concepts.” Organizations should monitor individual agency guidance and maintain attestation readiness regardless. (SC Media, Morrison Foerster)

AI supply chain visibility gap widens. A VentureBeat analysis found that 62% of security practitioners have no way to determine where LLMs are in use across their organization, highlighting a critical need for AI-specific SBOMs that track model traceability, training data provenance, integration points, and departmental usage patterns. The EU Cyber Resilience Act, which applies fully from late 2026 onward, will mandate SBOM provision and vulnerability handling for products with digital elements, further raising the bar for supply chain transparency. (VentureBeat, Dark Reading)
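The fields the analysis calls out, model traceability, training-data provenance, integration points, and departmental usage, can be captured in a simple machine-readable record. A minimal sketch with an illustrative schema (formal profiles such as CycloneDX's ML-BOM exist for production use):

```python
# Sketch of a minimal AI-specific SBOM entry. The schema is an illustrative
# assumption, not a formal standard.
import json

ai_sbom = {
    "sbom_type": "ai-model",
    "model": {
        "name": "support-summarizer",
        "base_model": "vendor-llm-7b",   # upstream lineage / traceability
        "version": "2026.01",
        "provider": "example-vendor",
    },
    "training_data": {
        "sources": ["internal-tickets-2024", "public-docs"],
        "contains_personal_data": True,  # drives GDPR / DPIA obligations
    },
    "integration_points": ["helpdesk-api", "slack-bot"],
    "used_by_departments": ["support", "sales"],
}

print(json.dumps(ai_sbom, indent=2))
```

Even this thin record answers the 62% question above: where LLMs are in use, through what integrations, and on whose data.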

Industry Surveys and Research

Check Point Cyber Security Report 2026 (published January 28): Organizations experienced an average of 1,968 cyber attacks per week in 2025 — a 70% increase since 2023. A review of approximately 10,000 Model Context Protocol (MCP) servers found security vulnerabilities in 40% of them, highlighting emerging AI infrastructure risk. (Check Point Research)

ISACA: Six cybersecurity trends shaping 2026. AI reshaping offense and defense, cloud-native continuous monitoring, public visibility of data privacy, expanding governance expectations, intelligent tools addressing workforce constraints, and trust emerging as a core competitive differentiator and measure of security maturity. (ISACA)

AV-Comparatives Security Survey 2026 (published February 9): Survey of 1,328 participants across 87 countries found continued reliance on established vendors. Noteworthy: Linux usage has reached macOS-comparable levels among security-aware users, and respondents named Russia, China, the US, and North Korea as the most feared sources of state-sponsored cyber attacks. (AV-Comparatives)

WEF Global Cybersecurity Outlook 2026. The World Economic Forum’s annual report identified AI as the most significant driver of change in cybersecurity — cited by 94% of respondents — while 87% flagged AI-related vulnerabilities as the fastest-growing risk. Notably, the percentage of organizations assessing AI tool security nearly doubled from 37% (2025) to 64% (2026). Geopolitics remains the top factor influencing cyber risk strategies, with 64% of organizations accounting for geopolitically motivated cyberattacks. The report reveals a growing CEO-CISO priorities gap: CEOs now rank cyber-enabled fraud as their top concern, while CISOs remain focused on ransomware and supply chain resilience. Confidence in national cyber preparedness declined, with 31% reporting low confidence (up from 26% last year). (WEF, Industrial Cyber, Help Net Security)

Group-IB: “Fifth Wave” of cybercrime. Deepfake-as-a-service now available for $10/month; dark-web discussions about AI criminal tools increased from ~50,000 annual messages (2020–2022) to ~300,000 annually since 2023. At least three vendors provide unrestricted dark LLMs with 1,000+ customers at $30–$200/month subscriptions. (Infosecurity Magazine)

Gravitee: State of AI Agent Security 2026. The report found that 88% of organizations confirmed or suspected security incidents related to agentic AI deployments, while only 22% of teams treat agents as independent identities (most still rely on shared API keys). Non-human and agentic identities are projected to exceed 45 billion by end of 2026 — more than twelve times the global human workforce — yet only 10% of organizations report having a strategy for managing these autonomous systems. (Gravitee)

Secureframe Cybersecurity Benchmark 2026. Based on data from over 250 companies, the benchmark report found that three quarters of organizations still allocate 15% or less of their annual budget to security and compliance, while more than half of surveyed organizations reported having one or fewer full-time security staff. Global cybersecurity spending is predicted to exceed $520 billion annually in 2026, up from $260 billion in 2021. (Secureframe, Cybersecurity Ventures)

Gartner Top Cybersecurity Trends 2026 (published February 5): Six trends identified — agentic AI oversight, regulatory volatility, post-quantum action, IAM adaptation for AI agents, AI-driven SOC destabilization, and GenAI breaking traditional security awareness. Survey data: 57% of employees use personal GenAI for work, 33% input sensitive data into unapproved tools. Global security spending projected at $240 billion for 2026. (Gartner)

Forrester Predictions 2026: Cybersecurity and Risk. Predicts agentic AI will cause a public breach, EU will launch its own known exploited vulnerability database, and governments will tightly regulate critical communication infrastructure. Global infosec spending forecast at $200 billion for 2026. Quantum security spending projected to exceed 5% of overall IT security budgets. (Forrester)

ISC2 Cybersecurity Workforce Study 2025. The global workforce gap stands at 4.8 million; AI is the top skills need (41%), followed by cloud security (36%). Job satisfaction improved slightly to 68%, but economic pressures persist with hiring freezes leveling off rather than accelerating. (ISC2)

Strategic Recommendations

  1. Assess CISA 2015 exposure immediately. If your organization relied on the Cybersecurity Information Sharing Act’s liability protections for threat-intelligence sharing, engage legal counsel now to evaluate your exposure and consider alternative sharing frameworks (ISACs, bilateral agreements) while Congress determines next steps.

  2. Establish an AI agent registry and governance framework. With 80%+ of Fortune 500 deploying active AI agents and shadow-AI adoption at 29% of employees, implement a centralized agent inventory with identity-driven access controls, data-protection policies, and compliance monitoring before the visibility gap widens further. Audit AI coding assistants for data sovereignty risks, particularly tools that transmit source code to foreign servers.

  3. Accelerate non-human identity governance. The projected explosion from 8–10 million to 20–50 million NHIs per enterprise by year-end demands immediate action. Focus on containment for legacy systems and rigorous lifecycle management (creation, access assignment, rotation, decommissioning) for all new machine and agent identities.
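The lifecycle stages named in recommendation 3 (creation, access assignment, rotation, decommissioning) can be sketched as a minimal in-memory registry. This is an illustrative sketch only: the class and identity names are assumptions, and a real deployment would back this with a secrets vault and an IAM platform rather than Python objects:

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    """One non-human identity (service, agent, or workload) and its credential."""
    name: str
    scopes: list          # access assignment: least-privilege scope list
    secret: str           # credential material (in practice, a vault reference)
    issued_at: datetime
    active: bool = True

class NHIRegistry:
    """Minimal sketch of the NHI lifecycle: create, rotate, decommission."""

    def __init__(self, max_age: timedelta = timedelta(days=30)):
        self.max_age = max_age
        self.identities = {}

    def create(self, name: str, scopes: list) -> MachineIdentity:
        ident = MachineIdentity(name, scopes, secrets.token_hex(16),
                                datetime.now(timezone.utc))
        self.identities[name] = ident
        return ident

    def rotate(self, name: str) -> None:
        ident = self.identities[name]
        ident.secret = secrets.token_hex(16)
        ident.issued_at = datetime.now(timezone.utc)

    def due_for_rotation(self) -> list:
        now = datetime.now(timezone.utc)
        return [n for n, i in self.identities.items()
                if i.active and now - i.issued_at > self.max_age]

    def decommission(self, name: str) -> None:
        self.identities[name].active = False  # revoke, but keep record for audit

registry = NHIRegistry()
registry.create("ci-deploy-bot", scopes=["artifact:push"])
registry.create("support-agent", scopes=["ticket:read", "ticket:reply"])
registry.decommission("ci-deploy-bot")
print([n for n, i in registry.identities.items() if i.active])  # -> ['support-agent']
```

The key design point, per the Gravitee findings above, is that each agent gets its own scoped credential with an expiry, so rotation and revocation never require touching a shared API key used by every workload.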

  4. Prepare for overlapping regulatory deadlines. Key dates: SEC Regulation S-P for large firms (February 2, 2026 — now in effect), DORA Register of Information submissions (through March 21, 2026), NYDFS Annual Certification (April 15, 2026), CIRCIA final rule (expected May 2026), EU AI Act high-risk provisions (August 2, 2026). Map your compliance obligations across jurisdictions now to avoid resource contention later.

  5. Review cyber insurance coverage for AI exclusions. ISO’s absolute AI exclusions from general liability policies (effective January 2026) mean AI-related exposures are migrating to cyber and Tech E&O policies. Verify that your coverage reflects your actual AI deployment footprint and that you meet emerging underwriting requirements (phishing-resistant MFA, XDR, immutable backups). Prepare documentation for emerging “AI Security Rider” requirements, including adversarial red-teaming evidence and model-level risk assessments.

  6. Benchmark against Singapore’s agentic AI governance framework. Even if not operating in Singapore, use the IMDA Model AI Governance Framework for Agentic AI as a best-practice reference for structuring internal AI agent governance. Its four-dimension approach (risk bounding, human accountability, technical controls, end-user responsibility) provides an actionable template while the EU AI Act high-risk provisions remain pending until August 2026. Track the EU’s proposed revised Cybersecurity Act (CSA 2) for forthcoming ICT supply chain security obligations — particularly the high-risk supplier exclusion mechanisms.

  7. Begin post-quantum cryptography planning. Gartner’s February 5 report explicitly calls for organizations to move post-quantum cryptography from awareness into action plans. With asymmetric cryptography projected unsafe by 2030, and “harvest now, decrypt later” attacks already underway, organizations should begin inventorying cryptographic dependencies, identifying high-value data assets vulnerable to future decryption, and evaluating NIST’s post-quantum cryptography standards for transition readiness. Forrester projects quantum security spending will exceed 5% of IT security budgets in 2026.
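The inventory step in recommendation 7 can start as something this simple: a classification pass over known cryptographic dependencies, flagging quantum-vulnerable public-key algorithms and prioritizing by data retention (longer retention means greater “harvest now, decrypt later” exposure). The asset names and retention periods below are illustrative assumptions:

```python
# Public-key algorithms broken by Shor's algorithm once a sufficiently large
# quantum computer exists, vs. NIST's post-quantum standards (FIPS 203/204/205).
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DSA"}
PQC_STANDARDS = {"ML-KEM", "ML-DSA", "SLH-DSA"}

# Illustrative inventory: (asset, algorithm, data retention in years).
inventory = [
    ("vpn-gateway", "RSA-2048", 1),
    ("customer-db-tls", "ECDH-P256", 10),  # long-lived data: HNDL exposure
    ("code-signing", "ML-DSA", 5),         # already on a PQC standard
]

def migration_priorities(items):
    """Rank quantum-vulnerable assets, longest data retention first, since
    long-lived data is most exposed to harvest-now-decrypt-later attacks."""
    at_risk = [(asset, algo, yrs) for asset, algo, yrs in items
               if algo in QUANTUM_VULNERABLE]
    return sorted(at_risk, key=lambda t: t[2], reverse=True)

for asset, algo, yrs in migration_priorities(inventory):
    print(f"{asset}: {algo} (data retained {yrs}y) -> plan PQC transition")
```

Real inventories would be fed from certificate stores, TLS scans, and code audits rather than a hand-written list, but the triage logic (vulnerable algorithm × data longevity) is the part Gartner is asking organizations to operationalize now.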

  8. Monitor OMB M-26-05 agency-level implementation. The rescission of standardized software attestation requirements creates a transitional period of uncertainty. Federal contractors should maintain attestation and SBOM capabilities while engaging with individual agency procurement offices to understand evolving requirements. Non-government organizations should view this as a signal to adopt risk-based approaches to software assurance rather than checkbox compliance — the direction of travel regardless of regulatory specifics.

Sources Referenced

RSS/Primary Sources

  • CSO Online — Agentic AI security, CISO priorities 2026
  • SecurityWeek — Cyber Insights 2026 series (regulations, zero trust, CISO outlook)
  • Infosecurity Magazine — Geopolitics/shadow AI, malicious packages, Group-IB fifth wave, CISA insider threat
  • GovInfoSecurity — CISO-DPO hybrid model, cloud security predictions
  • IAPP — New York AI transparency bill, global privacy coverage
  • Wiz Blog — WIN Partner Index, AI security ecosystem
  • Okta Blog — Identity Threat Protection, identity security fabric
  • Schneier on Security — AI coding assistants data sovereignty, AI governance context
  • MIT Technology Review — US AI regulation landscape, state AI safety laws

Web Search Discoveries