Investors crowd into AI security as enterprise risk heats up

Capital is rushing into AI security as enterprises harden machine learning pipelines and generative AI apps. From model integrity to data lineage and governance, the category is maturing fast under regulatory pressure and growing threat activity.

Published: November 3, 2025 By Sarah Chen Category: AI Security
Investors crowd into AI security as enterprise risk heats up

The new AI security moment

In the AI Security sector, As generative AI moves from pilots to production across finance, healthcare, and the public sector, a distinct investment theme has emerged: AI security. Investors who sat out broader cybersecurity in 2023’s downcycle are returning with targeted bets on tooling that protects models, data, and AI-enabled applications. Deal flow remains selective, but the bar for product-market fit is clearer: buyers want controls that slot into existing security stacks without slowing model deployment.

Market sizing is catching up to that enterprise urgency. The AI in cybersecurity market is projected to grow from $22.4 billion in 2023 to $60.6 billion by 2028, according to industry analysts, reflecting a 21.9% CAGR according to MarketsandMarkets. While market-taxonomy debates continue—AI for security versus security for AI—the common driver is the same: automated systems increase both attack surface and defensive leverage, expanding total addressable spend.

On the buy side, budgets are being reshaped by governance frameworks designed to make AI safer to scale. Gartner has formalized AI Trust, Risk and Security Management (AI TRiSM) as a board-level priority, encouraging controls for model lineage, data protection, and runtime monitoring that map to enterprise risk appetites in Gartner’s AI TRiSM guidance. That language is increasingly reflected in RFPs, bringing clarity to where startups can wedge in—and how incumbents will respond.

Where the checks are going

The most active sub-segments span three layers. First, model integrity and adversarial robustness—tools to test, harden, and continuously monitor models against prompt injection, data poisoning, and model theft. Second, AI supply chain security—bill-of-materials (AI-BoM) and provenance systems that track datasets, fine-tuning artifacts, and third-party components across the ML lifecycle. Third, application-layer controls—runtime guardrails for LLM apps, policy enforcement, and red teaming frameworks.

Company names now recur in diligence memos: teams focused on LLM red teaming and guardrails; platforms for AI governance and policy management; and specialists in model behavior risk analytics. The demand signal is clearest among highly regulated buyers who must prove reliable, repeatable model behavior under audit. Security leaders report that advanced attackers are already weaponizing AI to scale reconnaissance and social engineering, a trend that is accelerating procurement cycles as highlighted in the World Economic Forum’s Global Cybersecurity Outlook 2024.

...

Read the full article at AI BUSINESS 2.0 NEWS