Executive Summary
- AI security risk sits in data supply chains and model lifecycle governance, not just tooling, as emphasized by the NIST AI Risk Management Framework.
- Attack surfaces include prompt injection, data poisoning, model theft, and insecure integrations, catalogued in the OWASP Top 10 for LLM Applications and mapped to adversary tactics in MITRE ATLAS.
- Boards should demand assurance evidence such as pre-deployment tests and traceable provenance, aligning with OpenAI’s Preparedness Framework and Anthropic’s Responsible Scaling Policy.
- Regulatory pressure from the EU AI Act and U.S. policy guidance via the Executive Order on AI elevate governance to a strategic imperative.
- Capital should shift toward data quality, provenance, and model assurance, supported by industry frameworks from NIST and ENISA.
Leaders Misframe AI Security as a Tool Problem, Not a System Risk
Most leadership teams still approach AI security as a tooling purchase—red-teaming and input filtering—rather than a system-level risk discipline covering data, models, and the integrations that bind them. The NIST AI Risk Management Framework is explicit that AI risk is sociotechnical, spanning people, processes, and technology. That means threat modeling must extend beyond model prompts to the entire ML pipeline, third-party connectors, and identity boundaries that are often overlooked.
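To make that system-level scope concrete, the sketch below enumerates pipeline stages alongside the threat classes and compensating controls a threat model should cover. The stage names, threat labels, and controls are illustrative assumptions that borrow vocabulary from the OWASP Top 10 for LLM Applications and MITRE ATLAS; they are not an official mapping from either source.

```python
# Minimal threat-model coverage sketch: pipeline stages mapped to threat
# classes and compensating controls. All names below are illustrative
# assumptions, not an official OWASP or ATLAS taxonomy.
from dataclasses import dataclass, field


@dataclass
class PipelineStage:
    name: str
    threats: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)


ML_PIPELINE = [
    PipelineStage(
        name="training data ingestion",
        threats=["data poisoning", "supply-chain tampering"],
        controls=["source allow-lists", "dataset provenance records"],
    ),
    PipelineStage(
        name="model build and registry",
        threats=["model theft", "unsigned artifacts"],
        controls=["artifact signing", "access-controlled registry"],
    ),
    PipelineStage(
        name="inference and prompts",
        threats=["prompt injection", "sensitive output disclosure"],
        controls=["input/output filtering", "human review for high-risk actions"],
    ),
    PipelineStage(
        name="plugins and third-party connectors",
        threats=["insecure integrations", "excessive agency"],
        controls=["least-privilege scopes", "per-connector identity boundaries"],
    ),
]


def coverage_gaps(pipeline: list[PipelineStage]) -> list[str]:
    """Return stages that name threats but record no compensating control."""
    return [s.name for s in pipeline if s.threats and not s.controls]


if __name__ == "__main__":
    for stage in ML_PIPELINE:
        print(f"{stage.name}: threats={stage.threats} controls={stage.controls}")
    print("uncovered stages:", coverage_gaps(ML_PIPELINE))
```

A simple review like `coverage_gaps` flags stages where risks are named but no control is recorded: precisely the gaps that prompt-only tooling leaves open.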
The vulnerability classes are well established: prompt injection, data poisoning, model theft, insecure plugin integrations, and output misuse. The OWASP Top 10 for LLM Applications catalogues these classes, and MITRE ATLAS maps them to adversary tactics and techniques observed against ML systems. According to Satya Nadella, CEO of Microsoft, "Safety and security are foundational to how we build and deploy AI" (company blog...