The AI governance market is converging on runtime enforcement. What nobody agrees on is who should build it or where it should live.
Over the past month, three distinct categories of vendor have started moving toward the same capability — the ability to enforce governance rules at the moment an AI system acts, not before deployment and not after the fact. They are arriving from different directions, carrying different assumptions, and leaving different gaps behind them.
Understanding where each group is strong and where each group is blind is the difference between choosing a governance stack and inheriting one.
Direction 1: Security vendors moving up
The fastest movement toward runtime enforcement is coming from AI security and infrastructure vendors. Lasso Security, AccuKnox, and VAST have all shipped or announced capabilities sitting directly in the request/response path between agents and the systems they interact with.
VAST's PolicyEngine is the clearest signal. Hyperframe Research describes it as an inline enforcement layer mediating how agents interact with models, memory, and data, with all activity written to tamper-proof audit logs. This is not monitoring. It is interception — the ability to evaluate and block an action before execution.
Lasso Security makes a similar architectural claim: inline enforcement in the request/response path, with explicit ties to AI governance and compliance reporting. AccuKnox goes further, combining Kubernetes-native enforcement (eBPF, KubeArmor policies) with continuous compliance frameworks and evidence suitable for governance, risk, and compliance (GRC) programs.
What they bring: These vendors already sit where enforcement happens. They can block, modify, or gate actions before execution. They produce audit-grade logs. And they are fast — security tooling is built for latency-sensitive environments.
What they lack: Business context. The policies these platforms enforce are low-level: traffic rules, authentication checks, infrastructure constraints, prompt firewalls. They can block a request, but they cannot evaluate whether a business policy — "clinical AI recommendations require practitioner review before action" — has been satisfied. The enforcement engine exists. The business semantics do not.
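To make the distinction concrete, here is a minimal sketch of the kind of business-semantic check a traffic rule or prompt firewall cannot express. All names (`AgentAction`, `satisfies_review_policy`) are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch: a business-policy check above the level of
# traffic rules or prompt firewalls. All names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentAction:
    kind: str                          # e.g. "clinical_recommendation"
    reviewed_by: Optional[str] = None  # practitioner who signed off, if any

def satisfies_review_policy(action: AgentAction) -> bool:
    """Business policy: clinical AI recommendations require
    practitioner review before action."""
    if action.kind != "clinical_recommendation":
        return True  # policy does not apply to other action kinds
    return action.reviewed_by is not None
```

A security gateway can block a malformed request before it reaches a model; only a check like this one can decide whether the review obligation itself has been satisfied.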
Direction 2: AI governance vendors moving down
Dedicated AI governance platforms are making the opposite move. Holistic AI and Credo AI, both built on structured registries of models, datasets, risks, and controls, are pushing their messaging — and in some cases their capabilities — toward runtime enforcement.
Holistic AI's enforce page now markets automatic enforcement across models, agents, APIs, and workflows, with real-time policy adherence tracking, violation detection, and exportable evidence. The language is explicit: AI policies "aren't enforced at runtime" in most organizations, and Holistic AI positions itself as the fix.
Credo AI markets runtime action enforcement on its site and provides structured governance registries where models, datasets, and controls are first-class objects with fields and mappings, not prose attachments.
What they bring: Business context. These platforms understand governance as a domain — risk taxonomies, regulatory mappings, control frameworks, model registries. They store the authoritative policy canon in structured form.
What they lack: Deep runtime integration. Enforcement depends on API/SDK integration points with downstream systems. The platform can define the rule and detect a violation, but whether it can block an action at execution time depends on how tightly it connects to the infrastructure where the action occurs. The registry is strong. The enforcement path is integration-dependent.
Direction 3: Platform incumbents embedding sideways
The third movement comes from large workflow and automation platforms adding governance capabilities directly into their agent runtimes.
ServiceNow's Autonomous Workforce announcement is the clearest example. New AI specialist agents operate inside the Now platform under configurable governance policies — escalation thresholds, behavioral rules, and hand-off triggers for when agents recognize the limits of their competence. Governance is not a separate product bolted on. It is a property of the agent runtime.
Microsoft is making a parallel move, embedding Entra ID role-based access control directly into Copilot Studio agents. Identity and authorization are becoming the AI control plane — not through a governance product, but through the platform's native security model.
What they bring: Distribution and integration depth. When governance is a platform feature rather than a third-party overlay, adoption friction drops to near zero. ServiceNow does not need to convince customers to integrate a separate governance tool. The governance is already in the runtime their agents use.
What they lack: Governance depth. Configurable escalation thresholds and hand-off rules are governance, but they are not policy-as-code. ServiceNow's public materials describe behavioral policies without specifying formal allow/deny decision logic or deterministic rule evaluation. The governance is real but coarse — closer to configurable guardrails than a full policy engine. And platform-embedded governance only covers what runs on the platform. Anything outside the ecosystem is ungoverned.
The gap in the middle
Three directions. One destination. And a gap none of them fill.
Security vendors enforce at runtime but lack business semantics. AI governance vendors have the semantics but lack deep runtime enforcement. Platform incumbents embed governance natively but only within their own ecosystems and without policy-as-code precision.
The structural gap is the bridge between them: taking structured business constraints from a governance registry — the kind of high-level obligations CISOs and compliance leaders recognize — and transpiling them into deterministic, machine-evaluable policy for execution-time enforcement.
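What that transpilation step might look like, reduced to its simplest form: a structured registry entry becomes a deterministic, callable rule. The registry schema and field names below are assumptions for illustration, not any shipping product's format:

```python
# Hypothetical sketch of the "bridge": a governance-registry
# constraint transpiled into a deterministic allow/deny rule.
# The schema and field names are illustrative assumptions.
REGISTRY_CONSTRAINT = {
    "id": "POL-017",
    "obligation": "clinical recommendations require practitioner review",
    "applies_to": {"action_kind": "clinical_recommendation"},
    "requires": {"field": "reviewed_by", "not_null": True},
}

def transpile(constraint: dict):
    """Turn a structured registry entry into a machine-evaluable rule."""
    kind = constraint["applies_to"]["action_kind"]
    field = constraint["requires"]["field"]

    def rule(action: dict) -> str:
        if action.get("kind") != kind:
            return "ALLOW"  # constraint is out of scope for this action
        return "ALLOW" if action.get(field) is not None else "DENY"

    return rule
```

The point of the sketch is the direction of travel: the authoritative policy lives in the registry as structured data, and enforcement consumes a compiled, deterministic form of it rather than prose.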
Nobody ships this bridge today. The closest incumbent is OneTrust, whose Data Use Governance product converts policy documents into machine-readable labels pushed into Snowflake and Databricks for enforcement. But OneTrust attacks from the data platform side, not from business semantics, and its enforcement scope is data access controls — not the full range of agent actions and AI-driven decisions enterprises need to govern.
What to watch
The convergence is accelerating. Terms like "runtime decision governance" and "ALLOW/PAUSE/DENY" are appearing independently across security vendors, thought leadership, and academic research — without coordination. When multiple parties coin the same language simultaneously, the market is naming a real problem.
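The three-way verdict behind that shared vocabulary can be sketched in a few lines. The rules and threshold below are illustrative assumptions; only the ALLOW/PAUSE/DENY shape comes from the converging terminology itself:

```python
# Hypothetical sketch of a three-way runtime verdict. Inputs and
# the risk threshold are illustrative assumptions.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"  # execute immediately
    PAUSE = "pause"  # hold for human review
    DENY = "deny"    # block outright

def decide(action: dict) -> Verdict:
    if action.get("violates_hard_policy"):
        return Verdict.DENY
    if action.get("risk_score", 0.0) >= 0.7:  # illustrative threshold
        return Verdict.PAUSE
    return Verdict.ALLOW
```

PAUSE is the interesting state: it is what separates runtime decision governance from a binary firewall, because it routes ambiguous actions to a human instead of forcing an allow-or-block call.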
Three things to watch in the coming weeks:
Security vendors adding business policy layers. If VAST, Lasso, or AccuKnox start ingesting governance registries or regulatory frameworks — not just infrastructure rules — the gap narrows from below.
AI governance vendors shipping native enforcement SDKs. If Holistic AI or Credo AI move from "enforcement via integration" to deterministic inline policy evaluation, the gap narrows from above.
Platform incumbents exposing policy-as-code interfaces. If ServiceNow or Microsoft publish formal rule languages for their agent governance — not just configuration options — the gap narrows from the side.
Until one of these happens, the three roads lead toward the same destination but do not yet meet.
Today, we published a full Market Map: 21 governance platforms analyzed by storage model, enforcement capability, and AI pivot. Watch for Market Map updates as the landscape shifts.
Sources
VAST PolicyEngine analysis — Hyperframe Research
AI Runtime Security — Lasso Security
Runtime AI Governance Security Platforms — AccuKnox
Govern Every AI Model and Agent Automatically — Holistic AI
Credo AI Responsible AI Governance Platform — Credo AI
ServiceNow AI Governance for Autonomous Workforce — TechTarget
Authorization and Identity Governance Inside AI Agents — Microsoft Tech Community
OneTrust Data Use Governance — OneTrust
How to Govern AI Decisions at Runtime — ThinkingOperatingSystem.com