Market Map: AI Governance Vendors

The Market Map: Who Enforces What (and Who Enforces Nothing)

W10 | March 2026

This is The Market Map — a weekly landscape analysis for AI governance practitioners. Each Friday, we map the territory so you can navigate it.

This week: we analyzed 21 governance platforms across Enterprise Architecture (EA), Governance, Risk, and Compliance (GRC), policy-as-code, API governance, and dedicated AI governance. We asked three questions of each:

  1. How do they store policy? Structured data, prose documents, or code?

  2. Can they enforce it at runtime? Can the platform block or gate an action before execution — or is enforcement manual, integration-dependent, or absent?

  3. Are they pivoting toward AI governance? And if so, from which direction?

The findings confirm what Thursday's analysis mapped from the vendor side: the market is converging on runtime enforcement, but the structural gap between governance registries and execution-time policy remains wide open.

The short version

Of 21 platforms analyzed:

  • 7 have no runtime enforcement. Policy is prose. Compliance is attestation. Enforcement is a human reading a document and deciding whether to act.

  • 5 have partial or indirect enforcement. Monitoring, thresholds, alerting, and integration-dependent hooks — but no deterministic allow/deny decision at execution time.

  • 9 have some form of machine enforcement. But the majority enforce low-level rules (infrastructure, traffic, API design) rather than business-level governance constraints. And several enforce only at design time, not runtime.

Fewer than half can enforce a single rule at runtime. Fewer still can enforce a business policy — the kind a CISO or compliance leader would recognize — at the moment an AI system acts.

How to read the table

The full 21-platform table below maps each platform across four dimensions:

  • Storage Model: How the platform represents governance objects. "Structured" means typed objects with fields and relationships. "Hybrid" means structured records with prose policy content attached. "Code" means policies are inherently machine-readable.

  • Machine-Enforceable: Whether the platform can evaluate and enforce rules without human interpretation. "Yes" means deterministic runtime enforcement. "Partial" means some automation but enforcement depends on integrations or human action. "No" means enforcement is manual.

  • AI Pivot: Whether and how the platform is positioning toward AI governance.

  • Enforcement Scope: What the platform actually enforces — infrastructure rules, data access, API design, or business-level governance constraints.

Tier 1: EA and GRC Platforms

These platforms store governance well. They enforce it poorly.

Ardoq
  • Storage: Structured graph (components, relationships, fields)
  • Enforceable: No — API enables queries, not runtime gating
  • AI Pivot: Early AI for data quality; no AI governance enforcement
  • Enforcement Scope: None — analysis and guidance only

LeanIX
  • Storage: Structured fact sheets with enumerated fields
  • Enforceable: No — reports and lifecycle checks, not runtime controls
  • AI Pivot: GenAI compliance tracking via fact sheets; MCP server for AI agents
  • Enforcement Scope: None — outputs are for human action

MEGA
  • Storage: Hybrid repository (structured objects + document attachments)
  • Enforceable: No — workflow-based consistency checks, not policy evaluation
  • AI Pivot: AI for suggestions and impact analysis; no AI governance module
  • Enforcement Scope: None — semi-automated workflows

BiZZdesign
  • Storage: Hybrid model repository (ArchiMate, BPMN + text descriptions)
  • Enforceable: No — modeling rules only; governance policies are documented, not executable
  • AI Pivot: No dedicated AI governance
  • Enforcement Scope: None — modeling consistency only

ServiceNow GRC
  • Storage: Hybrid (structured policy records + rich-text content)
  • Enforceable: Partial — automated tasking and control indicators; GRC data can drive external enforcement via API
  • AI Pivot: AI for risk insights; Autonomous Workforce agents embed configurable governance
  • Enforcement Scope: Workflow-level enforcement; not a runtime policy engine

RSA Archer
  • Storage: Hybrid (hierarchical records decomposed from documents + prose)
  • Enforceable: No — workflows and attestation; policies remain human-readable
  • AI Pivot: No AI governance product
  • Enforcement Scope: None — cataloging and attestation

IBM OpenPages
  • Storage: Hybrid (structured risk/control objects + document attachments)
  • Enforceable: Partial — connectors and automated control testing; core policies require human interpretation
  • AI Pivot: Watson AI assistance; AI governance via external frameworks
  • Enforcement Scope: Limited — integration-dependent

AuditBoard
  • Storage: Hybrid (GRC records + FairNow AI registries post-acquisition)
  • Enforceable: No — assessments, risk scoring, evidence collection; no runtime gating
  • AI Pivot: FairNow acquisition adds AI registries and dynamic risk assessments
  • Enforcement Scope: None — assessment and reporting

The pattern: Strong catalogs. Structured metadata. Prose policy content. Enforcement is attestation, workflow routing, or "someone reads the policy and decides." These platforms know what the rules are. They cannot make the rules execute.
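The split can be made concrete with a minimal sketch: a hypothetical GRC-style record (the `PolicyRecord` class and its fields are invented for illustration, not any vendor's schema) whose metadata is structured and queryable, but whose actual rule is prose.

```python
from dataclasses import dataclass

@dataclass
class PolicyRecord:
    """Hypothetical GRC-style policy record: typed metadata, prose body."""
    policy_id: str
    owner: str
    status: str   # e.g. "approved", "draft"
    body: str     # the actual rule: free text a human must read

record = PolicyRecord(
    policy_id="POL-042",
    owner="compliance",
    status="approved",
    body="Models handling PHI must receive clinical review before deployment.",
)

# The platform can query the metadata deterministically...
assert record.status == "approved"

# ...but the rule itself lives in `body` as prose. There is nothing a
# machine can evaluate against a deployment request; enforcement is a
# human reading the text and deciding whether to act.
```

That is the catalog-without-enforcement shape: every field except the one that matters is machine-readable.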

Tier 2: Policy-as-Code and API Governance

These platforms enforce well. They lack business context.

OPA / Rego
  • Storage: Code (Rego policies over structured JSON input)
  • Enforceable: Yes — runtime allow/deny decisions for any integrated service
  • AI Pivot: Used in ML pipelines; AI pivot via use cases, not repositioning
  • Enforcement Scope: Infrastructure, authorization, data access — not business policy

HashiCorp Sentinel
  • Storage: Code (Sentinel language over Terraform/infra state)
  • Enforceable: Yes — blocks non-compliant infrastructure changes before apply
  • AI Pivot: Secures AI infrastructure; not marketed as AI governance
  • Enforcement Scope: Infrastructure policy only

Styra DAS
  • Storage: Code (managed OPA/Rego with versioning and metadata)
  • Enforceable: Yes — centralized policy deployment with automated enforcement via OPA
  • AI Pivot: Markets AI security use cases on same foundation
  • Enforcement Scope: Infrastructure and authorization

AWS Cedar
  • Storage: Code (formal policy language over JSON-like data)
  • Enforceable: Yes — fine-grained runtime authorization enforcement
  • AI Pivot: Applied to AI via access control; no distinct AI governance product
  • Enforcement Scope: Authorization — not broader governance

Kong
  • Storage: Structured config (services, routes, plugins)
  • Enforceable: Yes — runtime enforcement via gateway plugins
  • AI Pivot: AI governance via API gateway integrations; no separate module
  • Enforcement Scope: Traffic, authentication, rate limiting — low-level

Stoplight
  • Storage: Hybrid (OpenAPI specs + governance rulesets as JSON/YAML)
  • Enforceable: Yes — design-time linting and blocking of non-compliant APIs
  • AI Pivot: No AI governance pivot
  • Enforcement Scope: API design compliance — design-time only

Postman
  • Storage: Hybrid (collections and governance rules as structured JSON)
  • Enforceable: Yes — design-time enforcement of naming, auth, and security rules
  • AI Pivot: AI features for API design; no AI governance
  • Enforcement Scope: API design compliance — design-time only

The pattern: Policies are code. Enforcement is real — deterministic, automated, in the execution or design path. But the rules these platforms enforce are low-level: infrastructure configuration, API design standards, traffic policies, authorization. They do not speak the language of business governance — regulatory obligations, risk thresholds, clinical review requirements, data use restrictions. The enforcement engine works. It has nothing to enforce at the business layer.
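For contrast, here is the shape of a deterministic allow/deny decision over structured input. This Python sketch mimics an OPA-style check (illustrative only; it is not Rego and not any vendor's API). Notice the vocabulary: paths, methods, roles. The business constraint never appears in the input document.

```python
def evaluate(policy_input: dict) -> bool:
    """Minimal policy-as-code check: deterministic allow/deny over
    a structured input document (illustrative, OPA-style)."""
    # Deny writes to /admin unless the caller holds the "ops" role
    if policy_input["path"].startswith("/admin") and policy_input["method"] != "GET":
        return "ops" in policy_input.get("roles", [])
    return True

# Deterministic, automated, in the execution path:
assert evaluate({"path": "/admin/config", "method": "PUT", "roles": ["dev"]}) is False
assert evaluate({"path": "/admin/config", "method": "PUT", "roles": ["ops"]}) is True

# What this engine cannot express without someone first modeling it:
# "a clinical review is required before an AI recommendation reaches
# a patient." The enforcement machinery works; the business-level
# rule is simply never in its input.
```

The engine is sound; the gap is in what gets fed to it.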

Tier 3: AI Governance Vendors

These platforms have the business context. Enforcement is emerging but incomplete.

Credo AI
  • Storage: Structured registries (models, datasets, risks, controls as typed objects)
  • Enforceable: Partial — maps use cases to controls and metrics; enforcement via ML pipeline integrations
  • AI Pivot: Native AI governance platform
  • Enforcement Scope: Risk assessment, compliance mapping — enforcement is integration-dependent

Holistic AI
  • Storage: Hybrid (structured catalogs + narrative reports)
  • Enforceable: Partial — monitoring and risk scoring; markets "automatic enforcement" via API/SDK
  • AI Pivot: Native AI governance platform
  • Enforcement Scope: Assessment, monitoring, emerging enforcement across models and agents

Arthur AI
  • Storage: Hybrid (structured metrics and model metadata + narrative reports)
  • Enforceable: Partial — monitoring thresholds and guardrails; enforcement via CI/CD integration
  • AI Pivot: MLOps-to-AI-governance pivot
  • Enforcement Scope: Monitoring and alerting — indirect enforcement

Fiddler AI
  • Storage: Hybrid (structured metrics and explainability artifacts + dashboards)
  • Enforceable: Partial — threshold-based alerting and integrations
  • AI Pivot: Explainability-to-AI-governance pivot
  • Enforcement Scope: Monitoring — enforcement is indirect

The pattern: These platforms understand governance as a domain. They store models, risks, controls, and policies as structured objects — not just documents. But enforcement remains integration-dependent. The platform can define a rule and detect a violation. Whether it can block an action at execution time depends on how tightly it connects to the infrastructure where the action occurs. The registry is strong. The enforcement path has gaps.
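The "partial" pattern can be sketched as a monitoring hook that fires after the fact. The `on_prediction` function, its arguments, and the drift threshold below are all hypothetical, not any vendor's API; the point is structural: the rule detects, it does not gate.

```python
# Hypothetical monitoring hook: the platform defines the rule and
# detects the violation, but the action has already executed.
events = []

def on_prediction(model_id: str, drift_score: float, threshold: float = 0.3):
    """Threshold-based alerting: detection, not gating."""
    if drift_score > threshold:
        events.append(("ALERT", model_id, drift_score))  # notify, open a ticket
    # Nothing here returns allow/deny. The prediction was already
    # served; blocking it would require an inline integration in the
    # serving path, which is exactly where the gaps are.

on_prediction("credit-model-v2", drift_score=0.45)
assert events == [("ALERT", "credit-model-v2", 0.45)]
```

Whether that alert ever becomes a blocked action depends on the integration, which is the tier's defining limitation.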

Cross-cutting: OneTrust Data Use Governance

OneTrust deserves separate treatment because it is the closest incumbent to bridging the governance-to-enforcement gap.

Data Use Governance converts policy documents into machine-readable labels and pushes enforcement logic — column masking, row filtering — into data platforms like Snowflake and Databricks. Policies become code. Enforcement happens at the data layer. Audit evidence flows from the platform's native logs.

What makes it notable: OneTrust attacks from the data platform side with genuine policy-to-code conversion and real runtime enforcement. This is structurally aligned with where the market needs to go.

Where it stops: Enforcement scope is data access controls. The full range of agent actions and AI-driven decisions enterprises need to govern — clinical recommendations, hiring decisions, content generation, autonomous workflow execution — sits outside the data access layer. OneTrust governs what data AI can see. It does not govern what AI does with it.
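A minimal sketch of what label-driven, data-layer enforcement looks like, with invented label names and policy actions; this is illustrative of the approach, not OneTrust's actual implementation or APIs.

```python
# Hypothetical label-to-action mapping; in a real deployment the
# masking logic is pushed into the data platform, not application code.
LABEL_POLICIES = {
    "pii.email": "mask",   # replace the value with a fixed token
    "pii.ssn": "deny",     # drop the column entirely
}

def apply_masking(row: dict, labels: dict) -> dict:
    """Enforce column-level policy from machine-readable labels."""
    out = {}
    for col, value in row.items():
        action = LABEL_POLICIES.get(labels.get(col, ""), "allow")
        if action == "mask":
            out[col] = "***"
        elif action == "allow":
            out[col] = value
        # "deny": column omitted from the result
    return out

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
labels = {"email": "pii.email", "ssn": "pii.ssn"}
assert apply_masking(row, labels) == {"name": "Ada", "email": "***"}

# The data layer governs what AI can *see*. It says nothing about
# what AI *does* downstream with a clinical recommendation or a
# hiring decision.
```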

The structural gap

Across all 21 platforms, the same gap appears:

  • EA and GRC platforms store structured metadata about policies, but policy logic remains prose. The rule is something a human reads.

  • Policy-as-code engines enforce deterministic rules at runtime, but lack business context. The rule is something a machine evaluates — at the infrastructure layer only.

  • AI governance vendors have registries and assessments with real business semantics, but enforcement is integration-dependent. The rule is structured but not executable.

  • Runtime security vendors can block actions inline, but enforce low-level policies — not business constraints.

Nobody takes structured business constraints from a governance registry and transpiles them into deterministic policy-as-code for execution-time enforcement. The governance catalog and the policy engine remain disconnected.

This is the bridge the market has not built.
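The missing bridge can be sketched in a few lines: take a structured constraint roughly as a registry might store it and transpile it into a deterministic predicate for the execution path. Every field name and value here is hypothetical; no vendor ships this today, which is the point.

```python
# Hypothetical registry object: a business constraint as structured data.
constraint = {
    "id": "GOV-7",
    "applies_to": {"use_case": "clinical_recommendation"},
    "requires": {"clinical_review": True, "risk_tier_max": 2},
}

def transpile(c: dict):
    """Turn the registry object into an allow/deny predicate that a
    policy engine could evaluate at execution time."""
    def check(request: dict) -> bool:
        if request.get("use_case") != c["applies_to"]["use_case"]:
            return True  # constraint does not apply to this request
        req = c["requires"]
        return (request.get("clinical_review") is True
                and request.get("risk_tier", 99) <= req["risk_tier_max"])
    return check

check = transpile(constraint)
# The business constraint, now enforced deterministically:
assert check({"use_case": "clinical_recommendation",
              "clinical_review": True, "risk_tier": 1}) is True
assert check({"use_case": "clinical_recommendation",
              "clinical_review": False, "risk_tier": 1}) is False
```

In practice the target would be a real policy language (Rego, Cedar) rather than a Python closure, but the translation step, registry object in, executable policy out, is the part no one has built.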

What to watch

This table is a snapshot. The landscape is moving. Three developments will signal whether the gap is closing:

  • Security vendors ingesting governance registries. If VAST, Lasso, or AccuKnox start consuming structured policy objects — not just infrastructure rules — the enforcement layer gains business context.

  • AI governance vendors shipping inline enforcement. If Holistic AI or Credo AI move from integration-dependent hooks to deterministic policy evaluation in the execution path, the registry gains runtime authority.

  • GRC platforms adopting policy-as-code. If ServiceNow, AuditBoard, or IBM OpenPages connect their governance records to OPA, Cedar, or equivalent engines, the catalog gains enforcement capability.

We will update this table as vendors move. Subscribe to The Market Map to receive updates when the landscape shifts — who moved, who didn't, and what it means for your governance stack.


Sources

EA and GRC Platforms

  • Ardoq Generic Regulatory Compliance Metamodel — Ardoq Help

  • LeanIX Governance Factsheet for Compliance Assessments — LeanIX Community

  • ServiceNow GRC Policy & Compliance Management — ServiceNow Docs

  • ServiceNow AI Governance for Autonomous Workforce — TechTarget

  • AuditBoard Acquisition of FairNow — AuditBoard

Policy-as-Code and API Governance

  • Policy as Code Explained — HashiCorp

  • Governance as Code — Spacelift

  • Governing Enterprise Data & AI with Policy-as-Code — Ethyca

Cross-cutting

  • OneTrust Data Use Governance — OneTrust
