GRC Profile
EU AI Act mapping, NIST alignment, control crosswalks, maturity model, and evidence generation.
For compliance officers, risk managers, auditors, legal/privacy teams, Data Protection Officers (DPOs), and anyone responsible for demonstrating that agentic systems are governed in accordance with regulatory requirements.
The key question this profile answers: How do I prove to a regulator or auditor that my agentic systems are governed?
The Compliance Challenge
Regulatory frameworks were designed for traditional AI — static models making predictions with human decision-makers in the loop. Agentic AI breaks these assumptions. Agents select tools, take actions, delegate to other agents, modify their own behavior, and operate at speeds that exceed human oversight capacity.
The compliance challenge for agentic systems has three dimensions:
- Accountability is distributed. When an agent delegates to another agent, which then makes a tool call that produces a side effect, who is accountable? The deployer? The model provider? The tool provider? Regulatory frameworks demand clear accountability chains that agentic architectures often lack.
- Evidence must be structural, not anecdotal. "We reviewed 50 outputs and they looked fine" is not compliance evidence. Regulators need continuous logging, traceable decision provenance, testable policy rules, and reproducible governance behavior.
- The regulatory landscape is converging but not aligned. The EU AI Act, NIST AI RMF, Singapore IMDA, CSA frameworks, and ISO standards all address agentic governance from different angles with different vocabularies. Organizations operating globally need a unified architecture that satisfies multiple regulatory frameworks simultaneously.
AGF addresses all three: clear accountability via Identity & Attribution (#14) and Provenance Chains (#6), structural evidence via Event-Driven Observability (#10) and auditable gates, and multi-framework alignment through explicit regulatory mappings.
EU AI Act Alignment
The EU AI Act provides the most comprehensive regulatory framework for AI systems. AGF maps to its requirements at the article level.
Applicability timing (as of March 2026): Prohibitions and GPAI obligations have applied since 2 February and 2 August 2025 respectively. High-risk system obligations (Art. 6, 9–15) become applicable 2 August 2026. Organizations should begin preparation now.
High-Risk System Requirements (Articles 6, 9–15)
| Article | Requirement | AGF Mapping | Evidence Produced |
|---|---|---|---|
| Art. 6 — Classification | Rules for classifying AI systems as high-risk | Risk classification → ring activation intensity. High-risk = full ring stack activation. | Classification decision record, ring activation policy |
| Art. 9 — Risk management | Risk management system throughout lifecycle | Three-level security model + risk-based ring activation | Security architecture documentation, ring configuration records |
| Art. 10 — Data governance | Data collection, preparation, bias examination | Data Governance & Confidentiality (#17) — classification, lineage, consent, retention | Data classification records, consent logs, lineage traces |
| Art. 11 — Technical documentation | Complete documentation enabling conformity assessment | Provenance Chains (#6) + versioned control-plane state | Full provenance chain for any output, configuration version history |
| Art. 12 — Record-keeping | Automatic recording of events relevant to risk identification and monitoring | Event-Driven Observability (#10) + Provenance Chains (#6) | Structured event logs, ring boundary events, gate decision records |
| Art. 13 — Transparency | Sufficient transparency for deployers to interpret output and detect anomalies | Identity & Attribution (#14) — full identity context on every action. Provenance chains, confidence signals, gate decision explanations. | Identity context records, decision explanation artifacts |
| Art. 14 — Human oversight | Ability to understand capabilities/limitations, monitor operation, intervene/interrupt/halt | Governance Gates (#8) with human interface requirements — evidence presentation, counterfactual framing, rubber-stamping detection. | Gate decision logs, human intervention records, halt/containment records |
| Art. 15 — Accuracy, robustness, cybersecurity | Resilience against data poisoning, adversarial examples, model manipulation, supply chain exploitation | Adversarial Robustness (#15) + Security Architecture + Evaluation & Assurance (#18) | Security test results, red team reports, evaluation suite outcomes |
| Art. 50 — Transparency obligations | Users informed they are interacting with AI; disclosure for AI-generated content | Identity & Attribution (#14) — AI-system identification. Provenance Chains (#6) for content provenance. | AI identification disclosure records, content provenance records |
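Art. 12's automatic record-keeping maps to the structured event logs emitted by Event-Driven Observability (#10). A minimal sketch of what one such record could look like (the `RingEvent` schema and all field names here are illustrative assumptions, not an AGF specification):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RingEvent:
    """One structured record in an Art. 12 audit log (illustrative schema)."""
    event_id: str
    timestamp: str        # ISO 8601, UTC
    ring: int             # ring boundary that emitted the event
    agent_id: str         # Identity & Attribution (#14)
    action: str           # e.g. "tool_call", "gate_decision", "delegation"
    provenance_ref: str   # pointer into the Provenance Chain (#6)
    risk_class: str       # classification under Art. 6

event = RingEvent(
    event_id="evt-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    ring=2,
    agent_id="agent-invoice-reviewer",
    action="gate_decision",
    provenance_ref="prov/req-7741/step-3",
    risk_class="high",
)
print(json.dumps(asdict(event)))  # in practice, ship to append-only storage
```

The point is structural: every event carries identity, provenance, and risk classification, so the log itself is the compliance evidence.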
General-Purpose AI Model Obligations (Articles 51–56)
Most agentic systems are built on general-purpose AI models (GPT, Claude, Gemini, etc.). GPAI obligations have applied since 2 August 2025.
| Article | Requirement | AGF Relevance |
|---|---|---|
| Art. 53 — GPAI provider obligations | Documentation, adversarial testing, copyright compliance | AGF does not address GPAI provider obligations directly. Verify provider compliance and maintain evidence of provider documentation. |
| Art. 55 — Systemic risk obligations | Additional obligations for GPAI models with systemic risk (compute >10²⁵ FLOPs) | AGF's Evaluation & Assurance (#18) and Security Architecture support but do not fully satisfy these obligations. |
Coverage boundary: AGF governs how organizations deploy and operate agentic systems built on GPAI models. Organizations have a dual compliance obligation: GPAI model compliance (provider responsibility) AND high-risk system compliance (deployer responsibility) when the agent system qualifies as high-risk.
What AGF does NOT cover: Art. 43 (conformity assessment procedures), Art. 73 (serious incident reporting), Art. 72 (post-market monitoring), Art. 26 (log retention ≥6 months). These are organizational/regulatory processes. AGF provides the technical evidence substrate these processes require.
Human Oversight: An Honest Constraint
Art. 14 requires effective human oversight. AGF's honest position: oversight is necessary but its efficacy degrades as the capability gap between overseer and system increases (Engels et al., NeurIPS 2025). AGF addresses this by investing in structural guarantees — rings, verification layers, automated policy enforcement — that function whether or not the human overseer catches every issue.
For compliance purposes: document both the human oversight mechanisms (gates, review interfaces, override capabilities) AND the structural safeguards (automated verification, containment, policy enforcement) that supplement human oversight.
NIST AI RMF Alignment
AGF primitives constitute an agentic profile of the NIST AI RMF: runtime mechanisms that partially address each of its functions in the agentic context.
| NIST Function | Scope (NIST) | AGF Covers | AGF Does NOT Cover |
|---|---|---|---|
| GOVERN | Establish and maintain organizational AI risk governance | Ring 2 runtime governance: policy evaluation, gate decisions, delegation authority, Policy as Code (#9) | Organizational risk culture, legal compliance processes, external stakeholder engagement, DEI governance |
| MAP | Context framing, risk identification, categorization | Risk classification + ring activation intensity. Risk tier decision tree. | Broader stakeholder analysis, societal impact assessment beyond runtime classification |
| MEASURE | Quantify, monitor, assess AI risks | Evaluation & Assurance (#18) for pre-deployment. Ring 1 verification + Event-Driven Observability (#10) for runtime. | Organizational risk quantification, bias measurement, fairness metrics beyond runtime |
| MANAGE | Allocate resources, plan responses, manage risks | Trust Ladders (#11) + Bounded Agency (#7) for runtime risk management. Error Handling (#13) for recovery. | Organizational response planning, stakeholder communication, appeal mechanisms, decommissioning |
NIST IR 8596 (Cybersecurity AI Profile)
NIST IR 8596 maps AI agent security onto the six functions of NIST CSF 2.0:
| IR 8596 Focus Area | AGF Mapping |
|---|---|
| Securing AI Systems | Security Fabric + Identity & Attribution (#14) |
| AI-Enabled Cyber Defense | Security Intelligence + Security Response Bus with human oversight |
| Thwarting AI-Enabled Attacks | Adversarial Robustness (#15) |
Key alignment: IR 8596 treats AI agents as security-relevant entities requiring unique identity and agent-specific security controls — not just applications. This validates AGF's position that agent identity must be first-class.
Singapore IMDA Alignment
The IMDA Model AI Governance Framework for Agentic AI (January 2026) is the world's first government-published governance framework specifically for agentic AI.
| IMDA Dimension | Description | AGF Mapping |
|---|---|---|
| 1. Risk Assessment & Bounding | Restrict tool access, sandbox environments, fine-grained permissions | Bounded Agency (#7) + Security Fabric + Agent Environment Governance (#19) workspace scoping |
| 2. Accountability & Human Oversight | Defined roles, HITL for high-stakes/irreversible actions, automation bias safeguards | Governance Gates (#8) + human interface requirements (evidence presentation, rubber-stamping detection) |
| 3. Technical Controls & Testing | Output accuracy, tool usage validation, policy compliance, gradual rollout | Evaluation & Assurance (#18) + Ring 1 verification |
| 4. End-User Responsibility | User training, transparency on agent permissions, active stewardship | Identity & Attribution (#14) transparency requirements |
IMDA explicitly includes "operational environments" as a governance dimension — directly validating Agent Environment Governance (#19).
CSA MAESTRO Alignment
The MAESTRO 7-layer threat model mapped to AGF primitives:
| MAESTRO Layer | AGF Primary Primitives | Ring Mapping |
|---|---|---|
| L1: Foundation Models | Adversarial Robustness (#15), Evaluation & Assurance (#18) | Ring 0 |
| L2: Data Operations | Data Governance (#17), Memory-Augmented Reasoning (#12) | Ring 0 + Fabric |
| L3: Agent Frameworks | Composability Interface, Bounded Agency (#7), Policy as Code (#9), Agent Environment Governance (#19) | Ring 1 + Ring 2 |
| L4: Deployment Infrastructure | Identity & Attribution (#14), Transaction Control (#16), Agent Environment Governance (#19) | Security Fabric |
| L5: Evaluation & Observability | Event-Driven Observability (#10), Validation Loops (#2), Evaluation & Assurance (#18) | Ring 1 + Ring 3 |
| L6: Security & Compliance | Governance Gates (#8), Policy as Code (#9), Trust Ladders (#11) | Ring 2 |
| L7: Agent Ecosystem | Multi-Agent Coordination, Cross-System Trust, DELEGATE signal | Ring 2 + Cross-cutting |
Governance Evidence: What Each Primitive Produces
For auditors: every AGF primitive produces specific, auditable artifacts.
| Primitive | Evidence Artifact | Regulatory Mapping |
|---|---|---|
| #1 Separation of Producer/Verifier | Verification decision records (pass/revise/fail per output) | Art. 15 (accuracy), NIST MEASURE |
| #6 Provenance Chains | Full decision history for any output — every agent, model, decision, input, context | Art. 11 (documentation), Art. 12 (record-keeping) |
| #7 Bounded Agency | Scope definition records, boundary enforcement logs, escalation records | Art. 9 (risk management), IMDA Dim. 1 |
| #8 Governance Gates | Gate trigger records, evidence packages, human decision records, override logs | Art. 14 (human oversight), IMDA Dim. 2 |
| #9 Policy as Code | Versioned policy rules, policy test results, policy change audit trail | Art. 9, NIST GOVERN |
| #10 Event-Driven Observability | Structured event logs from all rings, correlation records | Art. 12 (record-keeping), NIST MEASURE |
| #11 Trust Ladders | Trust level history, promotion/demotion records, calibration justifications | NIST MANAGE, CSA ATF |
| #14 Identity & Attribution | Agent identity records, delegation chains, authentication logs | Art. 13 (transparency), Art. 50 |
| #17 Data Governance | Data classification records, consent logs, PII handling logs, retention/deletion records | Art. 10 (data governance), GDPR |
| #18 Evaluation & Assurance | Pre-deployment test results, red team reports, regression suite outcomes | Art. 15 (accuracy/robustness), NIST MEASURE |
| #19 Agent Environment Governance | Environment composition records, instruction version history, tool provisioning logs | IMDA Dim. 1 (operational environments) |
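To make the Provenance Chains (#6) row concrete, here is a minimal sketch of how a chain entry might link an output back through every agent, model, and decision. `ProvenanceStep`, `chain_for`, and all field names are hypothetical, not an AGF data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceStep:
    """One link in a Provenance Chain (#6); field names are illustrative."""
    step_id: str
    agent_id: str          # who acted (Identity & Attribution #14)
    model: str             # model and version behind this step
    action: str            # decision, tool call, or delegation
    parent: Optional[str]  # previous step id; None at the chain root

def chain_for(step: ProvenanceStep, steps: dict) -> list:
    """Walk from an output back to the chain root: Art. 11/12 evidence."""
    chain = [step]
    while chain[-1].parent is not None:
        chain.append(steps[chain[-1].parent])
    return list(reversed(chain))

root = ProvenanceStep("s1", "agent-planner", "model-a@1.0", "plan", None)
mid = ProvenanceStep("s2", "agent-worker", "model-a@1.0", "tool_call", "s1")
out = ProvenanceStep("s3", "agent-verifier", "model-b@2.1", "verify", "s2")
steps = {s.step_id: s for s in (root, mid, out)}
history = chain_for(out, steps)  # full decision history for the output
```

An auditor asking "how was this output produced?" gets the whole chain, root first, rather than an anecdotal reconstruction.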
Governance Gates and Human Oversight
Governance Gates (#8) are the primary mechanism for human-in-the-loop oversight in AGF.
How Gates Work
- Execution reaches a defined decision point (Ring 2 determines a gate is required)
- Execution pauses — the output, context, and evidence are frozen
- A human reviewer sees: the output, the provenance chain, the policy evaluation, and the risk classification
- The reviewer decides: APPROVE, REJECT, MODIFY, DEFER, or ESCALATE
- The decision is recorded with full provenance (who decided, when, what evidence was presented, what they decided)
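The final step, recording the decision with full provenance, can be sketched as a structured record. `GateDecision`, `GateRecord`, `resolve_gate`, and the field names are illustrative assumptions, not an AGF API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class GateDecision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    MODIFY = "modify"
    DEFER = "defer"
    ESCALATE = "escalate"

@dataclass
class GateRecord:
    """Who decided, when, what evidence was presented, what they decided."""
    gate_id: str
    reviewer_id: str
    decision: GateDecision
    evidence_refs: list   # frozen output, provenance chain, policy evaluation
    decided_at: str       # ISO 8601, UTC

def resolve_gate(gate_id, reviewer_id, decision, evidence_refs):
    """Record a human gate resolution with full provenance."""
    return GateRecord(gate_id, reviewer_id, decision, list(evidence_refs),
                      datetime.now(timezone.utc).isoformat())

record = resolve_gate("gate-42", "reviewer-7", GateDecision.APPROVE,
                      ["output-snap-9", "prov-chain-9", "policy-eval-9"])
```

Because the evidence references are frozen before the reviewer sees them, the record proves not just what was decided but what the reviewer had in front of them, which is what Art. 14 oversight evidence requires.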
Human Interface Requirements
AGF specifies requirements for how gates present information to human reviewers:
- Evidence presentation — the reviewer sees structured evidence, not raw data
- Counterfactual framing — "what would happen if you approve vs. reject"
- Rubber-stamping detection — if a reviewer approves too quickly or too uniformly, Intelligence flags it
- Timeout behavior — fail-closed by default (if no decision within the window, execution halts)
- Cognitive load management — batch approval rules for routine decisions, escalation for novel ones
- Cooling-off periods — rate limits on approval requests to prevent fatigue exploitation
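The fail-closed timeout behavior above can be sketched as follows; `await_gate_decision` is a hypothetical helper written for this illustration, not an AGF API:

```python
import time
from typing import Callable, Optional

def await_gate_decision(poll: Callable[[], Optional[str]],
                        timeout_s: float = 300.0,
                        interval_s: float = 1.0) -> str:
    """Wait for a human decision; fail closed if the window expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        decision = poll()
        if decision is not None:
            return decision   # reviewer acted within the window
        time.sleep(interval_s)
    return "HALT"             # fail-closed: no decision means no execution

# A reviewer who responds is honored; silence halts execution.
approved = await_gate_decision(lambda: "APPROVE", timeout_s=0.1, interval_s=0.01)
halted = await_gate_decision(lambda: None, timeout_s=0.05, interval_s=0.01)
```

The design choice worth noting is the default: an expired window returns a halt, never an implicit approval, so reviewer unavailability can never silently authorize an action.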
Policy as Code
Governance rules are code — versioned, tested, deployed, and auditable. This is the foundation of structural compliance.
What Policy as Code gives you:
- Auditability: Every policy rule is a versioned artifact. Regulators can inspect what rules governed a decision at any point in time.
- Testability: Policy rules have unit tests. You can verify "this rule would have blocked X" before deploying it.
- Change audit trail: Every policy change is a commit with author, timestamp, and justification.
- Reproducibility: Same policy + same input = same decision. No human judgment variation.
What it does NOT give you: Correctness of the policy itself. Well-formed Policy as Code can still encode incorrect, biased, or incomplete governance rules. Human review of policy content — not just policy mechanics — remains essential.
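As an illustration of a versioned, testable policy rule: the thresholds, decision strings, and `evaluate` function below are invented for this sketch, not AGF-defined, but they show how a rule becomes an auditable artifact with its own unit test:

```python
# Hypothetical policy rule, versioned in source control alongside its tests.
POLICY_VERSION = "payments-v3.2.0"

def evaluate(action: dict) -> str:
    """Deterministic: same policy + same input = same decision."""
    if action["type"] == "payment" and action["amount_eur"] > 10_000:
        return "REQUIRE_GATE"   # route to a Governance Gate (#8)
    if action["target"] not in action["allowed_targets"]:
        return "DENY"           # Bounded Agency (#7) scope violation
    return "ALLOW"

def test_large_payment_gated():
    """Verify the rule would have blocked X before deploying it."""
    action = {"type": "payment", "amount_eur": 50_000,
              "target": "acct-1", "allowed_targets": ["acct-1"]}
    assert evaluate(action) == "REQUIRE_GATE"

test_large_payment_gated()
```

A regulator can inspect `POLICY_VERSION` in the change history to see exactly which rule governed any past decision, and the test suite documents what the rule was expected to block. The caveat above still applies: these tests prove the rule behaves as written, not that the written rule is the right one.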
Governance Maturity Model
| Level | Summary | Characteristics |
|---|---|---|
| 1 — Ad Hoc | No structured governance | Manual review, no logging, no policy enforcement |
| 2 — Reactive | Basic controls | Bounded Agency (#7), Identity (#14), basic event logging |
| 3 — Managed | Structured governance | Full Ring 1 verification, Policy as Code (#9), Governance Gates (#8), Provenance Chains (#6) |
| 4 — Measured | Data-driven governance | Trust Ladders (#11) calibrated from empirical data, behavioral baselines, anomaly detection |
| 5 — Optimizing | Self-improving governance | Self-Improving Cycles (#3), Environment Optimization Loop, Ring 3 driving policy updates |
For EU AI Act high-risk compliance: Level 3 is the minimum. Levels 4–5 represent best practice for high-stakes domains.
Compliance Assessment Checklist
Accountability:
- Agent identity traceable to deployment configuration (Identity & Attribution #14)
- Delegation chains bounded and auditable
- Human decision records for all gate resolutions
Evidence infrastructure:
- Structured event log from all ring boundaries (#10)
- Provenance chain complete for every material output (#6)
- Policy rules versioned and in source control (#9)
- Pre-deployment evaluation suite documented (#18)
EU AI Act (if high-risk):
- Risk classification documented and mapped to ring activation policy (Art. 6)
- Data governance records — classification, consent, lineage (Art. 10)
- Human oversight mechanisms documented (Art. 14)
- Technical documentation sufficient for conformity assessment (Art. 11)
NIST AI RMF:
- Runtime governance coverage documented (GOVERN)
- Risk classification and ring activation policy (MAP)
- Evaluation suite with quantitative metrics (MEASURE)
- Trust Ladders and escalation procedures (MANAGE)
Related: Security Profile — threat defense architecture. Observability Profile — evidence collection and operational monitoring. Platform Profile — deployment infrastructure underpinning the evidence substrate.