GRC Profile

EU AI Act mapping, NIST alignment, control crosswalks, maturity model, and evidence generation.

For compliance officers, risk managers, auditors, legal/privacy teams, Data Protection Officers (DPOs), and anyone responsible for demonstrating that agentic systems are governed in accordance with regulatory requirements.

The key question this profile answers: How do I prove to a regulator or auditor that my agentic systems are governed?

The Compliance Challenge

Regulatory frameworks were designed for traditional AI — static models making predictions with human decision-makers in the loop. Agentic AI breaks these assumptions. Agents select tools, take actions, delegate to other agents, modify their own behavior, and operate at speeds that exceed human oversight capacity.

The compliance challenge for agentic systems has three dimensions:

  1. Accountability is distributed. When an agent delegates to another agent, which then makes a tool call that produces a side effect — who is accountable? The deployer? The model provider? The tool provider? Regulatory frameworks demand clear accountability chains that agentic architectures often lack.

  2. Evidence must be structural, not anecdotal. "We reviewed 50 outputs and they looked fine" is not compliance evidence. Regulators need: continuous logging, traceable decision provenance, testable policy rules, and reproducible governance behavior.

  3. The regulatory landscape is converging but not aligned. The EU AI Act, NIST AI RMF, Singapore IMDA, CSA frameworks, and ISO standards all address agentic governance from different angles with different vocabularies. Organizations operating globally need a unified architecture that satisfies multiple regulatory frameworks simultaneously.

AGF addresses all three: clear accountability via Identity & Attribution (#14) and Provenance Chains (#6), structural evidence via Event-Driven Observability (#10) and auditable gates, and multi-framework alignment through explicit regulatory mappings.
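
The structural evidence described above can be sketched as a minimal structured log record. The field set below (agent identity, delegation chain, ring, policy version) is illustrative, not a normative AGF schema:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class GovernanceEvent:
    """One structured, machine-readable evidence record (illustrative fields)."""
    agent_id: str                 # who acted (Identity & Attribution)
    delegation_chain: list[str]   # accountability path: deployer -> orchestrator -> agent
    ring: int                     # which ring boundary emitted the event
    action: str                   # what was attempted
    decision: str                 # allow / deny / gate
    policy_version: str           # which versioned policy rules applied
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self))

event = GovernanceEvent(
    agent_id="invoice-agent-01",
    delegation_chain=["deployer", "orchestrator", "invoice-agent-01"],
    ring=2,
    action="tool:payments.create",
    decision="gate",
    policy_version="policies@4f2a1c9",
)
print(event.to_json())
```

Because every record carries identity, delegation, and policy version, an auditor can answer "who did what, under which rules" without sampling outputs.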

EU AI Act Alignment

The EU AI Act provides the most comprehensive regulatory framework for AI systems. AGF maps to its requirements at the article level.

Applicability timing (as of March 2026): Prohibitions and GPAI obligations have applied since 2 February and 2 August 2025 respectively. High-risk system obligations (Art. 6, 9–15) become applicable 2 August 2026. Organizations should begin preparation now.

High-Risk System Requirements (Articles 6, 9–15) and Transparency (Article 50)

| Article | Requirement | AGF Mapping | Evidence Produced |
|---|---|---|---|
| Art. 6 — Classification | Rules for classifying AI systems as high-risk | Risk classification → ring activation intensity. High-risk = full ring stack activation. | Classification decision record, ring activation policy |
| Art. 9 — Risk management | Risk management system throughout lifecycle | Three-level security model + risk-based ring activation | Security architecture documentation, ring configuration records |
| Art. 10 — Data governance | Data collection, preparation, bias examination | Data Governance & Confidentiality (#17) — classification, lineage, consent, retention | Data classification records, consent logs, lineage traces |
| Art. 11 — Technical documentation | Complete documentation enabling conformity assessment | Provenance Chains (#6) + versioned control-plane state | Full provenance chain for any output, configuration version history |
| Art. 12 — Record-keeping | Automatic recording of events relevant to risk identification and monitoring | Event-Driven Observability (#10) + Provenance Chains (#6) | Structured event logs, ring boundary events, gate decision records |
| Art. 13 — Transparency | Sufficient transparency for deployers to interpret output and detect anomalies | Identity & Attribution (#14) — full identity context on every action. Provenance chains, confidence signals, gate decision explanations. | Identity context records, decision explanation artifacts |
| Art. 14 — Human oversight | Ability to understand capabilities/limitations, monitor operation, intervene/interrupt/halt | Governance Gates (#8) with human interface requirements — evidence presentation, counterfactual framing, rubber-stamping detection. | Gate decision logs, human intervention records, halt/containment records |
| Art. 15 — Accuracy, robustness, cybersecurity | Resilience against data poisoning, adversarial examples, model manipulation, supply chain exploitation | Adversarial Robustness (#15) + Security Architecture + Evaluation & Assurance (#18) | Security test results, red team reports, evaluation suite outcomes |
| Art. 50 — Transparency obligations | Users informed they are interacting with AI; disclosure for AI-generated content | Identity & Attribution (#14) — AI-system identification. Provenance Chains (#6) for content provenance. | AI identification disclosure records, content provenance records |
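
The Art. 6 mapping (risk classification driving ring activation intensity) might be sketched as a simple fail-closed lookup; tier names and ring assignments here are assumptions, not normative AGF configuration:

```python
# Hypothetical risk tiers mapped to the AGF rings a deployment activates.
# Tier names and ring assignments are illustrative, not normative.
RING_ACTIVATION = {
    "minimal": {1},           # Ring 1 verification only
    "limited": {1, 2},        # plus Ring 2 runtime governance
    "high":    {0, 1, 2, 3},  # full ring stack (EU AI Act high-risk)
}

def rings_for(risk_tier: str) -> set[int]:
    """Return the rings to activate; unknown tiers fail closed to the full stack."""
    return RING_ACTIVATION.get(risk_tier, {0, 1, 2, 3})

assert rings_for("high") == {0, 1, 2, 3}
assert rings_for("unclassified") == {0, 1, 2, 3}  # fail closed
```

The classification decision and the mapping table itself would both be versioned, producing the "classification decision record" and "ring activation policy" evidence the table names.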

General-Purpose AI Model Obligations (Articles 51–56)

Most agentic systems are built on general-purpose AI models (GPT, Claude, Gemini, etc.). GPAI obligations have applied since 2 August 2025.

| Article | Requirement | AGF Relevance |
|---|---|---|
| Art. 53 — GPAI provider obligations | Documentation, adversarial testing, copyright compliance | AGF does not address GPAI provider obligations directly. Verify provider compliance and maintain evidence of provider documentation. |
| Art. 55 — Systemic risk obligations | Additional obligations for GPAI models with systemic risk (compute >10²⁵ FLOPs) | AGF's Evaluation & Assurance (#18) and Security Architecture support but do not fully satisfy these obligations. |

Coverage boundary: AGF governs how organizations deploy and operate agentic systems built on GPAI models. Organizations have a dual compliance obligation: GPAI model compliance (provider responsibility) AND high-risk system compliance (deployer responsibility) when the agent system qualifies as high-risk.

What AGF does NOT cover: Art. 43 (conformity assessment procedures), Art. 73 (serious incident reporting), Art. 72 (post-market monitoring), Art. 26 (log retention ≥6 months). These are organizational/regulatory processes. AGF provides the technical evidence substrate these processes require.

Human Oversight: An Honest Constraint

Art. 14 requires effective human oversight. AGF's honest position: oversight is necessary but its efficacy degrades as the capability gap between overseer and system increases (Engels et al., NeurIPS 2025). AGF addresses this by investing in structural guarantees — rings, verification layers, automated policy enforcement — that function whether or not the human overseer catches every issue.

For compliance purposes: document both the human oversight mechanisms (gates, review interfaces, override capabilities) AND the structural safeguards (automated verification, containment, policy enforcement) that supplement human oversight.

NIST AI RMF Alignment

AGF primitives constitute an agentic profile of the NIST AI RMF — runtime mechanisms that partially address its four functions in the agentic context.

| NIST Function | Scope (NIST) | AGF Covers | AGF Does NOT Cover |
|---|---|---|---|
| GOVERN | Establish and maintain organizational AI risk governance | Ring 2 runtime governance: policy evaluation, gate decisions, delegation authority, Policy as Code (#9) | Organizational risk culture, legal compliance processes, external stakeholder engagement, DEI governance |
| MAP | Context framing, risk identification, categorization | Risk classification + ring activation intensity. Risk tier decision tree. | Broader stakeholder analysis, societal impact assessment beyond runtime classification |
| MEASURE | Quantify, monitor, assess AI risks | Evaluation & Assurance (#18) for pre-deployment. Ring 1 verification + Event-Driven Observability (#10) for runtime. | Organizational risk quantification, bias measurement, fairness metrics beyond runtime |
| MANAGE | Allocate resources, plan responses, manage risks | Trust Ladders (#11) + Bounded Agency (#7) for runtime risk management. Error Handling (#13) for recovery. | Organizational response planning, stakeholder communication, appeal mechanisms, decommissioning |

NIST IR 8596 (Cybersecurity AI Profile)

NIST IR 8596 maps AI agent security onto NIST CSF 2.0's six functions:

| IR 8596 Focus Area | AGF Mapping |
|---|---|
| Securing AI Systems | Security Fabric + Identity & Attribution (#14) |
| AI-Enabled Cyber Defense | Security Intelligence + Security Response Bus with human oversight |
| Thwarting AI-Enabled Attacks | Adversarial Robustness (#15) |

Key alignment: IR 8596 treats AI agents as security-relevant entities requiring unique identity and agent-specific security controls — not just applications. This validates AGF's position that agent identity must be first-class.

Singapore IMDA Alignment

The IMDA Model AI Governance Framework for Agentic AI (January 2026) is the world's first government-published governance framework specifically for agentic AI.

| IMDA Dimension | Description | AGF Mapping |
|---|---|---|
| 1. Risk Assessment & Bounding | Restrict tool access, sandbox environments, fine-grained permissions | Bounded Agency (#7) + Security Fabric + Agent Environment Governance (#19) workspace scoping |
| 2. Accountability & Human Oversight | Defined roles, HITL for high-stakes/irreversible actions, automation bias safeguards | Governance Gates (#8) + human interface requirements (evidence presentation, rubber-stamping detection) |
| 3. Technical Controls & Testing | Output accuracy, tool usage validation, policy compliance, gradual rollout | Evaluation & Assurance (#18) + Ring 1 verification |
| 4. End-User Responsibility | User training, transparency on agent permissions, active stewardship | Identity & Attribution (#14) transparency requirements |

IMDA explicitly includes "operational environments" as a governance dimension — directly validating Agent Environment Governance (#19).

CSA MAESTRO Alignment

The MAESTRO 7-layer threat model mapped to AGF primitives:

| MAESTRO Layer | AGF Primary Primitives | Ring Mapping |
|---|---|---|
| L1: Foundation Models | Adversarial Robustness (#15), Evaluation & Assurance (#18) | Ring 0 |
| L2: Data Operations | Data Governance (#17), Memory-Augmented Reasoning (#12) | Ring 0 + Fabric |
| L3: Agent Frameworks | Composability Interface, Bounded Agency (#7), Policy as Code (#9), Agent Environment Governance (#19) | Ring 1 + Ring 2 |
| L4: Deployment Infrastructure | Identity & Attribution (#14), Transaction Control (#16), Agent Environment Governance (#19) | Security Fabric |
| L5: Evaluation & Observability | Event-Driven Observability (#10), Validation Loops (#2), Evaluation & Assurance (#18) | Ring 1 + Ring 3 |
| L6: Security & Compliance | Governance Gates (#8), Policy as Code (#9), Trust Ladders (#11) | Ring 2 |
| L7: Agent Ecosystem | Multi-Agent Coordination, Cross-System Trust, DELEGATE signal | Ring 2 + Cross-cutting |

Governance Evidence: What Each Primitive Produces

For auditors: every AGF primitive produces specific, auditable artifacts.

| Primitive | Evidence Artifact | Regulatory Mapping |
|---|---|---|
| #1 Separation of Producer/Verifier | Verification decision records (pass/revise/fail per output) | Art. 15 (accuracy), NIST MEASURE |
| #6 Provenance Chains | Full decision history for any output — every agent, model, decision, input, context | Art. 11 (documentation), Art. 12 (record-keeping) |
| #7 Bounded Agency | Scope definition records, boundary enforcement logs, escalation records | Art. 9 (risk management), IMDA Dim. 1 |
| #8 Governance Gates | Gate trigger records, evidence packages, human decision records, override logs | Art. 14 (human oversight), IMDA Dim. 2 |
| #9 Policy as Code | Versioned policy rules, policy test results, policy change audit trail | Art. 9, NIST GOVERN |
| #10 Event-Driven Observability | Structured event logs from all rings, correlation records | Art. 12 (record-keeping), NIST MEASURE |
| #11 Trust Ladders | Trust level history, promotion/demotion records, calibration justifications | NIST MANAGE, CSA ATF |
| #14 Identity & Attribution | Agent identity records, delegation chains, authentication logs | Art. 13 (transparency), Art. 50 |
| #17 Data Governance | Data classification records, consent logs, PII handling logs, retention/deletion records | Art. 10 (data governance), GDPR |
| #18 Evaluation & Assurance | Pre-deployment test results, red team reports, regression suite outcomes | Art. 15 (accuracy/robustness), NIST MEASURE |
| #19 Agent Environment Governance | Environment composition records, instruction version history, tool provisioning logs | IMDA Dim. 1 (operational environments) |
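
One way to make the Provenance Chains artifact (#6) tamper-evident, which the Art. 11/12 mappings implicitly rely on, is to hash-link each decision record to its predecessor. A minimal sketch, with illustrative field names:

```python
import hashlib
import json

def chain_entry(parent_hash: str, record: dict) -> dict:
    """Append-only provenance entry: each record commits to its parent's hash."""
    body = json.dumps({"parent": parent_hash, **record}, sort_keys=True)
    return {"parent": parent_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = chain_entry("0" * 64, {"agent": "planner", "decision": "delegate"})
step2 = chain_entry(genesis["hash"], {"agent": "worker", "decision": "tool_call"})

def verify(entries: list[dict]) -> bool:
    """Recompute every hash and check each entry points at its predecessor."""
    prev = "0" * 64
    for e in entries:
        body = json.dumps({"parent": prev, **e["record"]}, sort_keys=True)
        if e["parent"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

assert verify([genesis, step2])
```

Editing any earlier record changes its hash and breaks every later link, so an auditor can detect after-the-fact alteration of the decision history.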

Governance Gates and Human Oversight

Governance Gates (#8) are the primary mechanism for human-in-the-loop oversight in AGF.

How Gates Work

  1. Execution reaches a defined decision point (Ring 2 determines a gate is required)
  2. Execution pauses — the output, context, and evidence are frozen
  3. A human reviewer sees: the output, the provenance chain, the policy evaluation, and the risk classification
  4. The reviewer decides: APPROVE, REJECT, MODIFY, DEFER, or ESCALATE
  5. The decision is recorded with full provenance (who decided, when, what evidence was presented, what they decided)

Human Interface Requirements

AGF specifies requirements for how gates present information to human reviewers:

  • Evidence presentation — the reviewer sees structured evidence, not raw data
  • Counterfactual framing — "what would happen if you approve vs. reject"
  • Rubber-stamping detection — if a reviewer approves too quickly or too uniformly, Intelligence flags it
  • Timeout behavior — fail-closed by default (if no decision within the window, execution halts)
  • Cognitive load management — batch approval rules for routine decisions, escalation for novel ones
  • Cooling-off periods — rate limits on approval requests to prevent fatigue exploitation
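
Rubber-stamping detection, for example, can be approximated with simple statistics over a reviewer's decision history. A sketch with invented thresholds (calibrate against real review data):

```python
from statistics import mean

def flags_rubber_stamping(reviews: list[dict],
                          min_seconds: float = 5.0,
                          max_approval_rate: float = 0.95) -> bool:
    """Flag a reviewer who decides too quickly or approves too uniformly.
    Thresholds here are illustrative, not AGF-specified values."""
    if len(reviews) < 10:
        return False  # not enough history to judge
    too_fast = mean(r["seconds"] for r in reviews) < min_seconds
    approval_rate = sum(r["decision"] == "approve" for r in reviews) / len(reviews)
    return too_fast or approval_rate > max_approval_rate

history = [{"decision": "approve", "seconds": 1.2} for _ in range(20)]
assert flags_rubber_stamping(history)  # uniform one-second approvals get flagged
```

A flag would not block the reviewer; it would route the pattern to Intelligence for follow-up, consistent with the oversight caveat discussed above.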

Policy as Code

Governance rules are code — versioned, tested, deployed, and auditable. This is the foundation of structural compliance.

What Policy as Code gives you:

  • Auditability: Every policy rule is a versioned artifact. Regulators can inspect what rules governed a decision at any point in time.
  • Testability: Policy rules have unit tests. You can verify "this rule would have blocked X" before deploying it.
  • Change audit trail: Every policy change is a commit with author, timestamp, and justification.
  • Reproducibility: Same policy + same input = same decision. No human judgment variation.
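
A minimal illustration of these properties: a policy rule as a pure, testable function. The rule content and threshold are invented for the example:

```python
def policy_blocks_payment(action: dict) -> bool:
    """Versioned policy rule: block unattended payments above a threshold.
    Rule content and the 10,000 threshold are illustrative only."""
    return (action["type"] == "payment"
            and action["amount"] > 10_000
            and not action.get("human_approved", False))

# Testability: verify "this rule would have blocked X" before deploying it.
assert policy_blocks_payment({"type": "payment", "amount": 25_000})
assert not policy_blocks_payment({"type": "payment", "amount": 25_000,
                                  "human_approved": True})
assert not policy_blocks_payment({"type": "email", "amount": 0})

# Reproducibility: same policy + same input = same decision.
action = {"type": "payment", "amount": 25_000}
assert policy_blocks_payment(action) == policy_blocks_payment(action)
```

Because the rule is a pure function in source control, every change is a reviewable commit and every historical decision can be replayed against the exact rule version that produced it.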

What it does NOT give you: Correctness of the policy itself. Well-formed Policy as Code can still encode incorrect, biased, or incomplete governance rules. Human review of policy content — not just policy mechanics — remains essential.

Governance Maturity Model

| Level | Description | Characteristics |
|---|---|---|
| 1 — Ad Hoc | No structured governance | Manual review, no logging, no policy enforcement |
| 2 — Reactive | Basic controls | Bounded Agency (#7), Identity (#14), basic event logging |
| 3 — Managed | Structured governance | Full Ring 1 verification, Policy as Code (#9), Governance Gates (#8), Provenance Chains (#6) |
| 4 — Measured | Data-driven governance | Trust Ladders (#11) calibrated from empirical data, behavioral baselines, anomaly detection |
| 5 — Optimizing | Self-improving governance | Self-Improving Cycles (#3), Environment Optimization Loop, Ring 3 driving policy updates |

For EU AI Act high-risk compliance: Level 3 is the minimum. Levels 4–5 represent best practice for high-stakes domains.

Compliance Assessment Checklist

Accountability:

  • Agent identity traceable to deployment configuration (Identity & Attribution #14)
  • Delegation chains bounded and auditable
  • Human decision records for all gate resolutions

Evidence infrastructure:

  • Structured event log from all ring boundaries (#10)
  • Provenance chain complete for every material output (#6)
  • Policy rules versioned and in source control (#9)
  • Pre-deployment evaluation suite documented (#18)

EU AI Act (if high-risk):

  • Risk classification documented and mapped to ring activation policy (Art. 6)
  • Data governance records — classification, consent, lineage (Art. 10)
  • Human oversight mechanisms documented (Art. 14)
  • Technical documentation sufficient for conformity assessment (Art. 11)

NIST AI RMF:

  • Runtime governance coverage documented (GOVERN)
  • Risk classification and ring activation policy (MAP)
  • Evaluation suite with quantitative metrics (MEASURE)
  • Trust Ladders and escalation procedures (MANAGE)

Related: Security Profile — threat defense architecture. Observability Profile — evidence collection and operational monitoring. Platform Profile — deployment infrastructure underpinning the evidence substrate.
