Comparison to other frameworks
This page compares the ARC Framework with six major AI governance frameworks and standards: NIST AI RMF 1.0, the EU AI Act, Dimensional Governance, OWASP Agentic AI, Google SAIF 2.0, and CSA MAESTRO. Each framework is evaluated against seven criteria: primary audience, unit of analysis, prescriptiveness, coverage of agentic AI, evidence and evaluation, typical artifacts, and control selection logic.
How to use the Comparison Table
- Each cell pairs a short rating with a brief rationale explaining it.
- Ratings range from strong/desirable through moderate/conceptual to limited/general.
| Criterion | **ARC Framework**<br>Capability-centric governance for agentic AI systems | **NIST AI RMF 1.0**<br>Lifecycle risk management for AI systems | **EU AI Act**<br>Risk-based legislation for AI in Europe | **Dimensional Governance**<br>Adaptive governance via continuous dimensions | **OWASP Agentic AI**<br>Threat-modelling for agentic attack surfaces | **Google SAIF 2.0**<br>Defence-in-depth principles for enterprise agents | **CSA MAESTRO**<br>Layered security architecture for agentic systems |
|---|---|---|---|---|---|---|---|
| **Primary Audience**<br>Who is expected to use the framework | **Org & security teams**: Organisation governance teams, AI developers, product managers, security engineers, and compliance officers working cross-functionally on agentic systems. Designed for teams managing risks throughout the organisation and product lifecycle who need structured governance without heavy certification overhead. | **Broad AI actors**: AI actors (organisations and individuals) across all sectors voluntarily using the framework. Use-case agnostic and sector-neutral, intended as foundational guidance for anyone designing, developing, or deploying AI systems. Broad applicability but less specialised for agentic contexts. | **Regulators & providers/users**: Providers and users of AI in the EU; regulators and compliance officers enforcing the Act. Mandatory for high-risk AI system providers placing products on the EU market and for deployers using AI in the EU. Clear regulatory audience with legal obligations. | **Multi-stakeholder (policy)**: Policymakers, oversight bodies, and governance leads designing governance structures. More conceptual than operationally focused, serving those setting governance policy and defining oversight mechanisms. Less useful for frontline implementation teams. | **Security engineers & practitioners**: Security engineers, red teams, and blue teams building and defending agentic applications. Designed for practitioners responsible for securing agentic systems, with a focus on threat identification, attack surface analysis, and technical mitigations. | **CISOs & enterprise builders**: CISOs, security architects, and enterprise builders integrating agentic AI into enterprise security architectures. Bridges security principles with practical enterprise deployment considerations; designed for security leadership. | **Security engineers, researchers & developers**: Security engineers, researchers, and developers working on agentic AI systems. Focuses on practitioners who need layered architectural guidance for building and securing agentic systems with defence-in-depth approaches. |
| **Unit of Analysis**<br>What is the main governance object or focal point | **Capability–risk mapping**: Capabilities, with components and system design. Organises governance around what the system can do (capabilities) and maps these to specific risks and controls. Focuses on the powers the system possesses (e.g., internet access, file system access, code execution) as the primary unit for risk analysis. | **Lifecycle risk functions**: AI systems and lifecycle stages organised into four functions: Govern, Map, Measure, and Manage. Categories and subcategories within each function provide structure. The lifecycle-oriented approach addresses systems broadly across development and deployment phases. | **Risk categories (tiered)**: AI systems defined by risk category: unacceptable, high, limited, and minimal risk. High-risk AI triggers compliance obligations. Agentic systems are recognised as spanning ecosystems with continuous risk profiles requiring dynamic oversight. | **Continuous dimensions**: Continuous dimensions of decision authority, process autonomy, and accountability. Analyses how control and responsibility shift as systems move along these dimensions towards greater autonomy. A conceptual lens rather than concrete system components. | **Threat/attack surfaces**: Threats and attack surfaces for agent workflows organised by component: reasoning loops, memory systems, tool use, identity/permissions, oversight mechanisms, and multi-agent interactions. A security-first perspective analysing where vulnerabilities exist. | **Principles & control families**: Principles and control families across the agent lifecycle. Organises security around core principles (human controller, limited powers, observability) and associated control families. Strategic rather than granular component analysis. | **Layered architecture**: Layered security architecture analysing threats at each layer of agentic systems. Examines security across the model, agent, and application layers with layer-specific threat models and controls. Defence-in-depth architectural view. |
| **Prescriptiveness**<br>Level of detailed requirements vs. high-level principles | **Medium–High**: Structured risk→control mapping with implementation checklists and governance workflows. Provides detailed guidance on selecting controls based on capability profiles and risk thresholds. Not legally binding or certifiable, but highly structured with clear decision points. | **Low**: Voluntary guidance with tasks and outcomes but not prescriptive about implementation. Provides categories, subcategories, and suggested actions that organisations adapt to their context. Not certifiable or legally binding. High-level principles requiring significant interpretation. | **High**: Legally binding requirements with strict obligations for high-risk AI. Mandatory policies, documentation, conformity assessments, logging, transparency, and human oversight. Non-compliance results in significant penalties. The most prescriptive framework, with clear legal requirements. | **Low–Medium**: Conceptual thresholds with limited concrete controls. Provides a framework for thinking about governance dimensions and thresholds but offers few specific implementation requirements or control specifications. Requires significant interpretation and operationalisation. | **Medium–High**: Enumerated threats with specific mitigation strategies. Provides a catalogue of concrete threats and corresponding mitigation techniques. More prescriptive than conceptual frameworks, with actionable security guidance, though it allows implementation flexibility. | **Medium**: Principle-led control families rather than detailed requirements. Provides security principles and control families that give security architects structure for designing implementations, but requires organisational interpretation and adaptation. Not a full specification. | **Medium**: Layer-specific threat models with risk-based control guidance. Provides a structured approach to identifying and mitigating threats at each architectural layer. Balances prescriptive threat enumeration with flexibility in control selection and implementation. |
| **Coverage of Agentic AI**<br>How explicitly the framework addresses agentic AI capabilities | **Strong – capability-driven**: Explicitly designed for agentic AI, with a capability lens tailored to agentic powers, autonomy, tool use, and planning. Directly addresses agent-specific risks including excessive agency, tool misuse, goal misalignment, and uncontrolled autonomy. Purpose-built for agentic systems. | **General autonomy only**: General AI coverage, not agentic-specific. Addresses AI systems broadly but does not explicitly cover agentic autonomy, tool use, planning, or multi-agent systems. Users must interpret lifecycle guidance for agentic contexts without specific tailoring. | **Implicit via risk-based approach**: Emerging agentic guidance under development. The Act is adapting to agentic AI with guidance on continuous risk management, dynamic guardrails, and real-time monitoring. Recognises the challenges of autonomous systems, but standards are still being developed and coverage is evolving. | **Conceptual – governance dynamics**: Conceptual focus on autonomy and governance dynamics through dimensions of authority, autonomy, and accountability. Addresses the governance implications of agent autonomy but lacks specific guidance on agentic capabilities like tool use, planning, or multi-agent systems. | **Strong – agent-specific threats**: Explicitly addresses agent-specific threats across reasoning, memory, tools, identity, oversight, and multi-agent interactions. Strong coverage of agentic components including tool misuse, memory poisoning, multi-agent trust issues, and reasoning-loop manipulation. Purpose-built for agentic security. | **Strong – agent-explicit principles**: Agent-explicit principles tailored to agents: human controller requirement, limited powers, and plan/action observability. Directly addresses agent-specific security challenges including autonomous decision-making, tool access, and dynamic behaviour. Strong agentic focus. | **Strong – layered agentic threats**: Layer-specific analysis of agentic threats across the model, agent, and application layers. Addresses the unique security challenges at each layer of agentic systems, including prompt injection, tool abuse, and multi-agent coordination risks. Comprehensive agentic coverage. |
| **Evidence/Evaluation**<br>Whether the framework cites empirical research or operational data | **Conceptual & examples**: Framework developed with worked examples, based on conceptual analysis and practical case studies. Lacks formal empirical evaluation or operational data from deployed systems. Future work includes validation studies to assess the framework's effectiveness in practice. | **Multi-stakeholder development**: Developed through multi-stakeholder engagement including workshops, public comments, and cross-sector input. A living document intended to evolve with AI practices. Informed by diverse perspectives but lacks formal empirical evaluation of framework effectiveness. | **Legislative process**: Law enacted through the EU legislative process; harmonised standards and technical specifications are being developed collaboratively. No formal evaluation of effectiveness yet, given recent enactment. The evidence base is developing through standardisation and enforcement. | **Conceptual with case studies**: Conceptual framework grounded in governance theory, with illustrative case studies. No empirical studies validating the framework's effectiveness or testing its practical application in organisational settings. Primarily theoretical, with limited operational validation. | **Practitioner-grounded**: Grounded in security-practitioner experience, with real-world examples of agentic security issues. Based on community knowledge and observed attack patterns. No formal empirical evaluation or systematic testing of mitigation effectiveness across diverse contexts. | **Narrative & internal experience**: Based on Google's security engineering experience and enterprise AI deployment insights. Policy and engineering narratives drawing on internal operational experience. No formal benchmarks or empirical studies validating effectiveness across diverse organisational contexts. | **Practitioner-led**: Developed by cloud security practitioners and researchers with industry experience in securing AI systems. Combines theoretical security principles with practical implementation insights. Limited formal empirical validation, but grounded in operational security practice. |
| **Typical Artifacts**<br>Deliverables or tools the framework produces | **Risk register & capability profile**: Produces risk registers, capability profiles, control-tier checklists, and sign-off workflows. Includes templates for documenting capability assessments, risk evaluations, control implementations, and governance decision points. Designed for organisational accountability and traceability. | **Use-case profiles & outcome categories**: Risk management tasks, outcomes, organisational profiles, and an implementation playbook. Organisations create tailored profiles documenting how they address each function. The playbook provides sector-specific implementation guidance. Focus on process documentation. | **Registrations & documentation**: Technical documentation, conformity assessments, risk management plans, comprehensive logs, and living documentation. Extensive records are required for high-risk systems, including AIMS scope, policies, objectives, control implementations, audit reports, and management reviews. | **Dimension definitions & thresholds**: Dimension definitions, threshold guidance, and oversight role descriptions. Provides conceptual tools for governance analysis rather than operational artifacts. Outputs are primarily analytical frameworks and governance design principles rather than implementation templates. | **Threat navigator & mitigation sheets**: Threat navigator, threat/mitigation sheets, and red-team prompts. Security-focused artifacts including threat catalogues organised by agentic component, detailed mitigation guidance, and practical testing materials for security teams assessing vulnerabilities. | **Principles & control families**: Security principles, control families, CISO guidance, dynamic least-privilege frameworks, and robust logging architectures. Strategic guidance for security leaders and architectural patterns for security engineers. Less focus on detailed implementation specifications. | **Layer-specific threat models**: Layer-specific threat models, security controls mapped to architectural layers, and risk assessment templates. Provides structured artifacts for analysing security at the model, agent, and application layers, with corresponding control implementations. |
| **Control Selection Logic**<br>How safeguards are selected and prioritised | **Capability- & risk-based thresholds**: Controls selected by capability profile and deployment context. Uses impact×likelihood thresholds to determine minimum control sets, tailored to specific capabilities (e.g., internet access requires different controls than file system access). Tiered control levels scale with risk. | **Risk-based & flexible**: Organisations identify, measure, and prioritise risks contextually. The framework provides a process for risk management but not specific risk→control mappings. Organisations define their own risk appetite and select controls accordingly, with significant flexibility. | **Risk category obligations**: Based on risk category and continuous assessment. Control requirements vary by risk classification, with extensive controls for high-risk AI. Agentic guidance adds continuous risk assessment, dynamic oversight, and shared responsibility for evolving systems. | **Dimensional thresholds**: By dimensional thresholds: higher autonomy requires stricter oversight. Suggests that as systems move towards greater autonomy along the dimensions, oversight requirements increase. Provides conceptual logic but not specific risk→control mappings or control catalogues. | **Threat-driven**: Controls selected based on threat presence and attack surface. For example, tool-misuse threats require sandboxing and least-privilege access; memory threats require input validation and integrity checks. Direct mapping from identified threats to specific mitigations. | **Principle-aligned (defence-in-depth)**: Control selection guided by core principles: limit powers, ensure observability, maintain a human controller. Dynamic least privilege limits agent powers; observability requirements ensure plan/action monitoring; human-in-the-loop for critical decisions. Hybrid defence approach. | **Risk-based layered controls**: Layer-specific risk assessment drives control selection at each architectural layer. Controls are chosen based on the risk profile at the model, agent, and application layers. Defence-in-depth approach with coordinated controls across layers, addressing threats at the appropriate levels. |
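The capability- and threshold-based control selection attributed to the ARC Framework above can be sketched in code. Everything in this sketch is a hypothetical illustration of impact×likelihood thresholding: the capability names, the 1–5 scoring scale, the tier cut-offs, and the control sets are assumptions for demonstration, not the framework's actual catalogue.

```python
# Hypothetical sketch of ARC-style control selection. Capability names,
# scoring scale, tier thresholds, and control sets are illustrative only.
from dataclasses import dataclass

# Assumed baseline controls per capability (not from the framework itself).
BASELINE_CONTROLS = {
    "internet_access": ["egress allow-list", "response logging"],
    "file_system_access": ["sandboxed working directory", "write audit log"],
    "code_execution": ["isolated runtime", "resource limits"],
}

@dataclass
class CapabilityRisk:
    capability: str
    impact: int      # 1 (negligible) .. 5 (severe)  -- assumed scale
    likelihood: int  # 1 (rare) .. 5 (frequent)      -- assumed scale

    @property
    def score(self) -> int:
        # Impact x likelihood, as described in the comparison table.
        return self.impact * self.likelihood

def control_tier(score: int) -> str:
    # Assumed tier cut-offs, for illustration only.
    if score >= 15:
        return "enhanced"
    if score >= 8:
        return "standard"
    return "baseline"

def select_controls(risks: list[CapabilityRisk]) -> dict[str, dict]:
    """Map each capability in a profile to a tier and minimum control set."""
    plan = {}
    for r in risks:
        tier = control_tier(r.score)
        controls = list(BASELINE_CONTROLS.get(r.capability, []))
        if tier in ("standard", "enhanced"):
            controls.append("human sign-off for irreversible actions")
        if tier == "enhanced":
            controls.append("continuous monitoring with kill switch")
        plan[r.capability] = {"score": r.score, "tier": tier, "controls": controls}
    return plan

profile = [
    CapabilityRisk("internet_access", impact=3, likelihood=4),  # score 12
    CapabilityRisk("code_execution", impact=5, likelihood=3),   # score 15
]
plan = select_controls(profile)
print(plan["code_execution"]["tier"])  # enhanced
```

The point of the sketch is the shape of the logic: each capability in the profile is scored independently, and the tier (not the analyst) determines the minimum control set, which is how tiered controls "scale with risk".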
Key Takeaways
When to use each framework
Each framework serves distinct governance needs depending on your context and objectives.
For organisations seeking structured, capability-aware governance across diverse agentic systems without heavy certification overhead, the ARC Framework offers a practical starting point. Those establishing broader AI risk management processes should consider NIST AI RMF for foundational guidance, particularly for general AI systems, while the EU AI Act becomes mandatory for any AI providers or users operating in Europe, especially when deploying high-risk or agentic AI.
At a more conceptual level, Dimensional Governance supports policy development and oversight structure design through its focus on high-level governance architecture. Security-focused teams will find complementary value in OWASP Agentic AI for technical hardening and threat-based testing, Google SAIF 2.0 for enterprise security integration and CISO-level strategy, and CSA MAESTRO for layer-specific security analysis with defence-in-depth implementation.
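The guidance above can be condensed into a simple decision helper. The function and its flags are a hypothetical simplification of the prose recommendations, not official selection criteria from any of the frameworks.

```python
# Illustrative decision helper encoding the "when to use" guidance above.
# The flags and selection logic are a simplification of the prose, not
# official guidance from any framework.

def recommend_frameworks(
    operates_in_eu: bool,
    agentic_systems: bool,
    security_focused: bool,
    policy_level: bool,
) -> list[str]:
    picks = []
    if operates_in_eu:
        picks.append("EU AI Act")        # mandatory in Europe, not optional
    if agentic_systems:
        picks.append("ARC Framework")    # capability-aware agentic governance
    else:
        picks.append("NIST AI RMF 1.0")  # foundational guidance for general AI
    if security_focused:
        picks += ["OWASP Agentic AI", "Google SAIF 2.0", "CSA MAESTRO"]
    if policy_level:
        picks.append("Dimensional Governance")
    return picks

print(recommend_frameworks(True, True, False, False))
# ['EU AI Act', 'ARC Framework']
```

Note that the EU AI Act branch is additive rather than exclusive, mirroring the point that it applies on top of whatever voluntary framework an organisation adopts.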
Complementary use
Many frameworks can be used together to create a more comprehensive governance approach.
The ARC Framework pairs particularly well with other standards: combine it with OWASP Agentic AI to add detailed threat analysis and red-teaming to ARC's governance and control selection, or with NIST AI RMF to apply NIST's lifecycle functions while implementing ARC's capability-specific controls. For organisations operating in Europe, the ARC Framework also offers a practical way to operationalise EU AI Act requirements for high-risk agentic systems.
Security-focused teams can achieve comprehensive coverage by combining SAIF's enterprise principles with OWASP's threat catalogues and MAESTRO's layered architecture for defence-in-depth.
At the governance level, Dimensional Governance and ARC Framework complement each other well — use Dimensional for policy-level governance structure and strategic framing, then apply ARC Framework for operational implementation and day-to-day risk management.