Agentic Risk & Capability Framework
We introduce the Agentic Risk & Capability ("ARC") Framework, a technical governance framework that enables organisations to manage the safety and security risks of agentic AI systems through a risk-based approach. We do this by:
- Defining a hierarchical taxonomy of capabilities that agentic AI systems may have, depending on their use case and how they are designed
- Distinguishing between baseline risks (applicable to all agentic AI systems) and capability-specific risks (applicable to agentic AI systems with that capability)
- Mapping each risk to a set of technical controls that help mitigate it to an acceptable level (see the illustrative sketch after this list)
- Providing a framework to scale governance of agentic AI systems, especially for large organisations
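To make the risk-based approach more concrete, below is a minimal, hypothetical sketch in Python of how the capability taxonomy, baseline risks, and risk-to-control mapping could be represented. All names used here (`Capability`, `Risk`, `Control`, `BASELINE_RISKS`, `applicable_risks`) are illustrative assumptions and are not part of the framework itself.

```python
# Illustrative sketch only: names and structures here are assumptions, not part of the ARC Framework.
from dataclasses import dataclass


@dataclass(frozen=True)
class Control:
    """A technical control that helps mitigate one or more risks."""
    control_id: str
    description: str


@dataclass(frozen=True)
class Risk:
    """A safety or security risk, mapped to the controls that mitigate it."""
    risk_id: str
    description: str
    controls: tuple[Control, ...] = ()


@dataclass(frozen=True)
class Capability:
    """A node in the hierarchical capability taxonomy."""
    name: str
    risks: tuple[Risk, ...] = ()              # capability-specific risks
    sub_capabilities: tuple["Capability", ...] = ()


# Baseline risks apply to every agentic AI system, regardless of its capabilities.
BASELINE_RISKS: tuple[Risk, ...] = (
    Risk("B-1", "Illustrative baseline risk placeholder"),
)


def applicable_risks(system_capabilities: list[Capability]) -> list[Risk]:
    """Collect baseline risks plus the risks of every capability the system has."""
    risks = list(BASELINE_RISKS)
    stack = list(system_capabilities)
    while stack:
        capability = stack.pop()
        risks.extend(capability.risks)
        stack.extend(capability.sub_capabilities)
    return risks
```

The actual capabilities, risks, and controls are set out in the Baseline and Capabilities sections below; this sketch only shows the shape of the mapping, where baseline risks always apply and capability-specific risks apply only when a system has the corresponding capability.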
This website
We have organised our content into four sections:
- Introduction: We explain the overall concept of the ARC Framework, how it distinguishes between baseline risks and capability-specific risks, and how it addresses a gap in current discussions on agentic AI governance.
- Baseline: We explain what baseline risks are, outline a set of baseline risks that apply to all agentic AI systems, and list the corresponding technical controls for tackling those risks.
- Capabilities: We explore the concept of capabilities, go through our proposed taxonomy of agentic capabilities, describe the safety and security risks arising from each capability, and set out the relevant technical controls we recommend for mitigating those risks.
- Implementation: We provide a plan for operationalising the ARC Framework in an organisation, using stylised examples to help the reader understand the practical implications of the framework.
For first-time readers, we suggest reading the sections in the order above. If you are already familiar with agentic AI governance or with the ARC Framework, feel free to jump ahead to the relevant sections.
About us
The ARC Framework is developed by the Responsible AI team in GovTech Singapore's AI Practice. We build deep technical capabilities in Responsible AI so that the Singapore government can develop, evaluate, deploy, and monitor AI systems in a safe, trustworthy, and ethical manner.
In developing this framework, we work closely with other parts of the Singapore government, such as the Ministry of Digital Development and Information and the Cyber Security Agency of Singapore. We are grateful for their feedback and contributions, which have helped to make this framework more effective, robust, and thorough.
To reach out to us, please fill out the Google form here.
Citing our work
To cite our work, please use the following BibTeX citation:
@misc{agentic_risk_capability_framework,
  title  = {Agentic Risk \& Capability Framework},
  author = {Khoo, Shaun and Foo, Jessica and Lee, Roy Ka-Wei},
  year   = {2025},
  month  = {July},
  url    = {https://govtech-responsibleai.github.io/agentic-risk-capability-framework/}
}
Alternatively, you may use the APA-formatted citation below:
Khoo, S., Foo, J., & Lee, R. K.-W. (2025). Agentic Risk & Capability Framework. https://govtech-responsibleai.github.io/agentic-risk-capability-framework/
Note: This page was last updated on 7 Aug 2025.