Framed by an overview of agentic AI’s behavioural trends, this brief highlights its imminent dangers and recommends regulation to address a key policy challenge: the uncertainty of liability frameworks, which have no specific application to AI-related harms. As agentic AI is deployed with increasing frequency across a wide range of low- and high-risk sectors, threats become harder to mitigate and liability harder to assign. Departing from the dominant, tiered risk-based approaches to regulating frontier technology, which fail to account for AI in its current, adaptive forms, this policy brief explores regulatory solutions that address agentic AI’s inherent design and developmental shortcomings.
In doing so, this policy brief aims to elicit proactive responses to agentic AI-related risks, rather than reinforcing reactive measures that leave users to bear externalised social costs while developers evade both technical accountability and legal liability. This brief recommends adapting liability laws to ensure that developers are held responsible for the technology they profit from. It also provides key recommendations to ensure that agentic AI is equitable and just by design, drawing on Research ICT Africa’s Just AI Framework of Inquiry to identify needed safeguards, including dataset disclosures, mandatory traceability mechanisms, and interruptibility permissions.