Who is responsible for agentic AI?

Agentic AI is frequently touted as the next step beyond generative AI. But as AI systems become more autonomous, liability frameworks must be adapted to account for the harms they can cause to both physical property and humans. Service providers like Amazon, and developers like OpenAI, market AI agents as a technology capable of ‘intent’, ‘reasoning’ and ‘planning’, able to complete tasks and act on prompts with limited human oversight. As a result, incentives to deploy these systems across multiple sectors are high. Agents’ emergent ability to ‘learn’ and adapt creates potential for application anywhere from customer service to healthcare. However, in high-risk sectors like defence, healthcare and law enforcement, agentic AI’s independence, combined with the access that integrated software grants it to the physical world, can present major risks.

Because AI agents treat their goals as paramount while leaving their methods undefined, they are highly likely to engage in reward-hacking and scheming behaviours, going to unnecessary lengths to achieve their objectives. In the worst instances, as this paper demonstrates, AI agents have been shown to self-exfiltrate, deceive their controllers, and evade shutdown. This unpredictability complicates the determination of liability when harm occurs, as demonstrated in cases such as the Flash Crash, in which AI-powered traders caused the devaluation of trillions of dollars’ worth of stock. In such circumstances, liability is difficult to assign because of gaps in responsibility between users, service providers and developers.

Traditional legal frameworks usually require human intent, foreseeable harm and/or clear causation, all of which AI either lacks or renders difficult to establish. This paper recommends adjustments to liability frameworks that would enable service providers, who are often best positioned to provide compensation, to be held liable for AI systems. It also recommends regulations to mitigate the risks of agentic AI, such as introducing trace codes to track correlated failures, requiring interruptibility to ensure that actions can be terminated, and disclosing source code so that the causes of faulty behaviour can be traced.

License: CC BY-SA
Suggested citation:

Rens, A. & Haller, D., 2025. Who is responsible for Agentic AI? Research ICT Africa: Cape Town.
