As AI is increasingly deployed in information systems, analytical tools, and physical infrastructure affecting millions of people, new risks have emerged that may result in diffuse harms (Creese, 2020). Yet the novelty of AI technologies tends to escape current risk assessment and mitigation frameworks. In particular, the harms and pathways associated with AI security go beyond established cybersecurity risk mitigation. Together with other related security risks, these are referred to here as AI security risk. An emerging global political economy of risk tends to impose AI security risk on those least able to mitigate it, particularly in Africa.
Our research seeks to identify these risks and, from the outset, to identify how these risks are framed globally so that some actors, especially in Africa, bear the brunt of the risks, while other actors, many of them in the global North, avoid the risks or the costs of mitigating them, or benefit from the creation of the risk. A political economy analysis of AI security risk requires a more subtle and flexible approach than conventional political economy. This is because the role of institutions, in particular state-run or state-controlled institutions, is to some extent supplanted by phenomena that may not be as readily recognised as institutions, such as standards (and standards bodies), as well as private-sector actors that wield power through technical protocols. The analysis nevertheless seeks to identify the dynamics of power, the role of information in conferring power, and the actors that wield that power. Who bears risks, who has the power to mitigate them, and at what cost, are the central questions of this inquiry.
Just as AI risk is not a unitary phenomenon, AI itself is not a single technology but refers to a range of techniques that rely on extensive data to generate algorithms. The production of data is a deeply political process; similarly, algorithms, which reflect the data used to produce them, are deployed to different ends and embedded in different social and technical contexts.
In the context of cybercrime and interstate conflict, AI-enabled attacks can serve as instruments of hard and soft power (Siedler, 2016). An immediate public policy priority is to ascertain and specify the range, extent and implications of AI security risk, based on the potential harm that AI deployment can cause to digital networks and systems, societies, organisations, and people.
To simplify the wide spectrum of security implications of AI, we devise a taxonomy that considers AI as a threat, as a vulnerability to be exploited (i.e. a target), and as an offensive or defensive mechanism. As a means of hard power, AI can act as a criminal actor and can be an enabler or a vector of cyber-attacks. It can be weaponised during armed conflicts. It can also be a vulnerability: it can be exploited and become a target of cybercrime or cyber operations. AI can also be a source of threat without intentional action, through over-reliance on AI or through unanticipated failure of AI without intervention by another threat actor.
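The taxonomy above can be sketched as a simple data structure. The role names and the example mappings below are illustrative assumptions introduced here for clarity, not categories or cases drawn from the concept note itself:

```python
from enum import Enum

class AIRole(Enum):
    """Roles AI can play in the security taxonomy (illustrative sketch)."""
    THREAT = "threat"        # AI as an attacker, or an enabler/vector of attacks
    TARGET = "target"        # AI as a vulnerability to be exploited
    OFFENSIVE = "offensive"  # AI used to mount cyber-attacks
    DEFENSIVE = "defensive"  # AI used to detect, prevent and repair attacks

# Hypothetical scenarios mapped onto the taxonomy
examples = {
    "AI-generated phishing campaign": AIRole.THREAT,
    "poisoning a model's training data": AIRole.TARGET,
    "ML-based intrusion detection system": AIRole.DEFENSIVE,
}

for scenario, role in examples.items():
    print(f"{scenario}: {role.value}")
```

A flat enumeration like this is a simplification: as the taxonomy notes, a single deployment can occupy several roles at once (for example, a defensive system that is itself a target), so a fuller model would allow multiple roles per system.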
The types and scope of harm emerging from AI as a threat actor, or as an enabler and vector of an attack, are multiple. AI actors and AI-enabled attacks can damage technological and digital systems (i.e. networks, platforms, applications, or databases), compromising the confidentiality, integrity and availability of data and information. More broadly, AI-enabled attacks can have repercussions for economic and political life in Africa, and negative effects for organisations and individuals. As a means of soft power, AI can be used for dis- and misinformation campaigns, mass surveillance, or political manipulation.
AI can be used not only offensively, to commit cybercrime and facilitate cyber operations, but also defensively, to detect, expose and prevent cyber-attacks, and to detect, ameliorate and repair the damage emerging from a cyber-attack. Further, AI can be used to support cybercrime detection or as a deterrence mechanism. Lastly, AI can support the development of evidence-based cyber norms, policy, and legislation.
In order to prevent risks and ameliorate the effects of AI threats and vulnerabilities, technical, legal, and political responses need to be designed and implemented to effectively tackle the challenges of risk assessment, prevention, attribution and accountability. Building on cybersecurity research carried out by RIA in the IDRC-funded cyberpolicy think tank initiative, and on the collaboration with the Oxford Martin School that resulted in the Cybersecurity Capacity Centre at UCT, this research into AI risk and security will identify what cybersecurity standards and capacity currently exist in Africa, and the gap between these and what is required to respond to threats. RIA will also leverage its cybersecurity research, capacity building and engagement in global governance to understand the threats, limit harms and mitigate the risks associated with the use of AI as a cybercrime actor, a vector of cyber-attacks, or a means of exerting soft power.
Calandro, E., & Rens, A. (2022). AI Risk and CyberSecurity (Policy Research Centres on Artificial Intelligence for Development in Africa: Supporting Just AI through Research and Policy Development) [Concept Note]. Research ICT Africa. https://researchictafrica.net/publication/ai-risk-and-cybersecurity/