AI-powered systems are becoming pervasive across almost all economic sectors, diffusing from information systems to analytical tools and physical infrastructure. At the same time, advocates overstate the capabilities of some of these systems while understating the complexity of the real-world tasks they seek to solve. Moreover, definitive forecasting about the impact of these systems is often driven by unwarranted confidence about matters of great uncertainty.
The same is true of cybersecurity. ‘AI security risk’ refers to the potential negative impacts of AI-powered cybersecurity attacks and data breaches on individuals, groups, organisations, and societies. This risk is unevenly distributed and disproportionately affects those least prepared for, and least capable of, dealing with it, especially in Africa. Findings from our research suggest that these developments introduce new risks that may cause widespread and diverse harms to IT infrastructure, with cascading consequences for the people who depend on these systems. Such risks are typically not adequately captured by existing risk-assessment and mitigation frameworks, as they extend beyond the scope of conventional information-integrity paradigms to other dimensions of security: human, social, economic, and political.
In response to a call by the United Nations Secretary-General for papers on Global AI Governance, this document was submitted for consideration to the High-level Advisory Body on Artificial Intelligence. The submission addresses the governance of artificial intelligence security risks and harms in Africa, drawing upon several years of research conducted by Research ICT Africa. It aims to provide readers with a contextual understanding of AI systems and their implications for cybersecurity gaps and related aspects of everyday life in Africa, and to relate these to global governance challenges.