Emerging AI Risk
Recent reports suggest that AI has become a factor in African conflicts, including the use of automated lethal drones and AI-driven analysis in surveillance systems. Elsewhere, Ukraine has been subject to extensive cyber attacks, which it attributes to Russia and which Russia denies. While it is not clear what role AI played in those attacks, AI is rapidly becoming a security concern in cyber conflict.
AI is increasingly deployed in information systems, analytical tools, and physical infrastructure, raising new risks that affect millions of people. Yet the novelty of AI technologies tends to escape current risk assessment and mitigation frameworks. These risks extend from computers, networks, and information systems to physical infrastructure and even conflict between states. An emerging political economy of risk tends to impose AI security risk on those least able to mitigate it.
The Africa Just AI Project (Just AI) seeks to identify these risks and to uncover how they are structured: some people, often those least able to bear it, carry the brunt of the risk, while others, including those who benefit from and shape the technologies and protocols that give rise to the risks, avoid both the risks and the costs of mitigating them.
To understand this plethora of emerging challenges, Just AI will map how AI features in the security landscape: as threat, vulnerability, ultimate target, and detection or decision system. Risks include the possibility of AI systems failing, or of human over-reliance on them, even in the absence of malicious actors. Ensuring that the majority of security and cyber security risks are identified and mapped is a crucial first step. But because the technologies are developing rapidly, a framework that enables new threats to be understood and integrated is essential to policy analysis and response: an unperceived risk cannot be mitigated.
To put the risks in context: at one extreme lies the Skynet threat, the deliberate or inadvertent creation of an artificial superintelligence free from human control that may pose an existential threat to all humans. More immediate effects of the technologies include machine learning algorithms that exacerbate racial and gender biases in public and private systems, and the loss of employment and livelihoods to automated systems.
Before the capacity of African countries to respond to AI security risk can be meaningfully assessed, the first challenge is epistemic: how can such risk be understood in a frame that is sufficiently wide yet coherent? Existing risk management and cybersecurity systems must be interrogated to ascertain whether they appropriately apprehend and frame the risks associated with AI. To the extent that risks are not recognised in an overarching frame, gaps will prevail and responses will remain fragmented. While the response to some risks may be adequate, others may be overlooked or even exacerbated by a piecemeal approach. Moreover, the capacities of African governments, security experts, small and medium enterprises, and individuals to deal with AI security risk are unlikely to match what is required, since they are subject to the whims of technology providers in the global North and of the global platform monopolists who remain best placed to mitigate the majority of risks.
Preliminary risk identification
- To ameliorate the effects of AI threats and vulnerabilities, technical, legal, and political responses must be designed and implemented to effectively tackle challenges of risk assessment, prevention, attribution and accountability.
- Policy responses should be grounded in human rights, as AI policy itself carries the risk of impacting fundamental rights. A risk-based approach to security that does not clearly define what those risks are can degenerate, in authoritarian regimes, into state control over people's rights.
- By repurposing RIA's existing research on cybersecurity to address AI risks and security, we can identify what cybersecurity standards, capacity, and gaps currently exist in Africa and what is required to respond to AI threats. The key is to understand the threats, limit harms, and mitigate the risks associated with the use of AI as a cyber security actor, as a means of soft power (including dis- and misinformation campaigns, mass surveillance, or political manipulation), or as a means of hard power (weaponising AI during armed conflicts).
- Reliance on AI systems for decision making, especially without independent verification, may result in inappropriate decisions, owing to the human tendency to treat AI as infallible.
- Over-reliance on AI to model and assess national security threats may, through overreaction or inaction, undermine the capacity to exercise national sovereignty and self-determination.
- Cyber attacks on vital infrastructure, including rail and telecommunications networks and financial institutions, can be exacerbated when the targeted systems rely on AI or when AI is used in the attack.
- The global political economy of risk is structured by global technology monopolies and quasi-monopolies effectively setting security standards through their technologies.
- African countries with pressing priorities (e.g. low-cost connectivity) and without the local capacity to develop AI will import AI technology and adopt policy tools from foreign jurisdictions without interrogating their security implications. Such foreign technology and policy tools may, on the one hand, increase risks and perpetuate exploitation and, on the other, be shaped by paradigms incompatible with the interests of African states.
- Closed-source AI technologies leave no space for scrutiny, so African states should procure only open source. Unfortunately, attempts by African states to adopt open source are likely to be fiercely resisted by global monopolists, as they have been in the past*.
- Despite 75% of Africans not having Internet access, they remain at risk, since AI can target ICT-dependent systems. People without Internet access still rely on critical infrastructure, including telecommunications, utilities, and financial systems, all of which are increasingly digitalised.
- Attributing AI attacks without evidence is likely to lead to diplomatic tensions, yet few African countries have developed the capacity to deal with cyber diplomacy challenges. The Open-Ended Working Group (OEWG) on ICT state security does, however, provide a good opportunity for all African countries to express their views on cyber diplomacy.
- The voluntary, non-binding norms on responsible state behaviour in cyberspace adopted in 2015 by the United Nations Group of Governmental Experts (UN GGE) are a necessary response to AI security risk but an insufficient one, as AI technology has developed dramatically since their adoption.
The way forward
This theme of the Just AI project will construct a taxonomy of AI security risks for Africa, seeking to identify and understand which technologies, dynamics, actors, harms, and factors construct and distribute AI security risk. The extent to which closed-source technology exacerbates risk will be interrogated. The project will also trace the political economy of AI security risk, especially questions of accountability, onus, and risk mitigation. These are important questions for future inquiry, and answering them requires researching the challenges, limitations, and constraints of existing initiatives in responding to the risks.