African perspectives are missing from AI safety

On Tuesday, 9 September, a multidisciplinary cohort met to discuss Artificial Intelligence (AI) Safety from an African perspective in a webinar hosted by the Global Center on AI Governance. RIA’s Dr Scott Timcke joined Dr Rachel Adams and Dr Samuel Segun (Global Center on AI Governance), and Darlington Akogo (minoHealth AI) to take a comprehensive look at the pervasive risks that AI poses, particularly to African countries, where weak cybersecurity infrastructure, limited data literacy and governance gaps leave vulnerable populations highly susceptible to AI harms.

Referring to the discussion paper, Toward an African Agenda for AI Safety, Dr Segun unpacked the continental relevance of AI safety: “Africa’s perspective to this conversation has been vitally missing, likely because the models that are built are not necessarily built with Africa in mind.” African people in particular face social costs such as bias and discrimination in AI technologies, as well as socio-political and environmental costs in the form of electoral interference, neocolonial data extractivism, compute capacity imbalances, and disproportionate exposure to climate change. These externalised costs are often the result of uninclusive design processes, a side-effect of concentrated AI development in the Global North. 

The image defines AI safety as an interdisciplinary field concerned with identifying and addressing the security, ethical, socioeconomic, environmental, technical and existential risks and harms of highly advanced AI systems.

Source: Global Center on AI Governance 

The need for African AI Safety institutions

Yet Africa’s own AI safety mechanisms are equally missing. The paper finds that current national strategies and the African Union Continental AI Strategy contain severe gaps in technical capacity, governance and participation: “…only 26.8% of states measured showed any concrete activity on safety, accuracy or reliability.” Globally, AI Safety and Security Institutes are being formed in the US, UK, EU, Japan, Singapore, South Korea and Canada. In Africa, there is still no dedicated policy centre or research institute, despite several calls to establish a regional AI safety task force under the African AI Council and an AI safety institute on the continent.

In such a diverse continent, where histories of colonialism and uneven development exacerbate technological harms, global AI debates must include African voices. The authors write, “The omission of African perspectives in these debates can lead to governance measures that do not address the complex safety and security challenges posed by AI, which the continent faces.” The paper goes on, “given the priority being placed by governments and development partners on the use of AI to fast-track socio-economic development in Africa, it is critical that the safety considerations of these technologies are considered and addressed alongside their adoption.”

Risk categories for AI 

The discussion provided a comprehensive overview of Africa’s AI Risk Profile, noting three key categories: Malicious Use, Malfunction and Systemic Risks.

Malicious use

  • Focusing on intentional abuse or weaponisation of AI for harmful activities.
  • Examples include: Risks to democracy and human rights; manipulation of public opinion; cybercrime and financial malpractice; and militarisation, abuse and loss of control.

Malfunction

  • Focusing on negative physical, psychological, reputational, financial and legal outcomes resulting from AI system flaws. 
  • Examples include: Unreliability for local users given models’ training primarily on Euro-American datasets; and absence of diverse representative data creating algorithmic bias towards race, gender, religion and politics. 

Systemic

  • Focusing on the broader societal consequences arising from AI deployment, not only from advanced AI systems.
  • Examples include: Labour disruption risks resulting from automation; environmental risks resulting from increasing carbon dioxide emissions from frontier models. 

Case study: Moremi Bio Agent

Demonstrating the reality of AI risks, Akogo shared learning experiences from his research into the Moremi Bio Agent, which he prompted to design toxic substances without safety guardrails. The shocking results can be found in his work, Can Large Language Models Design Biological Weapons? Evaluating Moremi Bio. Results demonstrate that LLMs have significant potential for misuse. The paper notes, “The accessibility of such technology to individuals with limited technical expertise raises serious biosecurity risks. Our findings underscore the critical need for robust governance and technical safeguards to balance rapid biotechnological innovation with biosecurity imperatives.” 

Akogo expanded on this, saying, “Right now, someone somewhere in the world has access to technology that they could use to design toxic agents that none of us are prepared for. If they released it somewhere, no one would see it, and no one would know what happened. I’m not trying to scare or spread fear, but then, I don’t think we are taking this seriously enough”. He went on, “We don’t have to wait until AI is misused on that level before we decide to act. The unfortunate thing is that the technology is developing so rapidly that if we wait until then, we might not get that option.” 

Preparing for AI risks

Akogo’s account prompted Dr Scott Timcke to frame AI safety as a holistic issue, one which requires justice-oriented thinking: “AI safety without justice is just safety for some.” Addressing the human security paradigm and the full range of scientific, technical, ethical and commercial aspects of AI safety, Timcke urged audience members to consider AI safety as a peacekeeping exercise, one that needs continental coordination with diverse voices. Like Akogo, he agreed that “this is an area that we cannot afford to neglect.” Timcke called for five points of action:

  1. Developing institutional architecture for an African AI Safety Institute that is transnationally oriented;
  2. Creating a transparency framework at different levels, including the technological and decision-making levels;
  3. Enhancing knowledge sovereignty and national capacity through inclusive AI development, built with varied linguistic, cultural and political data;
  4. Governing AI with economic justice in mind, taking into account how LLMs can impact labour markets and create data dependencies; and
  5. Facilitating continental coordination given that AI safety is a transnational matter.

In summary, Timcke emphasised that AI safety is not merely a technical issue with simple solutions such as red-teaming and safeguarding. Rather, it is a broad challenge that requires linking governments, security agencies and numerous other responders to coordinate all stakeholders who would be affected. His plan aligns with the paper’s own recommendations to prioritise human rights protections in policy approaches, promote public AI literacy and awareness, develop early warning systems and benchmarks in 25+ African languages, and introduce an annual AU-level AI Safety & Security Forum.

The discussion concluded with a clear call to action. Rather than waiting for solutions imposed by Global North actors, Africans can respond to AI safety threats by developing their own institutions and governance practices that prioritise justice, human rights, and the protection of Africa’s diverse peoples and political and economic contexts.
