The relationship between the law and technology can be likened to that of the cartoon characters Tom and Jerry: one constantly chases the other but never quite catches up. The law is perpetually trying to keep pace with technological advancement, and this unending chase can have devastating effects on aspects of contemporary democracy.
From a regulatory perspective, the rapid advancement of artificial intelligence (AI) has introduced new challenges. Deep fakes, voice cloning and algorithmic influencing have become more convincing as a result of developments in generative adversarial networks (GANs) and large language models (LLMs). These AI applications can undermine democratic processes by nudging people into promoting third-party agendas.
The Cambridge Analytica scandal demonstrated the potential dangers of unchecked algorithmic processes. During the 2023 Nigerian elections, an AI-generated voice 'recording' of a conversation between Labour Party presidential candidate Peter Obi and David Oyedepo circulated online, purportedly 'proving' that the two were plotting to rig the election. Fact checkers and AI software analysts later confirmed the recording to be a deep fake. As the Obi and Oyedepo example suggests, AI will shape electoral processes. In this piece, I share some preliminary analysis and considerations emerging from ongoing research undertaken by the AI & Democracy team in RIA's Africa Just AI project.
Existing legal frameworks
On a continental level, the African Union has not yet devised a framework regulating the use of AI in electoral processes. To date, neither the Sharm El Sheikh Declaration working group on AI nor the African Union Artificial Intelligence Continental Strategy for Africa (AACS) has looked into AI and democracy. In the absence of any clear continental directives on AI, we must be guided by the AU Declaration on the Principles Governing Elections in Africa and the African Charter on Democracy, Elections, and Governance (ACDEG). Both instruments set out ideals for democratic elections, such as respect for human rights and transparency. However, neither addresses the transformative potential and challenges introduced by new technologies in general, or AI in particular.
In the absence of any clear directives at a continental level, Research ICT Africa's preliminary finding is that no African country has yet begun to legislate on the role of AI in democratic processes. As RIA's study focuses on the potential negative use of AI to influence voter sentiment through deep fakes, voice cloning and algorithmic programming, amongst others, we have found that, generally, the use of AI remains legal in the absence of a law stating otherwise. However, voter manipulation, fraudulent programming and the dissemination of falsehoods that undermine democracy are illegal in the countries studied. While these laws were not drafted with AI in mind, it is conceivable that, by operation of law, they should extend to uses of AI.
In Nigeria, information dissemination online is regulated by the Cybercrimes Act, which criminalises actions including spamming, hacking, cyberstalking, identity theft and impersonation, and the dissemination of false information. Sierra Leone has the Cyber Security and Crime Act, which criminalises hacking, computer-related forgery, identity theft, cyberstalking, cyberbullying and cyberterrorism. Zimbabwe, on the other hand, has the Cyber and Data Protection Act 2021. While the law's main focus is data protection and privacy, the Act also amends the Criminal Law Act to provide for cybercrime offences that could undermine the integrity and security of democratic processes, such as hacking, spoofing/identity theft, spreading malware and manipulating online information.
Outside of cybercrime laws, Sierra Leone may also employ the Public Order Act, which has been amended to cover the use of digital media to disseminate information. The Act criminalises offences such as sedition, libel and defamation, and false news. In Nigeria, the Protection from Internet Falsehoods and Manipulations Bill (also known as the Social Media Bill) is still pending before the Senate. If passed, the law would outlaw the dissemination of false claims and enable counteractions to address the effects of such transmissions, especially as they relate to public order, national security, public health, public safety or public finances. Ultimately, it would empower the government to order Internet service providers to block access to online content deemed false or harmful.
Are existing laws adequate? Not quite, but laws that criminalise the creation and spreading of false information and fraudulent manipulation do exist. What remains to be seen is whether the various courts can extend the definitions of crimes such as identity theft, impersonation and the manipulation of online information to include the use of AI to commit them. While the courts are capable of developing the law adequately, a lack of uniformity, coupled with limited capacity and an inadequate understanding of AI, means that enforcement will be erratic and uneven.
Foreign technology dependence and relatively low digital literacy rates mean that AI tools have somewhat different effects in Africa than in the West, Latin America or South East Asia. As Africa has the second-fastest-growing user base of social media, the channel through which most of these AI tools are deployed, more research is needed to better understand these comparative similarities and differences. It will be important to clarify the legal position of AI applications in democratic processes as soon as possible. As deep fake techniques and methods of algorithmic manipulation improve, so too must the legal frameworks designed to combat them. Then again, the lack of regulation, especially in this part of the world, may be deliberate.