While policymakers – many successfully lobbied by big tech – have in the past shied away from governing artificial intelligence (AI) for fear of stifling innovation, recent developments in generative AI have amplified the known harms of data-driven technologies and the urgency of mitigating their risks. In addition to the mass production of harmful content, the potential of AI to threaten nations and global stability has become evident, particularly in the context of warfare and the capacity to undermine democracy. Without delving into any industry-specific regulations, this article attempts to map out common and diverging regulatory frameworks emerging on national, regional, and global fronts.
As things stand, we have a mix of frameworks and approaches fronted by different institutions based on their mandates, institutional politics, and advantages. A mapping of these regulations and governing bodies is needed in order to identify critical gaps. It is worth noting that currently no institution is mandated or prepared to tackle global AI governance in its entirety, akin, for instance, to the International Atomic Energy Agency (IAEA), which governs nuclear energy – another technology that poses an existential threat to the world. Moreover, there are questions over whether such centralised power would be suitable for AI governance, given the technology's breadth of scope.
An assessment and comparison of the various regulatory frameworks for AI, both regional and national, suggests a number of points of convergence among these disparate initiatives. These include the need for AI to abide by existing laws and the duty of care placed on AI service providers to protect vulnerable groups, such as children, people with disabilities, and seniors.
Certainly, there is wide divergence in how far countries have gone in regulating AI. China and the United States (US), which dominate the global AI market, are arguably the furthest along. Both acknowledge the need to protect their citizens from fast-evolving technologies while positioning themselves geopolitically as AI innovation leaders. More than these two front runners, the European Union (EU) has, as with data protection, put greater focus on human rights in its AI regulations.
While the US had followed an ad hoc, use-case approach to AI regulation through various state-level interventions, the Biden Executive Order issued last year changed that, producing one of the most significant AI policy documents at a national level – one with significant global impact. The document does not set out hard regulatory policies but instructs various US agencies to develop specific regulations and non-regulatory interventions within their mandates, such as tackling emerging risks to civil rights.
Both the Chinese and US regulations, while keen on minimising harm, place significant focus on encouraging innovation. For example, the Chinese regulations encourage the sharing of algorithmic formulas for future development, while the Biden Executive Order encourages the development of labelling approaches that preserve the benefits of the technologies.
China has arguably the most comprehensive AI regulatory regime, developed through three main regulatory policies targeting algorithmic recommendation systems, deep synthesis information services, and generative AI.
While these Chinese regulations openly express the country’s intention to uphold socialist values and prioritise state stability, the EU’s AI Act prioritises human rights by taking a risk-based approach. It sets higher regulatory standards for high-risk AI and bans uses categorised as posing unacceptable risk, such as emotion recognition and the use of AI for social scoring.
Information integrity is emerging as part of a common agenda. In both the United Kingdom (UK) and China, platforms are given a duty to moderate content and keep it free from unlawful and harmful material. The onus is on the platform companies to devise moderation mechanisms, share them with regulators, and make them transparent to the public.
All these regulations will have a global effect, as they target AI systems used, deployed, and marketed in their respective jurisdictions. Effective implementation will therefore require international cooperation as well as financial investment.
The positions of international groups and coalitions on AI – groups of which many of these countries are prominent members – are developed consensually around what are emerging as general principles of governance or priority outcomes. The G20 stresses that AI development should be human-centred, sustainable, and transparent, while the G7 principles include the rule of law, democratic values, and keeping humankind at the centre.
Equity and justice missing
All these principles are good, but what is significantly missing is language on, and a focus on, development – more specifically, equity and justice. This perhaps reflects the lack of participation of the majority world in many of the global processes, despite the emphasis on inclusion and multistakeholderism. There is language on inequality in relation to risk and opportunity, but no evidence of how it will be addressed, nor any real commitment to subverting current interests or economic regulation in order to redress the injustices AI is currently perpetuating or to enable a more equitable pursuit of opportunities. While the Tech Envoy’s AI Advisory Panel includes some eminent African AI practitioners and scholars, African voices are largely absent from the submissions and consultation processes.
Africa, however, has independently been developing a regional position through AUDA-NEPAD, which recently released a white paper and roadmap on AI – a blueprint from which Africa’s AI strategy will be developed. The document takes an economic approach, focusing on creating an enabling economic environment for AI use and deployment on the continent, along with data governance, skills development, and partnerships. It notes that African countries need to develop national AI and related policies, but it does not come out strongly on specific or sectoral regulations.
While the future of global AI governance is still being decided, the risks associated with these technologies are with us today. Addressing them will require global collaboration and collective action to prevent the rapid evolution of these technologies from exacerbating existing inequalities. Despite global competition in the regulation of AI on all fronts, the convergence of regulatory approaches and intentions is perhaps an indicator that a common, cooperative approach is possible.