Algorithms are increasingly being used to replace or augment human decision-making around the world, including here on the African continent. A key question facing policymakers is how to govern these technologies to ensure fair, accountable and transparent use.
What is Algorithmic Governance?
Algorithmic governance (AG) has emerged as a governance tool to mitigate the harmful impacts of artificial intelligence (AI)-related technologies that rely on algorithmic tools, including big data, surveillance, and automated decision-making (AGRN). AG may also refer to automated data-processing systems designed to steer the governance process itself (“governance by algorithms”). Algorithms are, after all, a form of computerised control and governance.
Despite gains in identifying the risks and mitigation measures associated with algorithmic decision-making, progress in ethical design, and commitments to “responsible”, “beneficial” and “good” AI, the question of how just the outcomes are remains a central governance challenge.
To date, much of the discussion on AG – and on how best AI technologies can be governed to mitigate harm to individuals, societies, the environment and other resources – has taken place in the Global North. As we reckon with the increasing pervasiveness of AI in African societies, we need to consider African approaches to the regulation and governance of these new technologies to ensure our responses are locally relevant and preserve African values of collectivism.
Risks of Algorithmic Use
While the use of algorithms and AI might mean more efficient processes (such as Global Positioning System (GPS) navigation or transport scheduling) and the reduction of mundane tasks (such as with online search engines or factory production lines), the risks and harms that arise from these tools are a key cause of concern in society. As mentioned, one important area of disquiet is that algorithmic AI tools deployed in African nations often lack contextual relevance, for example by failing to account for the infrastructural differences between these countries and the more mature economies of the Global North.
There are also human rights concerns and related harms from algorithmic use that have affected African people. These include algorithms that display racial and gender bias: Black South African women, for example, have been excluded from loan eligibility on the basis of historically inaccurate datasets. There are also issues of mass surveillance and racial profiling (South Africa), and of digital echo chambers (Nigeria and Ethiopia) and digital censorship on social media (South Sudan, Chad and Congo), which fuel political polarisation. There is also the problem of data colonialism, which affects all nations but is marked by asymmetrical flows of data and information from the broader periphery, including the Global South, to the power centres of the Global North.
While some of these concerns cut across all societies, African experiences are also shaped by a history of colonialism, distrust in government, weak data collection systems (and therefore a shortage of the data needed for evidence-based policymaking), and low levels of education and income (and therefore low levels of digital literacy). These factors do not feature in the Global North and so are generally absent from AG discourses there. However, they are critical context for developing AG appropriate to Africa.
These factors add to the complexity of AG in Africa, alongside broader concerns about the transparency and accountability of algorithms, how collected data is used, and whether citizens have any legal remedies when harms or violations occur. On the face of it, the picture in Africa seems bleak, suggesting that the lack of good governance of algorithmic use perpetuates existing inequalities and undermines democratic values.
At the same time, the behaviour of an algorithm depends largely on the data it uses, which makes the data just as important as the algorithm itself to the question of good governance of these technologies. Incorrect data, data that misrepresents a segment of the population, and data that simply omits certain groups and renders them invisible are all key considerations for governance in better understanding the interaction between people and technology.
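As a minimal, hypothetical illustration of this point, the Python sketch below shows how a group that is barely present in training data, and whose circumstances relate differently to the outcome being predicted, can be systematically misclassified by a model trained largely on the majority group. The groups, numbers and feature are invented for illustration; no real lending data, deployment or system is referenced.

```python
# Hypothetical sketch, not drawn from any real dataset or deployment:
# a group that is nearly invisible in the training data, and whose
# feature relates differently to the outcome, is systematically
# misclassified by a model fitted mostly to the majority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, slope):
    """Simulate records where one feature drives a binary outcome."""
    x = rng.normal(size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return x, y

# Group A dominates the training data; group B is barely represented,
# and for B the same feature points the other way (an invented stand-in
# for, say, informal-sector income being misread as a risk signal).
x_a, y_a = make_group(5000, slope=1.0)
x_b, y_b = make_group(50, slope=-1.0)

model = LogisticRegression().fit(
    np.vstack([x_a, x_b]), np.concatenate([y_a, y_b])
)

# The model learns group A's pattern, so fresh samples from group B are
# mostly misclassified: high accuracy for A, far below chance for B.
for name, slope in [("Group A (well represented)", 1.0),
                    ("Group B (under-represented)", -1.0)]:
    x_test, y_test = make_group(2000, slope)
    print(name, "accuracy:", round(model.score(x_test, y_test), 3))
```

The point is not the particular model but the pattern: when the data renders a group invisible or misrepresents it, the model's errors concentrate on that group, and this is exactly the kind of harm that governance of data and algorithms needs to surface and remedy.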
At Research ICT Africa (RIA), we are exploring these questions under the theme of “Data Justice”, focusing on addressing the structural inequalities and ethical concerns of data use throughout the whole AI lifecycle. This means looking beyond technical solutions and policy fixes to developing a longer-term and holistic approach to algorithmic and data governance that redresses the uneven distribution of both opportunities and harms between and within countries.
Importance of an African Perspective
Discussions on the Global South are often generalised and fail to consider the differential infrastructural, institutional and human rights concerns within its regions. Relying on insufficient research and data from other contexts will result in misguided and ineffective policies that are of limited benefit to Africans.
For that reason, it is important to look at ethical principles and value-based approaches that arise from distinctly African histories and value systems to build locally relevant and appropriate policy and governance solutions. This is especially important given that much of the discussion around AG to date has centred on a principles-based approach, in the form of ethical principles and standards on AI largely developed in the Global North. RIA is incorporating these African-centred approaches into ongoing research on more participatory data governance, including greater recognition of collective rights and community practices.
Traditional African governance frameworks make use of community-based approaches to governance, allowing communities to collectively determine governance priorities and solutions. This approach encourages positive citizen participation, especially for communities that have historically been marginalised from debates around AI governance, and potentially offers new insights that broaden AG. It might take the form of co-designing algorithmic tools with the communities these technologies are meant to assist, or of centring the experiences and feedback of community users – such as farmers for agricultural AI – in algorithmic impact assessments. AI policies can also better advance gender equality through digital literacy and the inclusion of more women in digital spaces.
There is also growing consideration of how indigenous knowledge systems and cultural norms can be protected in the development of algorithms. AG that fosters the protection of these systems, through a community-based approach, could also lead to a better understanding informed by diasporic perspectives.
Our research on data justice also expands first-generation, rights-preserving frameworks for data governance that focus on privacy and freedom of expression, recognising the need to account for second- and third-generation rights, such as socio-economic and environmental rights.
Overall, a diversity of perspectives on AG can help shape future developments, give various cultural groups control and agency over their data, and potentially mitigate the historical biases now surfacing in algorithmic AI systems. This also underscores why African perspectives are valuable not just continentally but globally, and it provides an opportunity to create digital technologies that address real-world concerns.
Way Forward?
“Good governance” might not solve all the problems associated with algorithm use, but it is more effective when alternative governance approaches and a diversity of perspectives are included in these conversations. The African Union Commission has already begun building governance frameworks focused on African perspectives and participation in digital (and data) governance, as part of the African Union’s Digital Transformation Strategy. As part of this effort, RIA’s work supported the development of the Africa Data Policy, a high-level, principles-based framework adopted in February that is nonetheless contextually grounded. These measures, coupled with relevant approaches and values, help guide policymaking that takes proactive steps to counter some of the invidious risks of adopting advanced technologies designed for countries with very different levels of human development and legal and institutional endowments.
More research in Africa on community-based approaches should inform AG policies on the continent that also seek to address the digital divide and related data justice issues. This means the inclusion of African perspectives. Much of the immediate progress on AG is coming from the private sector, research hubs and civil society, and these sectors remain valuable sources of knowledge and contributions. The key question is what value is placed on input from Africa, and how that input will influence policymaking. AI and algorithms are here to stay, but governing their effects on society is where change matters.