How Should AI Be Regulated in the Public Interest?

Much has been written about the connections between generative language models and other social issues, and about some of their possible consequences. We continue with this theme of connections and consequences, using it to think not only about issues of public interest governance, but also about issues of equal belonging.

From the perspective of South African intellectual life – social and legal traditions that are especially attentive to the logics of segregation and stratification – how can we maximise the potential of artificial intelligence (AI) technologies so that their goods benefit all citizens and persons living in South Africa? Furthermore, how can we use these intellectual resources to govern the technical realm? This is a broad agenda that requires many contributions from all parts of the South African policy research community.

In early 2023 there was a credible report of free AI-generated voice tools being used to break into bank accounts. Some European and American banks use voice ID as a secure way to log into accounts. Similarly, YouTube is flush with channels showcasing how ChatGPT can be paired with deepfakes. “I strongly suspect there will soon be a deluge of Deepfake videos, images, and audio, and unfortunately many of them will be in the context of scams”, says Noah Giansiracusa, an Associate Professor of Mathematics and Data Science at Bentley University.

Clearly, then, ChatGPT and associated technologies will alter the technical standards for trusted systems.

But there are also sociological elements to trust. And as we sadly know all too well from the South African experience, collective trust is very hard to sustain, let alone build, when social structures are predicated upon reproducing social inequalities. Consider how futurists are arguing that ChatGPT can be used to provide cheap legal, educational and medical advice for the poor. They point to Africa’s lamentable ratio of 0.2 doctors for every 1 000 people and say that AI can help increase access to healthcare.

But how likely is it that social protections will trump profit maximisation when, for example, in February 2023 the consultancy Bain & Company announced a partnership with OpenAI, the developer of ChatGPT? This turn of events prompted Meredith Whittaker, the President of the Signal Foundation, to declare: “And just like that, the hypothetical socially beneficial sci-fi use cases, so central to AI boosters’ marketing, dissolve into air, leaving only the imperatives of capital”. Are these the conditions in which we can trust that all can equally belong?

Nor are these issues of belonging confined to the Global North. The current moment also continues the longstanding marginalisation of the Global South from equitable participation in design and implementation decisions about these technologies. Any meaningful global AI ethics must reckon with these issues.

Social Identity and Data Governance Issues

Since its release in November 2022, ChatGPT has been flagged for perpetuating misogynistic and racist content. One reason is that AI language models cannot distinguish between “proof-backed information and fiction,” says Simon Greenman, Co-Founder, Partner and CTO of Best Practice AI. He adds that “the fear that we have, from a societal perspective, is that it can amplify toxic content, racism, violence, misogyny, hate speech and political theories that are incorrect and biased”. AI experts have identified two main concerns with ChatGPT: data quality and systemic discrimination through data.

As Safiya Noble demonstrates in her 2018 book, Algorithms of Oppression, new technologies encode not only explicit hegemonic social attitudes, but also the implicit logic of the society in which these technologies are produced. This is why, for example, Google image search results for Black girls have tended to be discriminatory. If representations of Black people in the United States tend to be anti-black, what reason would Americans have for thinking that a technology like ChatGPT would not reflect the same racist stigma? And as Noble intimates, after Mark Zuckerberg declares that good programmers “move fast and break things”, women of colour typically have to do the critique after the fact; they do the proverbial ‘clean-up work’.

One salient question about AI is who sets the gaze, and whom does this gaze serve? Oftentimes the code of these AI systems is imperceptible, let alone the ways in which institutional racism shapes that code and what it is intended to do. But imperceptibility does not mean that there are no effects.

Therefore, we need to be thoughtful about how we use these systems and how they are connected with the politics of who belongs, and how. 

Research ICT Africa, an AI policy research centre funded by the IDRC and SIDA, has a collective mission to understand the impact and uses of AI in Africa. Some of our work includes ensuring that AI development is carried out in accordance with ethical systems, which Rachel Adams is working on. Scott Timcke is working on democratic infrastructures through an economic rights-respecting framework. Hanani Hlomani and Andrew Rens are working on copyright and generative AI models.

Institutional and Policy Differences

There is value in thinking at greater length about the institutional and social setting of AI products. Consider how the magnitude of labour’s displacement by automation, for instance, is less important than the fact that this displacement occurs without many of the countervailing social protections that labour enjoyed 60 years ago. These losses include a long decline in union membership and bargaining power, as well as the retreat of basic labour standards and their enforcement across the world.

These lost social protections also require that we discuss accumulated disadvantages, as well as the institutional structures created to sustain their potency. To summarise the empirical findings: intergenerational wealth ensures that “poor children who do everything right do not do better than rich children who do everything wrong,” as the economics reporter Matt O’Brien finds.

Effectively, Silicon Valley’s shareholders control the inescapable foundation of the contemporary economy. This position gives them the power to shape public discourse, which in turn has other downstream effects. This is how we arrive at the point where the rich will have easy access to human lawyers while the poor are supposed to be grateful for an AI chatbot to represent them in court.

Conclusion and Prompts

No technological product is neutral. Yes, products have inherent properties, but these properties are overdetermined by their social settings and purposes. To put it differently, products are put towards projects. And so what projects do we want AI to advance? For us, it looks like an effective democratic state that is willing and able to intervene in business operations. It looks like one willing and able to tax windfalls. It looks like one willing and able to provide social protections. These are the starting points for us to materially demonstrate meaningful equal belonging.

This blog is based on a presentation that was made at the Media Futures Seminar Series held at Stellenbosch University (SU) on 27 February 2023. The title of this particular session was ‘The Future of Intelligence: ChatGPT and its Implications’. The seminar was hosted by the SU Department of Journalism and the Faculty of Science and Social Sciences.