Two months ago in Christchurch, New Zealand, a white supremacist entered a mosque wielding two sets of weapons. With one, he shot and killed 51 people. With the other – a GoPro camera equipped with livestreaming software, strapped to his head – he broadcast his attack in real time on Facebook and other social media.
The events of 15 March were designed to go viral. In the days before the attack, the terrorist used smaller platforms to upload his manifesto, and ten minutes before the attack started, he posted links to the document on platforms like Facebook, Twitter, and 8chan’s notorious “politically incorrect” board. Footage of his 17-minute spree reverberated across the world, enabling countless users to follow him – or, given the first-person gamer perspective enabled by the head-mounted camera, to be him – as he moved from one mosque to another.
Facebook’s overstretched moderator teams – which, among other things, bear the responsibility of deciding whether live content flagged by users is harmful – removed 1.5 million copies of the video from the platform within the first 24 hours after the attack. Beyond social media (where it was often viewed inadvertently as it auto-played in news feeds), the footage was embedded in many news reports around the world (Media Monitoring Africa says, for example, that some South African media showed excerpts of the video). In New Zealand, at least 8,000 people who had seen the footage reportedly called mental health helplines in the first ten days after the attack.
Two weeks after the attack, New Zealand’s government had already dealt with the terrorist’s first set of weapons by reforming the country’s permissive gun laws. And two months after the attack, the country’s Prime Minister, Jacinda Ardern, took a first step towards dismantling the second weapon.
Together with French President Emmanuel Macron, Ardern invited world leaders, technology companies, and civil society and academic stakeholders to Paris this past week to discuss her proposal – dubbed the Christchurch Call – which encourages tech companies and governments to act against “terrorist and violent extremist content online”. The non-binding, voluntary Call has already been signed by 18 governments and institutions (including the UK, Canada, and the European Commission) and eight major tech companies (including Facebook, Twitter, and Google, which owns YouTube).
While the Call might have laudable intentions, it is plagued by a host of procedural and substantive concerns. Many of these were raised during a closed meeting between Ardern and civil society and academic leaders on Tuesday. For several hours, she listened to the issues that make governing platforms and online content particularly tricky from a human rights perspective. Examples include why content removal often proves impractical or can harm the Internet’s fundamental infrastructure; how technology and algorithmic curation are “downgrading” humans; and why there is sometimes an evidentiary need to archive and catalogue harmful online content rather than simply delete it (to prosecute war crimes and terrorism in Syria and elsewhere, for example).
Yet these are almost all problems that relate to the governance of platform harms as a whole, not just “terrorist and violent extremist content” on social media. And there seems to be growing global agreement that digital giants like GAFA (Google, Apple, Facebook, Amazon) in particular have too much power and should no longer be left to mark their own homework (or self-regulate).
In only the past two months, for example, the UK and France have each made legislative proposals for combatting platform harms through the establishment of new regulators and/or the imposition of a ‘duty of care’ on platforms. While the legitimacy and efficacy of these proposals are still contested from human rights and technical perspectives, platform harms do need to be addressed, and the current, predominantly market-led approach does not seem to be working very well. But as Ardern said at the civil society meeting, “this is an incredibly complex area and our response can’t be to regulate immediately.”
Given that Senegal is the only African country to have supported the Call, how will it ricochet across Africa? With only about half of the continent connected to the Internet, most of our policymakers currently prioritise the positive potential of digital inclusion, albeit with disappointing results. Both policymakers and development practitioners tend to neglect that no country or user is isolated from the risks that accompany our growing dependence on the so-called “Fourth Industrial Revolution” (a problematic World Economic Forum notion that President Cyril Ramaphosa also seemingly supports).
The Internet, as a ‘network of networks’, is only as strong as its most vulnerable link. To prevent online risks from manifesting as harms, our policymakers must deal with contextual problems like developing institutional and human capacities, especially for vulnerable populations. For example, new users with lower literacy skills (an “offline”, demand-side problem) are likely to be more vulnerable to online risks because they tend to lack the skills needed to identify, compare, check, manage, report or otherwise respond to harmful content like the Christchurch footage.
While we must put relevant safeguards in place, fears about the poorly understood impact of platform harms should not become pretexts for clamping down on dissent. We have already seen this elsewhere in the global South: after the recent Easter bombings in Sri Lanka, for example, the country’s government blocked several social media platforms, allegedly to quell disinformation. In Africa, the risks of platform harms have too often become justifications for shutting down the Internet or imposing a labyrinth of taxes and tariffs on the fundamental right to participate online (the United Nations recognises Internet access as a human right).
At the Paris meeting, Ardern repeatedly stressed that the Christchurch Call is only “a first step”. Let’s hope that it is a first step towards a broader discussion about platform harms, not towards overhasty governance responses. A broader discussion that pays special attention to the users, communities and regions most susceptible to online risks. A broader discussion that helps us turn the Internet into a tool for good rather than another weapon for deepening inequalities – both between individuals, communities and regions on our continent, and between Africa and the rest of the connected world.
This blog was first published by Daily Maverick.