Children and AI: The challenge of protecting children’s rights


RIA researcher Alex Comninos participated in a “Children and Artificial Intelligence (AI) in Africa” workshop hosted by UNICEF and supported by the Government of Finland from 13 to 14 February 2019. Comninos contributed to the event by delivering a presentation on how people access and use the internet in Africa, based on RIA’s After Access: Assessing Digital Inequality in Africa survey, specifically highlighting constraints on children’s access.

The workshop formed part of a consultation process that UNICEF has been involved in since 2019, with the aim of producing policy guidelines for children and AI. Comninos highlighted two issues, content blocking and surveillance, that were discussed at the workshop.

Content Blocking

According to Comninos, content blocking or filtering for children commonly emerges as a solution because schools, and perhaps society as a whole, believe that this is what children need. In South Africa, for instance, the Law Reform Commission (SALRC) proposed a bill that would have made it illegal to issue devices to children without blocking “turned on”. The SALRC did not specifically use the word “AI”, but it stated that the issue would be addressed through a bill implementing the blocking of pornography on all devices in order to protect children.

However, as Comninos pointed out, the problem with the content blocking approach, which predates AI, is that it is inaccurate because it relies on keyword matching. This becomes particularly problematic when a child looking for advice, for example on sex education or abuse, is blocked from accessing important information.
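To see why keyword matching over-blocks, consider a minimal sketch of a naive keyword filter. The blocklist and example queries below are invented for illustration; they are not taken from any real filtering product.

```python
# Hypothetical sketch of a pre-AI, keyword-based content filter,
# illustrating how it blocks legitimate help-seeking queries.

BLOCKED_KEYWORDS = {"sex", "abuse", "porn"}

def is_blocked(query: str) -> bool:
    """Block any query containing a blocklisted keyword."""
    words = query.lower().split()
    return any(word in BLOCKED_KEYWORDS for word in words)

# A child seeking help or education is blocked alongside harmful searches:
print(is_blocked("how to report abuse at home"))   # True: help-seeking blocked
print(is_blocked("sex education for teenagers"))   # True: education blocked
print(is_blocked("homework help maths"))           # False
```

Because the filter has no sense of context or intent, a query asking how to report abuse is indistinguishable from one seeking harmful content, which is exactly the failure mode described above.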

Most children are abused by someone they know, and would benefit from being empowered to use an anonymous platform like the internet, or another safe space, to discuss the abuse or find advice about it. In this instance, an overzealous web filter does not serve the goal of protecting children.

At the same time, it is also very hard to block domains. Here, AI offers the promise of image recognition: content can be blocked based on what the AI detects. The risk, however, is that the AI reflects the values of society rather than those of the child, and thus does not protect the full range of children’s rights.

Children’s rights are an important issue, which must be developed and “fleshed out” for the digital age. A child has the right to seek help when being abused as well as the right to sexual education.

The same issues that apply to filtering and blocking apply to content moderation. Content has to be moderated on platforms to ensure that it is appropriate for children, but there are never enough human moderators. To protect children, human moderators also have to view material such as child pornography, which takes an intense psychological toll. It is therefore unsurprising that people have suggested AI as a solution.

Comninos contends, “It’s a big risk that people think that AI is a quick fix for child protection.”


Surveillance

The second issue Comninos raised is surveillance. Personalised learning is one of the promises of AI. Here, the child can be profiled in terms of what the AI thinks their emotions, their results and their academic trajectory might be, and personalised content can then be delivered to the child. This augments a lack of capacity and resources in teaching. However, it is not a good idea to paper over a shortage of skilled teachers with a computer. A child still needs a proper education that involves interfacing with a human being.

In addition, many of these systems for personalised education and education management are sold by big tech companies, which also use the data for other purposes. In the UK, for example, a company producing education software for public schools sold information about the children to a gambling company.

RIA’s Contribution to the Discussion

Comninos’ input focused on the access and usage constraints children face in relation to the internet. Access and usage are a major constraint on children benefiting from AI. The number of children using the internet is much lower than the operators claim. The primary means of access has also changed in the past few years: usage has become heavily mobile phone based. Yet a laptop and a high-speed internet connection remain the best platform for creating content; not many people produce CVs or write code on a mobile phone. The issue is the quality of the access: the mobile phone offers only a narrow lens into the interfaces used to create content.

There is also a gender dynamic to access for children. Often it is the man of the household who controls access to devices, and that filters down to children who may wish to use them.

According to Comninos, access is neither fast enough nor deep enough, and this is a huge constraint.

Read more

Read RIA researchers Alex Comninos and Anri van der Spuy’s views on content moderation in the Government Briefing Book on Emerging Technologies (Volume 1), and their views on the SALRC’s proposed content blocking strategy in RIA’s submission on the SALRC’s discussion paper on sexual offences.