This working paper examines the intersection of social media, generative AI (GenAI), and gender-based violence (GBV) online. It highlights urgent policy concerns around how opaque engagement markets and emerging AI technologies are enabling new forms of digital harm against women and, by extension, other similarly marginalised groups.
The key policy concerns identified include:
- Proliferation of non-consensual intimate imagery facilitated by accessible GenAI tools;
- Amplification of misogynistic content through engagement-driven algorithms;
- Privacy violations and exploitation of women’s online personas;
- Normalisation and trivialisation of online GBV.
RIA’s research, based on interviews with South African women, found that:
- Women experience frequent harassment, objectification and violations of consent online;
- GenAI is perceived as a significant threat that could exacerbate existing forms of online GBV;
- Cultural norms and intersecting identities shape women’s experiences of, and responses to, online abuse;
- Self-protective measures like private accounts are common but insufficient;
- There is a concerning trend of desensitisation to online misogyny.
While not losing sight of patriarchy as the root cause of GBV, policymakers have room to develop regulations that incentivise the swift removal of synthetic intimate media. Other steps to consider include implementing proactive detection systems for manipulated or synthetic content, and establishing or extending legal frameworks that give victims of online GBV avenues for recourse. Platforms also have a role to play: they can proactively adopt and enforce stronger protections against online GBV when it is expressed through synthetic content, including demonetising users who post such content.
This working paper argues that policymakers must recognise how GenAI is fundamentally altering the media landscape that women navigate. Urgent multi-stakeholder action is needed to develop regulatory guardrails, enhance platform governance, and operationalise AI ethics principles. By addressing both technical systems and underlying gender hierarchies, interventions can be better integrated with broader efforts to promote democracy, economic development and human rights.
This paper forms part of the multi-national research project, Resisting information disorder in the Global South, funded by the IDRC.