
Introduction
Online violence, or violence mediated through various technologies, is rapidly increasing and becoming more ruthless. Studies indicate that up to 58% of women and girls have been targeted online. In the Netherlands, the National Rapporteur has found that, over the period 2020-2024, sexual violence against children increased, and that Artificial Intelligence (AI) plays a role in that increase. By way of example, it was recently discovered that Grok, the AI-based chatbot of the social media platform X, could, at the request of a user, remove clothing from images of people (including minors) without their knowledge or consent.
For the purpose of this blog, the broad term cyberviolence covers several types of harmful and abusive behaviour, such as image-based abuse, extortion, harassment, doxxing, and cyberstalking. While technology-facilitated violence is not a new phenomenon, the emerging use of AI has an immense impact on the scale, efficiency, speed, and means by which cyberviolence is committed.
On 25 November 2025, the ‘Artificial Intelligence and Cyberviolence’ workshop organized by Dr. Irene Kamara, from the Tilburg Institute for Law, Technology, and Society, with the support of the Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) in Amsterdam, brought together speakers from law enforcement, academia, and legal practice to examine how artificial intelligence and digital technologies are affecting sexual violence and exploitation, particularly in relation to children and women. The speakers – with backgrounds in law, social sciences, data science, children’s rights, and psychology – raised a broad range of themes, including AI-generated child sexual abuse material (CSAM), platform responsibility and prevention, criminal law responses, and the protection of victims within the EU and international legal framework.
AI Risks
When a new technology emerges, especially one with disruptive potential such as AI, one searches for guardrails in the law and in regulation more generally. However, current harm-based legal models are challenged in a context where AI tools generate manipulated images without physical contact or the direct involvement and consent of the portrayed person. The current EU approach is largely outcome-focused, in that it regulates the production, distribution, and dissemination of AI-generated CSAM once it exists, rather than addressing the tools, systems, or design choices that enable its production. Furthermore, definitional inadequacies in several instruments and international frameworks lead to broader shortcomings of existing legal approaches (e.g. pornographic material is defined as ‘visually depicted’ sexually explicit conduct in the Council of Europe Cybercrime Convention, while in reality such material can also take audio form). Live-streamed abuse is an example of abusive behaviour that blurs the distinction between preparation and execution, rendering traditional offence structures increasingly inadequate. Legislative efforts to criminalise preparatory and facilitative conduct are attempts to better reflect and capture how abuse unfolds in digital environments. Existing instruments such as the Lanzarote Convention, while addressing parts of the problem, such as certain definitional issues (e.g. including simulated sexually explicit conduct under Article 20 on ‘child pornography’), face issues of enforcement across the parties to the convention.
Regulating AI and Cyberviolence
EU legislation on child sexual abuse is currently under reform so that it takes into account cases of image-based sexual abuse that consist of synthetically fabricated images (deepfakes). Furthermore, the AI Act and the Gender Violence Directive (EU) 2024/1385 create obligations for providers and deployers of AI systems concerning transparency about interaction with an AI system and risk mitigation, alongside provisions on victim protection and the criminalisation of behaviours such as cyberflashing and non-consensual intimate image sharing. However, regulatory attention tends to focus on large platforms, often leaving the risks stemming from the activity of smaller services insufficiently addressed. Directive (EU) 2024/1385 is in fact an example of a technology-aware legal instrument: it considers the impact of AI on the perpetration of abuse against women and girls by punishing non-consensual intimate image-based abuse, including where “the material appreciably resembles an existing person, objects, places or other entities or events, depicts the sexual activities of a person, and would falsely appear to other persons to be authentic or truthful” (Recital 19).
From the victims’ perspective, the issue of delayed disclosure, meaning that victims disclose the abusive behaviour several years later or only after they become adults, is a significant one. Plans to extend statutes of limitation to several years after the victim reaches the age of majority seem appropriate given the harm and suffering caused to victims.
In this context, prevention and structural safeguards, such as legally mandated risk assessments by online platforms and AI system providers, have an important role to play. Online sexual abuse, and violence more generally, often emerges at the intersection of content, contact, and conduct risks, which are shaped by platform design and governance. Rather than excluding children from digital services, age-appropriate design, proportionate safeguards, and the embedding of privacy, safety, and security into design – supported by effective reporting mechanisms and accessible tools for guardians – are important for prevention and governance.
Law on the Ground: Lessons from Practice
After examining in depth whether laws and regulatory frameworks are fit for purpose and how legislation should approach the phenomenon of AI-facilitated violence, the discussion highlighted that, in parallel to legislative scrutiny, practical obstacles and good practices should be explored. From a law enforcement perspective, while AI has not created entirely new forms of CSAM, it has significantly amplified existing harms and, when used by offenders, created additional obstacles for criminal investigations. In addition, much of the activity takes place on the dark web, a hidden part of the Internet that is not indexed by search engines. Enforcement of existing rules appears uneven due to the diverse capacity of states to implement these rules effectively. Moreover, where evidence is analysed with the support of AI tools, concerns about explainability, bias, and human oversight arise. In the absence of specific frameworks on AI and evidence, the regular criminal law evidentiary standards apply, such as relevance, reliability, and chain of custody. At the same time, however, AI-based tools facilitate and accelerate the process when used by law enforcement (e.g. tattoo recognition based on AI-fabricated data).
Providers of the infrastructure surrounding the production, dissemination, and consumption of CSAM are at the core of this debate, in particular as regards the allocation of responsibility between platforms, law enforcement authorities, and, increasingly, developers of AI systems. Legal assistance, in particular, is a mechanism through which private actors are required to support CSAM investigations. At EU level, assistance obligations are embedded across multiple instruments relevant to CSAM, including the e-Evidence Regulation and, most recently, the proposed CSA Regulation. Legal assistance obligations must be assessed through a proportionality lens, taking into account not only the rights of victims and children, but also Article 16 of the EU Charter of Fundamental Rights, which protects the freedom to conduct a business. While not absolute, this right remains relevant when evaluating the cumulative burden imposed on platforms. Instruments such as the Digital Services Act (DSA) increasingly require platforms to integrate CSAM-related safeguards directly into product architecture, including risk assessments, age-verification mechanisms, internal monitoring processes, transparency duties, and user-reporting tools.
Key takeaways
a. AI is no longer at the periphery of violence
AI increasingly structures the forms of violence, accelerates its scale, and complicates our ability to identify, prevent, and respond to harm. In fact, it can be said that violence is coded into platforms and information and communication systems. Whether through sexual deepfakes, grooming chatbots, AI-generated CSAM, avatar-mediated exploitation, or algorithmic tools that enable manipulation and coercion, it is becoming clear that traditional frameworks for understanding violence and protecting people are being challenged by the availability and use of Artificial Intelligence.
b. AI as an amplifier of violence
AI has intensified gender-based violence and violence against vulnerable groups by making abuse easier to generate, scale, and monetize. Algorithmic systems replicate existing societal biases and amplify them through recommender systems, chatbots, sexbots, and virtual environments. Harms differ significantly across age groups, meaning that victims (particularly children and adolescents) require tailored protection rather than uniform solutions. The rise of avatar-mediated abuse in AI-driven immersive worlds challenges our understanding of embodiment, autonomy, and consent. The expansion of AI-generated influencers challenges notions of identity, autonomy, and sexual privacy.
c. Operational realities and the significance of prevention
Law enforcement faces an increasingly complex operational reality. AI blurs the line between the real and the synthetic, complicates victim identification, and enables offenders to target children at scale and with sophistication. Investigators must distinguish between human victims and AI-generated targets, handle new forms of live-streaming abuse, and make difficult prioritisation decisions due to capacity constraints. Prevention, too, must evolve: the work of the European Commission’s Joint Research Centre underlines the importance of addressing perpetration pathways early, scientifically, and across jurisdictions. More emphasis should be placed on responsible AI use, not just the design of AI systems. While platforms may prohibit abusive behaviour through community standards, accountability for AI-facilitated sexual violence remains fragmented across developers, platforms, and users.
d. Victim-oriented approach
Scholarship highlights severe and lasting impacts on victims, including high levels of psychological harm, underreporting, secondary victimisation, and inadequate institutional responses. Harms are unevenly distributed: marginalised groups face greater risks and have fewer avenues for support. Consent is not only a privacy issue but also a matter of dignity and moral harm. The role of first responders is essential, and there should be clear, evidence-based thresholds to guide intervention.
Overall, while AI poses significant challenges in relation to cyberviolence, effective protection of individuals requires aligning law, design-based regulation, enforcement, and victim support.
Bios
Dr. Irene Kamara works as Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT). She is also an affiliated researcher at the Vrije Universiteit Brussel (VUB) and a qualified attorney-at-law.

Bilgehan Korucuoğlu is a privacy and data protection lawyer with experience in regulatory compliance and AI governance, and in advising leading tech companies on complex privacy issues.
