NEWSINVESTIGATORS

Proliferation Of Sexualised Images Of Youngsters By AI Alarming, UNICEF Warns

News Investigators/ The UN Children’s Fund (UNICEF) says new evidence reveals a proliferation of sexualised images of youngsters generated by artificial intelligence (AI) and a dearth of laws to stop it.

UNICEF said no fewer than 1.2 million youngsters reported that their images had been manipulated into sexually explicit deepfakes in the past year, according to a study conducted across 11 countries.

The “new evidence confirms the scale of this fast-growing threat”, UNICEF said, in a study in collaboration with INTERPOL and the ECPAT global network working to end the sexual exploitation of children worldwide.

“UNICEF is increasingly alarmed by reports of a rapid rise in the volume of AI-generated sexualised images circulating, including cases where photographs of children have been manipulated and sexualised.

“Deepfakes – images, videos, or audio generated or manipulated with Artificial Intelligence (AI) designed to look real – are increasingly being used to produce sexualised content involving children”, a statement on the study read.

“This includes through ‘nudification’, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images”.

UNICEF said in some countries, this represented one in 25 children, the equivalent of one child in a typical classroom, adding: “Children themselves are deeply aware of this risk”.

In some of the study countries, up to two-thirds of children said they worried that AI could be used to create fake sexual images or videos.

The UN agency said levels of concern varied widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures.

“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM).

“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.

“When a child’s image or identity is used, that child is directly victimised,” UNICEF warned.

“Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help.”

The UN children’s agency strongly welcomed the efforts of those AI developers who were implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems.

However, it said the response so far was patchy, and too many AI models were not being developed with adequate safeguards.

The risks could be compounded when generative AI tools were embedded directly into social media platforms where manipulated images spread rapidly.

To address this fast-growing threat, the UN agency issued its Guidance on AI and Children 3.0, with recommendations for policies and systems that uphold child rights, and called for immediate action.

It said governments needed to expand definitions of child sexual abuse material to include AI-generated content and criminalise its creation, procurement, possession and distribution.

“AI developers should implement safety-by-design approaches and robust guardrails to prevent misuse of AI models,” UNICEF stressed.

“Digital companies should prevent the circulation of AI-generated child sexual abuse material, not merely remove it, and strengthen content moderation with investment in detection technologies”, it added.

NAN
