
Social Media Platforms Like TikTok and Snapchat Join Forces to Combat AI-Generated Child Abuse Content

A coalition comprising major social media platforms, AI developers, governments, and NGOs has united to address the issue of abusive content generated by AI.

The policy statement, released on 30 October, has 27 signatories, among them the governments of the United States, Australia, Korea, Germany, and Italy, as well as social media platforms including Snapchat, TikTok, and OnlyFans.

The published policy paper stated:

Artificial Intelligence (AI) presents enormous opportunities to help tackle the threat of online child sexual abuse. It has the potential to transform and enhance the ability of industry and law enforcement to detect child sexual abuse cases. To realise this, we affirm that we must develop AI in a way that is for the common good of protecting children from sexual abuse across all nations.

The statement acknowledges the potential benefits of AI in countering online threats, especially concerning child sexual abuse.

However, it emphasises the need for vigilance against malicious use, as AI can also be employed to produce harmful content.

As part of its commitment, the coalition has pledged to work collaboratively to understand and address the risks AI poses to efforts to combat child sexual abuse.

Transparency and open dialogue among stakeholders are encouraged, along with the establishment of effective policies at the national level.

We resolve to work together to ensure that we utilise responsible AI for tackling the threat of child sexual abuse and commit to continue to work collaboratively to ensure the risks posed by AI to tackling child sexual abuse do not become insurmountable. We will seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora. All actors have a role to play in ensuring the safety of children from the risks of frontier AI.

Concerns over Children in the AI Era

The United Kingdom cited data from the Internet Watch Foundation (IWF) revealing that users of a dark web forum had circulated close to 3,000 AI-generated images of child sexual abuse material.

Released ahead of the upcoming global summit on AI safety in the U.K., the statement underscores growing concern over child safety as AI technology advances.

Recently, the U.S. has also taken action, with several states filing a lawsuit against Meta, the parent company of Facebook and Instagram, citing similar concerns regarding child safety.

* Original content written by Coinlive. Coinbold is licensed to distribute this content by Coinlive.

Coinlive
Coinlive is a media company focused on making blockchain simpler for everyone. We cover exclusive interviews, host events, and feature original articles on our platforms.