
Twitter touts ‘seamless’ blocking of child abuse content as EU steps up scrutiny of Elon Musk’s platform

Social media giant Twitter said a new system to prevent the spread of child sexual abuse material (CSAM) on its platform had been “seamlessly deployed” as it tested technology developed by non-profit organization Thorn.

The Twitter Safety account announced Tuesday that it had participated in a beta test of the group’s AI-based Safer solution to proactively detect, remove and report text-based material containing child sexual exploitation.

“Through our ongoing partnership with Thorn, we are going the extra mile to create a safe platform,” the Twitter Safety account wrote. “This work builds on our ongoing efforts to eradicate child sexual exploitation online, with the specific goal of expanding our capacity to combat harmful content that puts children at imminent risk.”

“This self-hosted solution deployed seamlessly into our detection mechanisms, allowing us to focus our investigation on high-risk accounts,” the account continued.

Founded in 2012 by actors Demi Moore and Ashton Kutcher, Thorn develops tools and resources focused on protecting children from sexual abuse and exploitation. Last April, Google, Meta, and OpenAI signed a pledge published by Thorn and fellow nonprofit All Tech is Human, committing to enforce guardrails around their AI models.

“We learned a lot from the beta test,” said Rebecca Portnoff, vice president of data science at Thorn. “We knew that child sexual abuse appears in all types of content, including text, but this beta test specifically showed how machine learning/AI for text can make a real-world impact at scale.”

As Portnoff explained, the Safer AI model consists of a language model trained on text related to child safety and a classification system that generates multi-label predictions for text sequences. Prediction scores range from 0 to 1, indicating the model’s confidence that a given text is relevant to each child safety category.
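To make the mechanics concrete, here is a minimal sketch of how a multi-label text classifier of this kind typically produces per-category confidence scores. This is not Thorn’s published implementation: the category names and logit values below are hypothetical, and the sketch only assumes the standard approach of applying an independent sigmoid to each label so every score lands between 0 and 1.

```python
import numpy as np

# Hypothetical categories: Thorn has not published Safer's actual label set.
LABELS = ["category_a", "category_b", "category_c"]

def sigmoid(x: np.ndarray) -> np.ndarray:
    """Squash raw scores into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-x))

def score_text(logits: np.ndarray) -> dict:
    """Convert per-label logits from a classifier head into independent
    0-1 confidence scores. A per-label sigmoid (not a softmax) is used
    because the task is multi-label: one text can match several categories."""
    confidences = sigmoid(logits)
    return {label: round(float(c), 3) for label, c in zip(LABELS, confidences)}

# Example: raw outputs a fine-tuned language-model head might emit
# for one text sequence (values invented for illustration).
print(score_text(np.array([2.1, -0.7, 0.4])))
# -> {'category_a': 0.891, 'category_b': 0.332, 'category_c': 0.599}
```

Scores near 1 flag a sequence as highly relevant to a category, which is what lets a system like this surface high-risk accounts for human investigation rather than making removal decisions on its own.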

Portnoff could not disclose which other social media platforms are participating in the beta testing of the Safer suite, but said the response from other companies has been positive.

“Several of our partners have shared that this model is particularly useful for identifying harmful child sexual abuse activity, prioritizing reported messages, and supporting investigations of known bad actors,” Portnoff said.

The proliferation of generative AI tools following the launch of ChatGPT in 2022 has led internet watchdogs such as the UK-based Internet Watch Foundation to sound the alarm over a flood of AI-generated CSAM being distributed on dark web forums, warning that it threatens to overwhelm the internet.

The announcement from Twitter’s safety team came hours before the European Union demanded the company explain reports of “reduced content moderation resources.”

Elon Musk’s cost-cutting measures have shrunk the platform’s content moderation team by nearly 20% since October 2023 and cut the number of languages covered from 11 to seven, according to Twitter’s latest transparency report to EU regulators.

“The Commission is also seeking additional details on risk assessment and mitigation measures related to the impact of generative AI tools on electoral processes, the dissemination of illegal content, and the protection of fundamental rights,” the regulator said.

The EU opened formal proceedings against Twitter in December 2023 over concerns that it had breached the Digital Services Act in several areas, including risk management, content moderation, ‘dark patterns’, and researchers’ access to data.

The Commission said Twitter must provide the requested information by May 17 and answer the remaining questions by May 27.

Edited by Ryan Ozawa.
