AI undressing: Deepfake nude services surge in popularity

The scourge of malicious deepfake creation has spread far beyond the realm of celebrities and public figures, and a new report on Non-Consensual Intimate Images (NCII) finds the practice is only growing as image generators develop and proliferate.

A report from social media analytics firm Graphika on Friday said “AI undressing” is on the rise, describing the practice of using fine-tuned generative AI tools to remove clothing from images uploaded by users.

The gaming and Twitch streaming communities grappled with this issue earlier this year, when prominent broadcaster Brandon ‘Atrioc’ Ewing accidentally revealed that he had been viewing AI-generated deepfake porn of a female streamer he called a friend.

Ewing returned to the platform in March, contrite, and reported on the weeks of work he had done to mitigate the damage he had caused. But the incident had opened the floodgates for the entire online community.

According to Graphika’s report, the incident was far from a one-off.

“Using data provided by Meltwater, we measured the volume of comments and posts on Reddit and X,” Graphika wrote. “These totaled 1,280 in 2022, compared to 32,100 so far this year, representing a 2,408% increase in volume year-on-year.”
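The year-on-year figure checks out against the two volumes in the quote. A minimal Python sketch, using only the counts Graphika reports (nothing else is assumed):

    # Check the year-on-year growth figure quoted above.
    posts_2022 = 1_280    # Reddit/X comments and posts in 2022, per Graphika
    posts_2023 = 32_100   # comments and posts so far this year, per Graphika

    # Percentage increase relative to the 2022 baseline.
    increase_pct = (posts_2023 - posts_2022) / posts_2022 * 100
    print(f"{increase_pct:.0f}% year-on-year increase")  # prints: 2408% year-on-year increase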

New York-based Graphika said the explosion of NCII shows these tools have moved from niche discussion boards to a full-fledged industry.

“These models enable providers to easily and inexpensively create photorealistic NCII at scale,” Graphika said. “Without such providers, customers would have to host, maintain, and run their own custom image diffusion models, a time-consuming and sometimes expensive process.”

Graphika warns that the growing popularity of AI undressing tools could lead to the creation of fake pornography as well as targeted harassment, sexual exploitation, and child sexual abuse material (CSAM).

According to the Graphika report, developers of AI undressing tools advertise on social media to drive potential users to websites, private Telegram chats, or Discord servers where the tools can be found.

“Some providers are being very public about their activities, saying they offer ‘undressing’ services and posting photos of people they claim to have ‘undressed’ as evidence,” Graphika wrote. “Others are less explicit and introduce themselves as an AI art service or Web3 photo gallery, while including key terms related to synthetic NCII in their profiles and posts.”

While AI undressing tools typically work on photos, AI has also been used to create video deepfakes using the likenesses of celebrities, including YouTube star MrBeast and iconic Hollywood actor Tom Hanks.

Some actors, such as Scarlett Johansson and Indian actor Anil Kapoor, are turning to the legal system to combat the ongoing threat of AI deepfakes. But while mainstream celebrities receive more media attention, adult performers say their voices are rarely heard.

“It’s really difficult,” Tanya Tate, legendary adult performer and public relations director at Star Factory, told Decrypt earlier. “I’m sure it would be a lot easier if someone was in the mainstream.”

Tate explained that even without further advances in AI and deepfake technology, social media is already full of fake accounts using her likeness and content. The ongoing stigma that sex workers face doesn’t help matters, forcing them and their fans to remain in the shadows.

Last October, the Internet Watch Foundation (IWF), a UK-based internet watchdog, noted in a separate report that 20,254 child abuse images were found on a single dark web forum in just one month. The IWF has warned that AI-generated child pornography could “overwhelm” the internet.

The IWF warned that, thanks to advances in generative AI imaging, deepfake porn has evolved to the point where telling the difference between AI-generated images and real ones is increasingly complicated, leaving law enforcement agencies chasing online ghosts instead of actual victims of abuse.

“So there’s this ongoing issue of not being able to tell whether things are real or not,” Dan Sexton, CTO of the Internet Watch Foundation, told Decrypt. “You can’t trust anything that tells you whether something is real or not, because it’s not 100% accurate.”

As for Ewing, Kotaku reported that since his January transgression, the streamer has been working with reporters, technologists, researchers, and women affected by the incident. Ewing also said he sent funds to Ryan Morrison’s Los Angeles-based law firm, Morrison Cooper, to provide legal services to any woman on Twitch who needs help issuing takedown notices to sites posting images of them.

Ewing added that he received research on the depth of the deepfake problem from researcher Genevieve Oh.

“I tried to find the ‘bright spots’ in the fight against this type of content,” Ewing said.

Edited by Ryan Ozawa.
