Meta counters fake news created by AI with ‘invisible watermark’

Social media giant Meta, formerly known as Facebook, will add an invisible watermark to all images created with its artificial intelligence (AI) tools, stepping up its measures to prevent misuse of the technology.

In a December 6 report detailing updates to Meta AI, the company's virtual assistant, Meta said it would soon add invisible watermarking to all AI-generated images created with the "Imagine with Meta AI" experience. Like many other AI chatbots, Meta AI generates images and content based on user prompts, but the company aims to prevent malicious actors from treating the service as just another tool to defraud the public.

The upcoming watermarking feature is intended to make it more difficult for a creator to remove the watermark from an image. Meta stated:

“In the coming weeks, we will be adding invisible watermarking to images through the Meta AI experience to increase transparency and traceability.”

Meta said it will use a deep learning model to apply watermarks that are invisible to the human eye to images created with its AI tools, while a corresponding model will be able to detect those invisible watermarks.
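
Meta has not published the details of its watermarking model, but the general principle behind invisible watermarking can be illustrated with a toy sketch: a pseudo-random, low-amplitude pattern derived from a secret key is added to the pixel values, and a detector correlates the image against the same pattern. The Python example below is purely illustrative, a classic spread-spectrum approach rather than Meta's deep-learning method; names such as WATERMARK_KEY and the chosen amplitude are assumptions.

```python
# Toy spread-spectrum watermark: illustrative only, NOT Meta's deep-learning method.
import numpy as np

WATERMARK_KEY = 42   # hypothetical secret seed shared by the embedder and the detector
ALPHA = 2.0          # embedding strength; small enough to be imperceptible

def _pattern(shape, key):
    """Pseudo-random +/-1 pattern (approximately zero-mean) derived from the key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key=WATERMARK_KEY, alpha=ALPHA):
    """Add the low-amplitude pattern to a grayscale image given as a float array."""
    return np.clip(image + alpha * _pattern(image.shape, key), 0, 255)

def detect(image, key=WATERMARK_KEY, threshold=1.0):
    """Correlate the image with the key pattern; a brightness shift adds a constant,
    which the mean-subtraction cancels, so detection survives simple color changes."""
    pattern = _pattern(image.shape, key)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.uniform(0, 255, size=(256, 256))
    marked = embed(original)
    print(detect(original))       # (False, ~0): no watermark present
    print(detect(marked))         # (True, ~ALPHA): watermark detected
    print(detect(marked + 20.0))  # still detected after a brightness shift
```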

Unlike traditional watermarks, Meta claims the AI watermark applied by Imagine with Meta AI is "resilient to common image manipulations such as cropping, color changes (brightness, contrast, etc.), screenshots and more." The watermarking service will initially apply only to images created through Meta AI, but the company plans to bring it to other Meta services that use AI-generated images.

In the latest update, Meta also introduced a "reimagine" feature for Facebook Messenger and Instagram, which lets users send and receive AI-generated images from one another. As a result, both messaging services will receive the invisible watermarking feature as well.

Related: Tom Hanks, MrBeast and Other Celebrities Warn About AI Deep Fake Scams

AI services such as DALL-E and Midjourney already allow traditional watermarks to be added to the content they create. However, such watermarks can be removed simply by cropping the edges of the image. Moreover, certain AI tools can automatically remove watermarks from images, which Meta claims is not possible with its output.

Since generative AI tools became mainstream, numerous entrepreneurs and celebrities have come forward to warn about AI-based fraud campaigns. Scammers use readily available tools to create fake videos, audio and images of well-known people and spread them across the internet.

In May, an AI-generated image showing an explosion near the Pentagon, the headquarters of the U.S. Department of Defense, caused a brief drop in the stock market.

The fake image was later picked up and circulated by other media outlets, creating a snowball effect. However, local authorities, including the Pentagon Force Protection Agency, which is responsible for the building's security, confirmed that they were aware of the circulating report and that "no explosion or accident occurred."

That same month, human rights advocacy group Amnesty International fell for AI-generated images depicting police brutality and used them in campaigns against the authorities.

AI-generated image used by Amnesty International. Source: Twitter

"We removed the image from our social media posts because we do not want criticism of our use of AI-generated images to distract from our core message of supporting victims and demanding justice in Colombia," said Erika Guevara-Rosas, Americas director at Amnesty International.

Magazine: Proposal to regulate cryptocurrencies in the U.S. amid fears and doubts from lawmakers