As photorealistic AI-generated photos and videos proliferate online, tech companies and watchdog groups are racing to develop tools to identify fake content.
Watermarking computer-generated images is a commonly proposed solution: an invisible flag, in the form of hidden metadata, reveals that an image was created with a generative AI tool. But researchers have found a major flaw in this approach: such watermarks can be easily removed using adversarial techniques.
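To see why such marks are fragile, here is a minimal, purely illustrative sketch (not any vendor's actual scheme): a bit pattern hidden in pixel least-significant bits reads back perfectly from an untouched copy, but is destroyed by the kind of small perturbations an adversary, or even routine re-encoding, can introduce.

```python
# Minimal sketch of a naive "invisible" watermark and why it is easy to erase.
# The 8-bit flag and the LSB embedding are illustrative assumptions, not a real scheme.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical flag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the least-significant bits of the first pixels."""
    marked = pixels.copy()
    marked.flat[: WATERMARK.size] = (marked.flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked

def extract(pixels: np.ndarray) -> np.ndarray:
    """Read the least-significant bits back out."""
    return pixels.flat[: WATERMARK.size] & 1

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image)
print("intact copy:", np.array_equal(extract(marked), WATERMARK))   # True

# Simulate lossy re-encoding or an adversarial perturbation with tiny random noise.
noise = np.random.randint(-2, 3, marked.shape)
noisy = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print("after noise:", np.array_equal(extract(noisy), WATERMARK))    # very likely False
```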
Now, major camera manufacturers are proposing the exact opposite approach: placing watermarks on “real” photos.
Nikon, Sony and Canon recently announced a joint initiative to include digital signatures on images taken directly from high-end mirrorless cameras. According to Nikkei Asia, the signature cryptographically authenticates the digital provenance of each photo by incorporating key metadata such as date, time, GPS location, and photographer details.
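As a rough illustration of how such a signature could work, here is a minimal sketch assuming an Ed25519-style scheme in Python. The key handling, metadata fields, and JSON layout are illustrative assumptions, not the manufacturers' actual format.

```python
# A minimal sketch: the camera holds a private key and signs a digest of the image
# bytes plus key metadata, so any later alteration invalidates the signature.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # in practice provisioned inside the camera
public_key = camera_key.public_key()        # published so anyone can verify

image_bytes = b"...raw sensor data..."      # stand-in for the captured photo
metadata = {
    "datetime": "2023-12-30T10:15:00Z",
    "gps": [35.6762, 139.6503],
    "photographer": "Example Photographer",
}

# Sign a digest of image + metadata; editing either one breaks the signature check.
payload = hashlib.sha256(image_bytes + json.dumps(metadata, sort_keys=True).encode()).digest()
signature = camera_key.sign(payload)
print("signature:", signature.hex()[:32], "...")
```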
Nikon says it will launch this feature in its upcoming line of professional mirrorless cameras. Sony plans to issue a firmware update to add digital signatures to its mirrorless cameras, while Canon plans to introduce a camera with built-in authentication in 2024 and to offer video watermarking later that year.
According to Nikkei, the goal is to give photojournalists, media professionals and artists irrefutable evidence of the authenticity of their images. The tamper-evident signature persists through editing and helps prevent misinformation and fraudulent use of their photos online.
To support this, the three companies collaborated on an open standard for interoperable digital signatures called “Verify.” Once the feature ships, photos taken with compatible hardware can be checked through the free web-based tool, allowing anyone to confirm their authenticity.
If an AI-generated image is submitted to the verification system without a valid signature, it will be flagged as having “No Content Credentials.”
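Under the same illustrative assumptions as the sketch above, the verification side might look roughly like this; the function name and labels are hypothetical, with only the “No Content Credentials” wording taken from the reported behavior.

```python
# A minimal sketch of verification: accept a photo whose signature checks out against
# the camera maker's public key, and flag anything unsigned or tampered with.
import json
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_photo(public_key, image_bytes, metadata, signature) -> str:
    """Return a verification label in the spirit of the Verify service."""
    if signature is None:
        return "No Content Credentials"
    payload = hashlib.sha256(image_bytes + json.dumps(metadata, sort_keys=True).encode()).digest()
    try:
        public_key.verify(signature, payload)
        return "Authenticated"
    except InvalidSignature:
        return "No Content Credentials"

# Demo: a signed camera capture passes; an unsigned AI render does not.
key = Ed25519PrivateKey.generate()
meta = {"datetime": "2023-12-30T10:15:00Z"}
real = b"...camera capture..."
sig = key.sign(hashlib.sha256(real + json.dumps(meta, sort_keys=True).encode()).digest())
print(verify_photo(key.public_key(), real, meta, sig))                 # Authenticated
print(verify_photo(key.public_key(), b"...ai render...", {}, None))    # No Content Credentials
```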
Rather than retroactively flagging AI content, this approach authenticates real photos at the source. Like other watermarking systems, however, its success depends on widespread adoption (more hardware manufacturers incorporating the standard) and on robust implementation (keeping the cryptography ahead of attempts to break it).
Reportedly, recent studies have shown that watermark-removal techniques can strip embedded signatures, rendering current watermarking methods unreliable. But such removal only turns a watermarked image into an unwatermarked one, leaving viewers with fewer tools for judging an image's provenance.
Anti-watermarking technologies could also strip authenticity signatures from real photos, but this is less problematic than removing watermarks from AI-generated fakes. Why? Stripping the watermark from an AI deepfake makes it easier to pass fake content off as real. But if an authenticity signature is stripped from a real photo, what remains is still an image captured by a camera, not one produced by a generative model. Even if the cryptographic evidence is lost, the underlying content is still real.
In this case, the main risk is related to attribution and rights management, not the veracity of the content. An image may be poorly sourced or used without proper licensing, but it does not inherently mislead the viewer about the reality it represents.
OpenAI recently announced an AI-based deepfake detector that it claims identifies generated images with 99% accuracy. However, AI detectors remain imperfect and require constant updates to keep pace with advances in generative technology.
The recent surge in deepfake sophistication has highlighted the need for such a strategy. As 2023 made clear, distinguishing real from manipulated content has become more critical than ever. With politicians and technology developers alike struggling to find workable solutions, any help from these companies is welcome.
Edited by Ryan Ozawa.