The administration is stepping up efforts to authenticate official communications from the White House after an artificial intelligence (AI) deepfake of President Joe Biden’s voice was used last month to deceive primary voters in New Hampshire.
As Pandora’s box of AI opens wide, technologists and ethicists alike are scrambling for ways to combat AI-generated deepfakes. The White House says it is exploring the use of cryptographic verification to mark official content as authentic.
“We know that increasingly powerful technologies are making it easier to do things like clone voices or fake videos,” Ben Buchanan, the White House special adviser for AI, told Business Insider earlier this month. “We want to manage some of the risk while not hindering the creativity that can come with more powerful tools.”
Buchanan said the Biden administration met with more than a dozen AI developers in 2023 and secured agreements from companies to place watermarks on their products before releasing them to the public. That approach has its limits, however, because watermarks can be manipulated or removed entirely.
“[On] the government side, we are in the process of developing watermarking standards through the new AI Safety Institute and the Department of Commerce, so we have clear rules of the road and guidance on how to address some of these tricky watermarking and content provenance issues,” Buchanan told Yahoo Finance.
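To make the challenge concrete, here is a minimal sketch, in Python with NumPy, of the simplest kind of invisible watermark: hiding one bit in each pixel’s least significant bit. It is a toy illustration, not the standard the AI Safety Institute is developing, and the function names are ours; it shows why watermarks are considered fragile, since a single round of lossy re-encoding erases the mark.

```python
# A toy least-significant-bit watermark; function names are illustrative,
# and this is not the standard under development at the AI Safety Institute.
import numpy as np

def embed_watermark(pixels: np.ndarray, mark_bits: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least significant bit with a watermark bit."""
    return (pixels & 0xFE) | mark_bits

def extract_watermark(pixels: np.ndarray) -> np.ndarray:
    """Read the watermark back out of the low bits."""
    return pixels & 0x01

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

watermarked = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(watermarked), mark)  # survives intact

# A crude stand-in for lossy re-encoding: re-quantize the pixel values.
requantized = (watermarked // 4) * 4
recovered = extract_watermark(requantized)
print("bits surviving re-encoding:", np.mean(recovered == mark))  # ~0.5, chance level
```

Production schemes spread the mark redundantly across the image to survive compression, but making it survive a determined adversary remains an open problem, which is why Buchanan calls these issues tricky.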
A White House spokeswoman did not immediately respond to Decrypt’s request for comment.
Last December, as part of an initiative to combat AI-generated imagery, digital imaging giants Nikon, Sony, and Canon announced a partnership to embed digital signatures in images taken with their cameras.
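The mechanism behind such signatures is standard public-key cryptography. The sketch below, using the Ed25519 implementation in Python’s `cryptography` package, shows the general idea of signing at capture time and verifying later; the camera makers’ actual format, which also binds metadata such as time and location, is more involved, and the key handling here is deliberately simplified.

```python
# A minimal sketch of capture-time signing, using Ed25519 from Python's
# `cryptography` package. Key handling is simplified: in a real camera the
# private key would live in secure hardware.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

image_bytes = b"raw sensor data plus capture metadata"
signature = camera_key.sign(image_bytes)  # produced at the moment of capture

# Anyone holding the camera's public key can later verify authenticity.
try:
    public_key.verify(signature, image_bytes)
    print("image matches what the camera signed")
except InvalidSignature:
    print("image was altered after capture")

# Changing even a single byte invalidates the signature.
tampered = image_bytes.replace(b"raw", b"fake")
try:
    public_key.verify(signature, tampered)
    print("tampering went undetected")
except InvalidSignature:
    print("tampering detected")
```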
Last week, the Biden administration announced the launch of a U.S. AI safety research institute whose participants include OpenAI, Microsoft, Google, Apple, and Amazon. The administration said the institute was created in response to Biden’s executive order on AI issued last October.
Developing AI to detect AI deepfakes has become a cottage industry. However, some argue against this practice, saying it only makes the problem worse.
“Using AI to detect AI is ineffective. Instead, it creates an endless arms race, and the bad guys will always win,” Rod Boothby, co-founder and CEO of identity verification company IDPartner, told Decrypt. “The solution is to flip the problem around and allow people to prove their identity online. Using their bank identity is a clear solution to the problem.”
Boothby pointed to the banking sector’s use of “persistent authentication” to ensure that people behind otherwise anonymous Internet sessions are who they say they are.
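In practice, persistent authentication means re-checking identity throughout a session rather than only at login. The sketch below illustrates one simple way to do that with a short-lived HMAC token; the token format, lifetime, and shared secret are illustrative assumptions, not IDPartner’s or any bank’s actual protocol.

```python
# Illustrative assumptions only; not IDPartner's or any bank's real protocol.
import hashlib
import hmac
import time

BANK_SECRET = b"secret-held-by-the-identity-provider"
TOKEN_LIFETIME_SECONDS = 60  # short lifetime forces frequent re-verification

def issue_token(user_id: str, now: float) -> str:
    """Identity-provider side: bind a verified identity to a timestamp."""
    ts = str(int(now))
    mac = hmac.new(BANK_SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{ts}:{mac}"

def verify_token(token: str, now: float) -> bool:
    """Service side: re-check the token on every sensitive action."""
    user_id, ts, mac = token.rsplit(":", 2)
    expected = hmac.new(BANK_SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    is_fresh = now - int(ts) < TOKEN_LIFETIME_SECONDS
    return hmac.compare_digest(mac, expected) and is_fresh

token = issue_token("alice@examplebank", now=time.time())
assert verify_token(token, now=time.time())            # accepted while fresh
assert not verify_token(token, now=time.time() + 120)  # stale: must re-verify
```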
For cybersecurity and legal scholar Star Kashman, protecting yourself from deepfakes comes down to awareness.
“Especially when it comes to robocalls and AI-generated phone scams, raising awareness can prevent a lot of damage,” Kashman told Decrypt via email. “Once families become aware of common AI voice-call scams, where a scammer calls pretending to have kidnapped a family member and uses AI to imitate that person’s voice, the individual receiving the call can verify the relative’s whereabouts before paying what turns out to be a fake ransom for someone who never went missing.”
Advances in generative AI have made it increasingly easy to deceive the general public, as demonstrated recently by a robocall campaign aimed at discouraging Democratic voters from participating in last month’s New Hampshire primary.
The Biden robocalls were traced to a Texas-based telecommunications company. After issuing a cease-and-desist order to Lingo Telecom, LLC, the Federal Communications Commission adopted a ruling making robocalls that use AI-generated voices illegal.
Kashman said awareness is the best defense against AI deepfake fraud, but acknowledged that the threat also requires government intervention.
“In the case of deepfakes, knowledge does not prevent individuals from creating these fakes,” Kashman said. “But knowledge could put more pressure on the government to pass legislation making the non-consensual creation of deepfakes illegal at the federal level.”