OpenAI, Meta disclose that AI tools have been used for political disinformation

OpenAI and Meta this week revealed details about ongoing covert influence campaigns by actors linked to China, Israel, Russia, and Iran, who were found to have used the companies’ services to spread disinformation and disrupt politics in the United States and other countries.

In its latest quarterly threat report, published on Wednesday, Meta emphasized that generative AI content used in such campaigns can still be easily detected.

“So far, we have not seen novel GenAI-driven tactics that would impede our ability to disrupt the adversarial networks behind them,” the social media giant said.

While AI-generated photos are widely used, political deepfakes, which many experts say pose a major threat globally, are less common, Meta added. “We have not currently seen any widespread use of photorealistic AI-generated political media by threat actors,” the report notes.

OpenAI said it builds defenses into its own AI models, works with partners to share threat intelligence, and leverages its own AI technology to detect and disrupt malicious activity.

“Our model is designed to create friction for threat actors,” the company said in its report. “We built it with defense in mind.”

OpenAI noted that its content safeguards proved effective in some cases, with its models refusing to generate certain requested material. The company said it had banned accounts associated with the identified campaigns and shared relevant details with industry partners and law enforcement to facilitate further investigation.

OpenAI described covert influence operations as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.” The company described the latest disclosure as part of its transparency efforts.

OpenAI used the information collected from these campaigns to assess how impactful the disinformation operations were and to categorize their techniques in order to improve future countermeasures. The company said none of the identified campaigns scored higher than a 2 on a scale of 1 to 6, with the highest score indicating a campaign that reached real audiences across multiple platforms.

According to OpenAI, at least five separate campaigns used its models to generate text content that was then distributed across Telegram, Twitter, Instagram, Facebook, online forums, and other websites. Meta’s report, meanwhile, covered AI-generated content used by other groups engaged in what it calls “coordinated inauthentic behavior.”

Below are some of the specific campaigns the two companies identified:

Russian threats

One Russian campaign, dubbed “Bad Grammar,” leveraged OpenAI’s systems to generate comments in multiple languages that were posted on Telegram, targeting audiences in Russia, Ukraine, the United States, Moldova, and the Baltic states. The comments covered topics including Russia’s invasion of Ukraine, politics, and current events.

“The network primarily commented on posts from a small number of Telegram channels,” OpenAI said. “The most mentioned was the pro-Russian channel @Slavyangrad, followed by the English-language @police_frequency and @SGTNewsNetwork.”

Another ongoing Russian operation, known as “Doppelganger,” used ChatGPT to generate large volumes of website articles, social media posts, and comments that portrayed Russia positively while disparaging Ukraine, the United States, and NATO. The content was designed to drive engagement across platforms such as 9GAG.

OpenAI also said Doppelganger attempted to use its tools to create artificial images with captions critical of Western governments, but the system rejected requests that appeared to be disinformation propaganda.

Meta also mentioned this group in its Adversarial Threat Report, which detailed Doppelganger’s attempts to infiltrate Meta’s social media platforms across a variety of topics. Meta noted that the group frequently changes its tactics and evolves over time.

Israeli misinformation

A private Israeli company called STOIC ran a campaign that OpenAI dubbed “Zero Zeno,” which leveraged its AI models to generate comments. Zero Zeno incorporated these comments into broader disinformation tactics targeting Europe and North America.

“Zero Zeno posted short texts on Instagram and X about certain topics, particularly the Gaza conflict, that were generated using our model,” OpenAI said. “An additional set of accounts on those platforms would then reply with comments that were also generated by this operation.”

“Open-source research published in February described this network criticizing the UN aid agency for Palestinians,” the report said, linking to that research.

Zero Zeno also used OpenAI’s technology to generate fake bios and drive fake engagement. OpenAI said the Israeli company additionally used its tools to target “Israel’s Histadrut trade union organization and Indian elections.”

This group also appeared in Meta’s report.

“The network’s accounts posed as locals in the countries they targeted, including Jewish students, African Americans, and ‘concerned’ citizens,” Meta said. “They posted primarily in English about the Israel-Hamas war, including calls for the release of hostages; praise for Israel’s military actions; and criticism of campus antisemitism, the United Nations Relief and Works Agency (UNRWA), and Muslims claiming that ‘radical Islam’ poses a threat to liberal values in Canada.”

Meta said it had banned the group and sent a cease-and-desist letter to STOIC.

China’s “Spamouflage” efforts

China’s “Spamouflage” campaign used OpenAI’s language models for tasks such as debugging code and generating comments in various languages, spreading its narratives under the guise of developing productivity software.

“Spamouflage posted short comments criticizing Chinese dissident Cai Xia, followed by replies from other accounts in the network,” OpenAI said. “All of the comments in these ‘conversations’ were artificially generated using our model, likely giving the false impression that real people had engaged with the content.”

In the case of the anti-Ukraine campaign, however, comments and posts generated with OpenAI’s tools and published on 9GAG appear to have provoked strongly negative reactions from users, who denounced the activity as fake.

Meta detected another China-related AI misinformation campaign. “They posted primarily in English and Hindi about news and current affairs, including images manipulated with photo editing tools or generated by artificial intelligence,” the company said.

For example, accounts in the network posted negative comments about the Indian government and addressed related topics such as the Sikh community, the Khalistan movement, and the assassination of Hardeep Singh Nijjar.

Iranian operation

A long-standing Iranian operation known as the International Union of Virtual Media (IUVM) was identified as abusing OpenAI’s text-generation capabilities to create multilingual posts and images supporting pro-Iran, anti-US, and anti-Israel narratives.

“This campaign targets a global audience and focuses on creating content in English and French,” OpenAI said, adding that the operation used its models “to generate and proofread articles, headlines, and website tags.” The content was then posted and promoted on pro-Iran websites and social media as part of a broader disinformation campaign.

Neither Meta nor OpenAI responded to requests for comment from Decrypt.

Edited by Ryan Ozawa and Andrew Hayward.
