
Claude AI Chatbot declared off-limits to political candidates

If Joe Biden wants a smart, friendly AI chatbot to answer his questions, his campaign team won’t be able to use Anthropic’s ChatGPT competitor Claude, the company announced today.

“We do not allow candidates to use Claude to build chatbots that can pretend to be themselves, and we do not allow anyone to use Claude for targeted political campaigns,” the company said in a statement. Violations of the policy will result in a warning and, ultimately, suspension of access to Anthropic’s services.

Anthropic’s public statement of its “election misuse” policy comes as the potential for AI to generate large quantities of false and misleading information, images and videos is raising alarms around the world.

Meta implemented rules limiting the use of AI tools in politics last fall, and OpenAI has a similar policy.

Anthropic said its election safeguards fall into three main categories: developing and enforcing policies related to election issues, evaluating and testing models against potential misuse, and guiding users to accurate voting information.

Anthropic’s Acceptable Use Policy, which all users ostensibly agree to before accessing Claude, prohibits the use of its AI tools for political campaigning and lobbying. The company said violators will receive warnings and, ultimately, service suspensions, with a human review process in place.

The company also performs rigorous “red teaming” on its systems. That is, trusted partners systematically and aggressively attempt to “jailbreak” Claude or use it for nefarious purposes.

“We test how our systems respond to prompts that violate our acceptable use policies, such as prompts requesting information about voter suppression tactics,” Anthropic explains. The company also said it had developed a series of tests to ensure “political parity,” that is, comparable representation across candidates and topics.

In the US, Anthropic has partnered with TurboVote to provide voters with trustworthy information rather than AI-generated answers.

“When a U.S.-based user requests voting information, a pop-up gives the user the option to redirect to TurboVote, a resource from the nonpartisan organization Democracy Works,” Anthropic explained. The company said it plans to roll out similar measures in other countries in the coming weeks.

As previously reported, OpenAI, the company behind ChatGPT, is taking similar action and redirecting users to the nonpartisan website CanIVote.org.

Anthropic’s efforts are consistent with a broader movement within the technology industry to address the challenges AI poses to democratic processes. For example, the Federal Communications Commission recently outlawed the use of AI-generated deepfake voices in robocalls, a decision that highlights the urgency of regulating the application of AI in politics.

Like Meta, Microsoft announced initiatives to combat misleading AI-generated political ads, introducing “content credentials as a service” and launching an election communications hub.

OpenAI has already had to address a specific case of a candidate’s AI doppelgänger: the company suspended a developer’s account after discovering he had created a bot that imitated presidential candidate Rep. Dean Phillips. The suspension came after the nonprofit group Public Citizen filed a petition addressing the misuse of AI in political campaigns and asked regulators to ban generative AI in political campaigning.

Anthropic declined to comment further, and OpenAI did not respond to inquiries.
