
AI assistant users may develop ‘emotional attachments’ to them, Google warns.

Virtual personal assistants powered by artificial intelligence are becoming commonplace across technology platforms, with every major tech company adding AI to its services and dozens of professional services coming to market. Researchers at Google say that while these assistants are very useful, they can cause humans to become too emotionally attached to them, leading to a number of negative social consequences.

A new research paper from Google’s DeepMind AI lab highlights the potential benefits of advanced, personalized AI assistants to transform many aspects of society, saying they could “fundamentally change the nature of work, education, and creative pursuits, as well as how we communicate, coordinate, and negotiate with one another, ultimately influencing who we want to be and to become.”

Of course, if AI development continues to accelerate without thoughtful planning, these massive impacts could become a double-edged sword.

One major risk? The formation of inappropriately close bonds, which could be further exacerbated if the assistant is presented with a human-like representation or face. “These artificial agents may express platonic or romantic affection for users, laying the groundwork for users to form long-term emotional attachments to the AI,” the paper says.

If left unchecked, such attachments could lead to a loss of autonomy for users and the erosion of their social ties, as the AI comes to replace human interaction.

This risk is not purely theoretical. Even when AI was in a relatively primitive state, an AI chatbot proved influential enough to convince a user to take their own life after a long conversation in 2023. Eight years ago, an AI-based email assistant named ‘Amy Ingram’ was realistic enough that some users sent it love notes and even tried to visit “her” at work.

Iason Gabriel, a research scientist on the DeepMind ethics research team and co-author of the paper, did not respond to Decrypt’s request for comment.

But Gabriel warned in a tweet that “increasingly personal and human-like forms of assistants raise new questions about anthropomorphism, privacy, trust and the appropriate relationship with AI.”

Gabriel believes that more safeguards and a more holistic approach to this new social phenomenon are needed because “millions of AI assistants could be deployed at a societal level, interacting with one another and with non-users.”

The research paper also discusses the importance of value alignment, safety, and misuse in the development of AI assistants. Although AI assistants can improve users’ well-being, enhance their creativity, and help them optimize their time, the authors warn of additional risks, such as misalignment with user and societal interests, the imposition of values on others, use for malicious purposes, and vulnerability to adversarial attacks.

To address these risks, the DeepMind team recommends developing a comprehensive evaluation of AI assistants and accelerating the development of socially beneficial AI assistants.

“We are currently at the beginning of an era of technological and social change. We therefore have a window of opportunity to act now, as developers, researchers, policymakers, and public stakeholders, to shape the kind of AI assistants that we want to see in the world,” the researchers write.

AI misalignment can be mitigated through Reinforcement Learning from Human Feedback (RLHF), a technique used to train AI models. Experts like Paul Christiano, who ran the language model alignment team at OpenAI and now leads the nonprofit Alignment Research Center, warn that improper management of AI training methods could end in disaster.
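To give a rough sense of what RLHF involves, the sketch below shows the preference-modeling step that typically precedes the reinforcement learning stage: a small reward model is trained to score responses that human labelers preferred higher than ones they rejected. This is only an illustrative toy, not code from the DeepMind paper or any production system; the class and variable names (RewardModel, preference_loss, the random stand-in embeddings) are hypothetical, and PyTorch is used purely for demonstration.

```python
# Minimal, illustrative sketch of the reward-model step commonly used in RLHF.
# Hypothetical names throughout; not drawn from the paper discussed above.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a fixed-size response embedding to a scalar score."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the human-preferred response should
    # receive a higher reward than the rejected one.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# One toy training step on random "embeddings" standing in for pairs of
# model outputs that human labelers compared.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

chosen = torch.randn(8, 16)    # embeddings of responses labelers preferred
rejected = torch.randn(8, 16)  # embeddings of responses labelers rejected

loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.4f}")
```

In a full RLHF pipeline, the trained reward model would then be used as the reward signal for fine-tuning the assistant itself with a reinforcement learning algorithm; how that training is managed is precisely where experts like Christiano see the risk.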

On the Bankless podcast last year, Christiano said that “there is a 10 to 20 percent chance that AI will take over” and that “many (or) most humans will die,” adding, “I take it very seriously.”

Edited by Ryan Ozawa.
