
Vitalik Buterin highlights AI risks amid OpenAI leadership upheaval

Ethereum co-founder Vitalik Buterin has called "superintelligent" artificial intelligence (AI) "dangerous" in response to OpenAI's ongoing leadership changes.

On May 19, Cointelegraph reported that Jan Leike, OpenAI's former head of alignment, had resigned after reaching a "breaking point" with management over the company's core priorities.

Leike argued that at OpenAI, "safety culture and processes have taken a backseat to shiny products," with many pointing to developments around artificial general intelligence (AGI).

AGI is expected to be a type of AI that matches or exceeds human cognitive abilities. Industry experts are already starting to worry, saying the industry is ill-equipped to manage such superintelligent AI systems.

These sentiments appear consistent with Buterin's views, which he shared in a post on X.

Source: Vitalik Buterin

Buterin emphasized open models running on consumer hardware as a "hedge" against a future in which a small number of large corporations can read and mediate most human thoughts.

"Such models are also much lower in terms of risk of doom than corporate megalomania or militaries."

This marks his second commentary on AI and its growing capabilities in the past week.

On May 16, he claimed that OpenAI's GPT-4 model had passed the Turing test, a benchmark for judging whether an AI model's conversation is indistinguishable from a human's. He cited new research claiming that most humans cannot tell when they are talking to a machine.

Related: Microsoft’s new ‘Black Mirror’ recall feature records everything you do

But Buterin is not the first to express such concerns. The UK government has also recently scrutinized the growing involvement of Big Tech in the AI sector, raising issues related to competition and market power.

Groups such as 6079 AI are already popping up across the internet, advocating for decentralized AI to make the technology more democratized and less dominated by Big Tech.

Source: 6079.ai

This follows the May 14 departure of Ilya Sutskever, OpenAI co-founder and chief scientist and another senior member of its leadership team, who announced his resignation.

Sutskever did not address concerns about AGI. However, in his post on X, he expressed confidence that OpenAI will develop AGI that is "safe and beneficial."

Magazine: 'Sic AIs on each other' to prevent AI apocalypse: Sci-fi author David Brin