Former OpenAI safety researcher says ‘security was not a priority’

Former OpenAI safety researcher Leopold Aschenbrenner said the company’s security practices were “grossly inadequate.” In a video interview with Dwarkesh Patel posted Tuesday, Aschenbrenner described internal conflicts over priorities and suggested that the company favored rapid growth and deployment of AI models at the expense of safety.

He also said he was fired for putting his concerns in writing.

In a wide-ranging four-hour conversation, Aschenbrenner told Patel that he had written an internal memo detailing his concerns last year and had shared it with reputable experts outside the company. A few weeks later, after a major security incident, he said he decided to share an updated version of the memo with a few board members. He was fired from OpenAI shortly afterward.

“The context that might be helpful is the kinds of questions they asked me when they fired me. The questions were about my views on AI progress, on AGI, on the appropriate level of security for AGI, on whether the government should be involved in AGI, on whether I and the alignment team were loyal to the company, and on what I did during the OpenAI board events,” Aschenbrenner said.

Artificial general intelligence (AGI) refers to AI that meets or exceeds human intelligence in all domains, regardless of how it was trained.

Following Sam Altman’s brief dismissal as CEO, loyalty to the company, and to Altman personally, emerged as a key issue. More than 90% of employees signed a letter threatening to resign in solidarity with him, and popularized the slogan “OpenAI is nothing without its people.”

“Despite being pressured to do so, I did not sign the employee letter during the board events,” Aschenbrenner recalled.

The Superalignment team, led by Ilya Sutskever and Jan Leike, was responsible for building long-term safety practices to ensure that AI remains aligned with human intentions. The departure of prominent members of that team, including Sutskever and Leike, drew further scrutiny. The team was subsequently disbanded, and a new safety committee was announced, led by CEO Sam Altman, who also sits on the OpenAI board of directors to which the committee reports.

Aschenbrenner said OpenAI’s actions contradicted its public statements about safety.

“Another example is when I raised security issues. They would say security is our top priority,” he said. “But whenever it came time to invest serious resources or make trade-offs to take basic measures, security was not a priority.”

This is consistent with Leike’s statements that his team had been “sailing against the wind” and that, under Altman’s leadership, “safety culture and processes have taken a backseat to shiny products.”

Aschenbrenner also expressed concern about the development of AGI, emphasizing the importance of a cautious approach. In particular, he worries that China is working hard to surpass the United States in AGI research.

“China will go all out to infiltrate American AI labs with billions of dollars and thousands of people,” he said. “AGI won’t just be a cool product; the survival of liberal democracy will depend on it.”

Just a few weeks earlier, it was revealed that OpenAI required departing employees to sign restrictive non-disclosure agreements (NDAs) that prevented them from disclosing anything about the company’s safety practices.

Aschenbrenner said he had not signed such an NDA, even though roughly $1 million in equity was at stake.

In response to these growing concerns, a group of about a dozen current and former OpenAI employees signed an open letter demanding the right to report misconduct by the company without fear of retaliation.

The letter, supported by industry figures such as Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, highlights the need for AI companies to commit to transparency and accountability.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter reads. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” it continues. “We are not the first to encounter or speak about these issues.”

As news of the restrictive employment provisions spread, Sam Altman claimed he was unaware of the situation and assured the public that his legal team was working to resolve the matter.

“There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication,” he tweeted. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

OpenAI later said it had released all employees from the controversial non-disparagement agreement and removed the clause from its departure documents.

OpenAI did not respond to Decrypt’s request for comment.
