OpenAI Board Defends CEO Sam Altman Amid ‘Toxic Culture’ Claims

Just days after OpenAI announced the formation of a new safety committee, former board members Helen Toner and Tasha McCauley publicly criticized CEO Sam Altman, saying he prioritized profits over responsible AI development, hid key developments from the board, and fostered a toxic environment at the company.

But current OpenAI board members Bret Taylor and Larry Summers today issued a strong defense of Altman, rejecting the accusations and saying Toner and McCauley are seeking to reopen a case that has already been closed. The exchange is playing out in the pages of The Economist.

The former board members fired the first shot, claiming the CEO was unfit.

“Last November, in an effort to salvage this self-regulatory structure, OpenAI’s board of directors removed its CEO,” Toner and McCauley, who played a role in Altman’s ouster last year, wrote on May 26. “In the specific case of OpenAI, we support the board’s action, given its obligation to provide independent oversight and protect the company’s public-interest mission.”

In a published response, Bret Taylor and Larry Summers, who joined OpenAI’s board after Toner and McCauley departed, defended Altman, dismissing the claims and asserting his commitment to safety and governance.

“We do not accept the claims made by Ms. Toner and Ms. McCauley regarding events at OpenAI,” they wrote. “We regret that, rather than moving forward, Ms. Toner continues to revisit issues that were thoroughly examined in the WilmerHale-led review.”

Toner and McCauley did not mention the company’s new safety and security committee directly, but they echoed concerns that OpenAI cannot reliably police itself or its CEO.

“Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote. “We also believe that developments since his return to the company, including his reinstatement to the board and the departure of senior safety-focused talent, bode ill for OpenAI’s experiment in self-governance.”

Former board members said Altman’s “long-standing pattern of behavior” left the company’s board unable to properly oversee “key decisions and internal safety protocols.” But Altman’s current colleagues pointed to the conclusions of an independent review of the conflict commissioned by the company.

“The review rejected the idea that Mr. Altman’s removal was necessary due to AI safety concerns of any kind,” they wrote. “In fact, WilmerHale found that the previous board’s decision was not motivated by concerns about product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Perhaps more problematically, Toner and McCauley accused Altman of fostering a toxic corporate culture.

“Several senior leaders privately shared their serious concerns with the board, saying they believed Mr. Altman had cultivated a ‘toxic culture of lies’ and ‘his behavior could be characterized as psychological abuse.’”

But Taylor and Summers disputed their claims, saying Altman is highly regarded by his employees.

“In six months of nearly daily contact with the company, we have found Mr. Altman highly forthcoming on all relevant issues and consistently collegial with his management team,” they said.

Taylor and Summers also said Altman is committed to working with governments to mitigate the risks of AI development.

The public debate comes amid a turbulent stretch for OpenAI that began with Altman’s brief ouster. Just this month, a former alignment lead joined rival Anthropic after leveling similar accusations against Altman. OpenAI was forced to withdraw a voice model strikingly similar to actress Scarlett Johansson’s after she said she had declined to give her consent. It also came to light that the company had disbanded its Superalignment team and used restrictive NDAs to keep former employees from criticizing the company.

OpenAI has also signed a contract with the Department of Defense to apply GPT technology to military applications. Meanwhile, Microsoft, a major OpenAI investor, has reportedly made similar moves with ChatGPT.

The claims shared by Toner and McCauley appear consistent with statements from former OpenAI researchers who have left the company. “Over the past years, [OpenAI’s] safety culture and processes have taken a backseat to shiny products,” said Jan Leike, who added that his team had been “sailing against the wind.”

Taylor and Summers partially addressed these concerns in their response, citing the new safety committee and its responsibility to “make recommendations to the full board on critical safety and security decisions for all OpenAI projects.”

Toner recently expanded on her claims about Altman’s lack of transparency.

“To give a sense of the sort of thing I’m talking about: when ChatGPT launched in November 2022, the board was given no advance notice,” she said on The TED AI Show podcast earlier this week. “We learned about ChatGPT from Twitter.”

She also said the OpenAI board was unaware that Altman owned the OpenAI Startup Fund, despite his claims that he had no financial stake in OpenAI. The fund invested millions of dollars raised from partners such as Microsoft into other businesses, all without the board’s knowledge. Altman’s ownership of the fund ended in April.

OpenAI did not respond to Decrypt’s request for comment.

Edited by Ryan Ozawa.
