Jan Leike, former head of OpenAI’s alignment and “superalignment” initiatives, took to Twitter (aka X) on Friday to explain why he left the AI developer on Tuesday. In the tweet thread, Leike pointed to a lack of resources and of safety focus as reasons for his decision to resign from the ChatGPT maker.
OpenAI’s Alignment, or Superalignment, team is responsible for safety and for creating more human-centric AI models.
Leike’s departure is the third high-profile exit from OpenAI since February. On Tuesday, OpenAI co-founder and former chief scientist Ilya Sutskever also announced he was leaving the company.
“Quitting this job was one of the hardest things I’ve ever done,” Leike wrote, “because we urgently need to figure out how to steer and control AI systems much smarter than we are.”
Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.
— Jan Leike (@janleike) May 17, 2024
Leike said he believed OpenAI was the best place to do AI research, but noted that he had been disagreeing with the company’s leadership about its core priorities for some time.
“Building machines that are smarter than humans is an inherently risky endeavor,” Leike warned. “But over the past few years, safety culture and processes have taken a backseat to shiny products.”
Commenting on the risks of artificial general intelligence (AGI), Leike said that while OpenAI bears an “enormous responsibility,” the company is more focused on achieving AGI than on safety, and that his team had struggled to get the computing resources it needed for its research.
Artificial general intelligence, also referred to as the singularity, describes an AI model that can not only solve problems across a variety of domains as a human would, but can also teach itself and solve problems its training did not cover.
On Monday, OpenAI unveiled several new updates to its flagship generative AI product, ChatGPT, including the faster, smarter GPT-4o model. According to Leike, his former team is working on several projects related to more intelligent AI models.
Before joining OpenAI, Leike worked as an alignment researcher at Google DeepMind.
“It has been such a wild journey over the past three years,” Leike wrote. “My team launched the first-ever LLM trained with reinforcement learning from human feedback (RLHF) with InstructGPT, published the first scalable oversight on LLMs, (and) pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon.”
According to Leike, serious conversations about what it means to achieve AGI are long overdue.
“We must prioritize preparing as best we can,” Leike continued. “Only then can we ensure that AGI benefits all of humanity.”
Leike didn’t mention any plans of his own in the thread, but he encouraged OpenAI to prepare for when AGI becomes a reality.
“Learn to feel the AGI,” he said. “Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed.”
“I am counting on you,” he concluded. “The world is counting on you.”
Leike did not immediately respond to Decrypt’s request for comment.
Edited by Andrew Hayward