ChatGPT maker introduces GPT-4o.
The leadership shake-up at the world’s most influential startup, OpenAI, has once again brought to the fore the debate between AI accelerationists, who advocate rapid AI deployment to solve major global challenges, and those who advise assessment and mitigation of potential risks before building machines that are smarter than humans.
Just days after the ChatGPT maker unveiled its most advanced AI model, GPT-4o, its superalignment co-lead Jan Leike and cofounder and former chief scientist Ilya Sutskever announced their exits from the company.
Why did the leaders quit?
In AI research, superalignment essentially means ensuring that advanced AGI (artificial general intelligence) systems pursue the same goals as the people who build and use them.
Leike, who was leading OpenAI’s AI safety team, said, “I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
He stated categorically that “safety culture and processes have taken a backseat (at OpenAI) to shiny products”, adding that his team had been starved of computing capacity for a long time.
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike said in a post on X.
What is OpenAI’s reaction to the exits?
“There’s no proven playbook for how to navigate the path to AGI,” OpenAI CEO Sam Altman and cofounder Greg Brockman wrote in a joint statement on X, even as they acknowledged the work done by Leike and Sutskever.
“We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks,” they said.
OpenAI, they said, believes that as AI models become more capable, they will “take actions on their (humans’) behalf, rather than talking to a single model with just text inputs and outputs”.
“We think such systems will be incredibly beneficial and helpful to people, and it will be possible to deliver them safely, but it’s going to take an enormous amount of foundational work,” they added.
What happened previously at OpenAI?
This is not the first time differences of opinion among OpenAI’s leaders have surfaced. Back in November 2023, Sutskever, along with other board members, ousted Altman in one of the most shocking coups at a tech firm.
The company’s roughly 700 employees and its investors rebelled strongly against the move and, realizing the damage, the board rehired Altman as CEO within days.
Sutskever publicly apologized for his involvement in voting out Altman. However, his board position was not renewed, and he continued as chief scientist before calling it quits last week.
How did the world react?
Soon after the news broke, X (formerly Twitter) was flooded with memes, deepfake videos, and parody accounts of Altman trolling the situation.
One account wrote, “Who needs a safety team when AI systems will become sentient and benevolent themselves?”
Some serious AI commentators, meanwhile, felt it is unfair to expect OpenAI to shoulder this burden on humanity’s behalf. “Open source and sunlight will do much better for responsible AI than secretive opaque entities,” one user wrote.
The debate between AI accelerationists and AI ethicists is likely to play out at the global AI Summit being hosted by South Korea and Britain in Seoul starting on Tuesday.
Published On May 21, 2024 at 10:34 AM IST