Everything happens so much. I'm in Seoul for the International AI summit, the half-year follow-up to last year's Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a "virtual" leaders' meeting.
When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The hot AI summer is upon us:
The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems.
There has been no pause. The Bletchley declaration, signed by UK, US, EU, China and others, hailed the "enormous global opportunities" from AI but also warned of its potential for causing "catastrophic" harm. It also secured a commitment from big tech firms including OpenAI, Google and Mark Zuckerberg's Meta to cooperate with governments on testing their models before they are released.
While the UK and US have established national AI safety institutes, the industry's development of AI has continued … OpenAI released GPT-4o (the o stands for "omni") for free online; a day later, Google previewed a new AI assistant called Project Astra, as well as updates to its Gemini model. Last month, Meta released new versions of its own AI model, Llama … And in March, the AI startup Anthropic, formed by former OpenAI staff who disagreed with Altman's approach, updated its Claude model.
Then, the weekend before the summit kicked off, everything kicked off at OpenAI as well. Most eye-catchingly, perhaps, the company found itself in a row with Scarlett Johansson over one of the voice options available in the new iteration of ChatGPT. Having approached the actor to lend her voice to its new assistant, an offer she declined twice, OpenAI launched ChatGPT-4o with "Sky" talking through its new capabilities. The similarity to Johansson was immediately obvious to all, even before CEO Sam Altman tweeted "her" after the presentation (the name of the Spike Jonze film in which Johansson voiced a super-intelligent AI). OpenAI denied the similarity, but the Sky voice option has since been removed.
More importantly, though, the two men leading the company/nonprofit/secret villainous organisation's "superalignment" team – which was devoted to ensuring that its efforts to build a superintelligence don't end humanity – quit. First to go was Ilya Sutskever, the co-founder of the organisation and leader of the boardroom coup which, temporarily and ineffectually, ousted Altman. His exit raised eyebrows, but it was hardly unforeseen. You come at the king, you best not miss. Then, on Friday, Jan Leike, Sutskever's co-lead of superalignment, also left, and had a lot more to say:
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising "shiny products" over safety, revealing that he quit after a disagreement over key aims reached "breaking point".
Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority. "Over the past years, safety culture and processes have taken a backseat to shiny products," he wrote.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote, adding that it was getting "harder and harder" for his team to do its research.
"Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote, adding that OpenAI "must become a safety-first AGI [artificial general intelligence] company".
Leike's resignation note was a rare insight into dissent at the group, which has previously been portrayed as almost single-minded in its pursuit of its – which sometimes means Sam Altman's – goals. When the charismatic chief executive was fired, it was reported that almost all staff had accepted offers from Microsoft to follow him to a new AI lab set up under the House of Gates, which also has the largest external stake in OpenAI's corporate subsidiary. Even when a number of staff quit to form Anthropic, a rival AI company that distinguishes itself by talking up how much it focuses on safety, the amount of shit-talking was kept to a minimum.
It turns out (surprise!) that's not because everyone loves each other and has nothing bad to say. From Kelsey Piper at Vox:
I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI", has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
Barely a day later, Altman said the clawback provisions "should never have been something we had in any documents". He added: "we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement. this is on me and one of the few times I've been genuinely embarrassed running openai; i did not know this was happening and i should have." (Capitalisation model's own.)
Altman didn't address the wider allegations of a strict and broad NDA; and, while he promised to fix the clawback provision, nothing was said about the other incentives, carrot and stick, offered to employees to sign the exit paperwork.
As set-dressing goes, it's perfect. Altman has been a significant proponent of state and interstate regulation of AI. Now we see why it might be necessary. If OpenAI, one of the biggest and best-resourced AI labs in the world, which claims that safety is at the root of everything it does, can't even keep its own team together, then what hope is there for the rest of the industry?
Sloppy
The "Shrimp Jesus" is an example of the outlandish AI-generated art being shared on Facebook
It's fun to watch a term of art developing in front of your eyes. Post had junk mail; email had spam; the AI world has slop:
"Slop" is what you get when you shove artificial intelligence-generated material up on the web for anyone to view.
Unlike a chatbot, the slop isn't interactive, and is rarely intended to actually answer readers' questions or serve their needs.
But like spam, its overall effect is negative: the lost time and effort of users who now have to wade through slop to find the content they're actually seeking far outweighs the profit to the slop creator.
I'm keen to help popularise the term, for much the same reasons as Simon Willison, the developer who brought its emergence to my attention: it's crucial to have easy ways to talk about AI done badly, to preserve the ability to acknowledge that AI can be done well.
The existence of spam implies emails that you want to receive; the existence of slop entails AI content that is desired. For me, that's content I've generated myself, or at least that I'm expecting to be AI-generated. No one cares about the dream you had last night, and no one cares about the response you got from ChatGPT. Keep it to yourself.