A dispute between ChatGPT's parent company, OpenAI, and one of the company's founders, billionaire and tech entrepreneur Elon Musk, will play out in a federal court in Oakland, California, beginning April 27.
Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation.
OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company, xAI, is misrepresenting facts to gain a competitive edge.
Why We Wrote This
Elon Musk claims that he was misled by OpenAI, which he co-founded. A centerpiece of the trial is likely to be what role a company should have in ensuring responsible development of artificial intelligence.
At the heart of the case is a dispute about the direction of artificial intelligence and how much responsibility technology companies bear for the public good.Â
David Tuffley, a lecturer at Griffith University's School of Information and Communication Technology in Australia, calls the trial a "test case" for AI ethics.
"I think this current lawsuit is going to be a very interesting step in the direction of clarifying just how responsible a corporation is," Mr. Tuffley says.
What are the roots of Mr. Musk's dispute with OpenAI?
Mr. Musk says now-CEO Sam Altman approached him in 2015 and asked him to help start a nonprofit AI company that would be "for the benefit of humanity." Mr. Musk says he believed, for example, that the company would distribute its research openly and focus on safety, not just profits. He says he contributed the majority of the company's funding in its early years.
Now, OpenAI has expanded to become one of the world's most prominent AI companies. Its signature product, ChatGPT, has more than 700 million weekly users, according to the company.
In 2025, OpenAI finalized its transition to a for-profit model.
After leaving OpenAI, Mr. Musk started xAI. In 2024, he sued OpenAI, Mr. Altman, and OpenAI's president, Greg Brockman, for up to $134 billion in damages. He says he would donate any compensation he wins to OpenAI's nonprofit arm. Mr. Musk is also seeking to have Mr. Altman and Mr. Brockman removed as officers in the company, and has asked the court to revert OpenAI to a nonprofit.
In court filings, the company claims Mr. Musk was aware of and open to the company's plans to switch to a for-profit entity, saying he left when the company refused to give him full control.
"This case has always been about Elon generating more power and more money for what he wants," the company said in a social media post April 7.
What's at stake?
Mr. Musk says his lawsuit's purpose is to compel OpenAI to return to its founding principles, which he says it has violated by prioritizing profit over safety.
"OpenAI's conduct could have seismic implications for Silicon Valley and, if allowed to stand, could represent a paradigm shift for technology start-ups," wrote Mr. Musk in court filings.
He has come under scrutiny for his own AI product, Grok, a chatbot on the social media platform X that users have accused of generating harmful sexualized images and videos.
Anton Leicht, a visiting scholar at the Carnegie Endowment for International Peace who researches the political economy of AI, says it's true that OpenAI has moved away from parts of its founding mission.
However, he thinks the way the company has narrowed its scope is realistic given the way AI investment has taken off since OpenAI's founding.
"I think [the trial] reveals the tension between trying to push AI capabilities and doing all these other things," including altruism, he says.
Jeffrey Saviano, an AI ethicist who advises corporate boards and government officials, worries that the debate around OpenAI's for-profit versus nonprofit status is missing the point. He hopes the trial raises bigger questions about AI and responsibility: Why have so few companies articulated the boundaries of what they will tolerate from their AI systems? How should AI deployers, and not just developers, be held accountable?
"This is the leadership moment of our time," he says. "It's hard to imagine being an effective leader, for-profit or nonprofit, today unless you have an appreciation and you understand what responsible AI development and deployment looks like."
How has OpenAI come under scrutiny recently?
Mr. Musk isn't the first to claim that OpenAI is shirking its ethical responsibilities. The company faces multiple other lawsuits: The New York Times accuses it of illegally using the newspaper's articles to train ChatGPT, for example, and several suits allege that ChatGPT gave harmful advice to people in mental health crises, in some cases to people who later died by suicide.
On April 9, Florida Attorney General James Uthmeier launched an investigation into OpenAI, saying, among other things, that ChatGPT may have been used to assist a gunman who fatally shot two people at Florida State University last year.
OpenAI says it is "dedicated to the safe and beneficial development of artificial general intelligence." Last year, it made updates to ChatGPT that it said were aimed at improving the platform's interactions with people experiencing a mental health crisis.
In February, many members of the public, as well as industry insiders, sided with OpenAI competitor Anthropic when it refused to sign a contract with the Pentagon that it feared could open the door for unethical use of its AI technology. OpenAI stepped in to fill Anthropic's place, though it had similar, but less binding, restrictions.
The Department of Defense then blacklisted Anthropic, though the company is challenging that in court.
Whereas OpenAI has moved away from policy advocacy that doesn't directly affect its business interests, "Anthropic is kind of still trying to do it all," says Mr. Leicht. That "comes with real costs."
But OpenAI's approach has come with its own costs. Its Pentagon contract ignited a massive backlash, with uninstalls of the ChatGPT app jumping 295% the day after the contract was announced. Mr. Altman also came under scrutiny in April when a New Yorker magazine investigation quoted multiple people questioning whether he could be trusted to lead development of this powerful technology.
Mr. Altman did not respond directly to the New Yorker article. But shortly after it ran, Mr. Altman published a sweeping blueprint for how policymakers could mitigate AI's harms by establishing a social contract. The plan calls for, among other things, taxing businesses that replace human employees with robots, and creating a public investment fund to distribute the returns from AI profits to the public.
