OpenAI CEO Sam Altman (right) met with French president Emmanuel Macron (left) last month.
Scroll through Sam Altman’s Twitter page, and you’ll see a feed filled with photos of the OpenAI CEO posing with world leaders.
Altman has met with Indian prime minister Narendra Modi and sat down with South Korean president Yoon Suk Yeol. He’s traveled to Israel, Jordan, Qatar, and the United Arab Emirates. And that’s all just this week.
These meetings come on the heels of Altman’s European tour last month, during which he met with French president Emmanuel Macron and European Commission president Ursula von der Leyen.
Why all this schmoozing with world leaders? For one, the breadth of Altman’s tour shows he is determined to shape the debate on regulating AI following OpenAI’s release of ChatGPT late last year. There’s also a real need to educate national leaders and lawmakers about AI, and Altman, as the head of a leading artificial intelligence company that is helping usher in a new era of the technology, is well positioned to do it.
Of course, he’s not the only CEO holding AI-related meetings with lawmakers. Sundar Pichai, Google’s CEO, met with EU regulators in Brussels in May to discuss the technology, and Anthropic CEO Dario Amodei met with US president Joe Biden the same month to discuss AI’s potential dangers. Clearly, AI CEOs want a seat at the table.
It’s a reversal from a previous era of Big Tech, when CEOs such as Pichai and Meta’s Mark Zuckerberg tended to stay on the sidelines rather than proactively engage with regulators.
The EU is at the forefront of AI regulation
During his meetings in Europe, Altman threatened to pull OpenAI out of the region over how lawmakers were handling AI regulation, a remark he later walked back.
The EU’s legislation would be the first in the world to comprehensively regulate the use of AI. The proposed AI Act would sort AI systems into categories by level of risk. High-risk systems, which include recruitment tools and medical devices, would face compliance obligations such as data-governance requirements. Systems deemed an “unacceptable risk,” such as social scoring (building risk profiles of individuals based on surveillance), would be banned outright. And even lower-risk systems would face transparency rules: people must be notified that they are interacting with an AI system unless it is evident, and deepfakes must be labeled.
The US, meanwhile, is largely relying on the tech industry to come up with its own safeguards, though regulators have acknowledged they might need to get involved in some cases.
It’s a fine balance: Regulate the technology too narrowly and you might fail to catch certain harms; regulate it too broadly and you could stifle innovation, as Johann Laux, who studies the legal implications of AI at the Oxford Internet Institute, told Euronews. It’s a debate that’s likely to grow louder as AI’s capabilities and influence grow. In the meantime, Altman is determined to shape regulation before it shapes him.