Hello and welcome to Eye on AI.
There’s been a lot of AI news in the past few days, which we’ll get to in the news section below.
But first: If you’re still struggling to wrap your head around today’s generative AI revolution, don’t worry. You’re not alone. Survey after survey shows that while everyone agrees AI is poised to have a massive impact on business, society, and even our own personal relationships, many executives are not confident that their own company has a well-defined strategy for using the technology to reinvent their business and find value. The technology is bewildering, producing results that can seem magical and brilliant one moment and inept and dangerously wrong the next. The legal, regulatory, and ethical issues surrounding the technology can seem like a minefield. (Just ask Google.) Equally concerning are the new data privacy and security risks generative AI introduces. And then there’s the cost: whether you’re purchasing tokens through an API from Google, OpenAI, or Anthropic, or trying to fine-tune an open-source model like Llama 2 on your own cloud-based GPUs, generative AI is expensive.
If you’re looking for answers, insights, and tips to help you decide how your own business should use generative AI to supercharge efficiency and bolster the bottom line, while at the same time sidestepping the significant regulatory, security, and ethical pitfalls, please join me at the Fortune Brainstorm AI Conference in London on April 15 and 16. Those who have attended our previous Brainstorm AI events in San Francisco know how good this event is—and we are so excited to be bringing the conversation to London.
In conjunction with our founding partner Accenture (and sponsor of Eye on AI) and partner Builder.ai, Fortune is convening top minds from companies across Europe, the U.S., and beyond to discuss how business can harness the power of AI, and how we can avoid its risks. The program features one-on-one conversations, panel discussions, and lively roundtable sessions, as well as plenty of time for networking.
I want to call out just a few highlights of the program: Google DeepMind vice president Zoubin Ghahramani and Faculty AI CEO Marc Warner will give us a sense of where the cutting edge of AI is heading and how it will likely transform society. Microsoft chief scientist Jaime Teevan and Accenture’s chief AI officer Lan Guan will discuss generative AI’s impact on the future of work. Shez Partovi, the chief innovation and strategy officer for Royal Philips, will talk to us about AI and health care. Paula Goldman, Salesforce’s chief ethical and humane use officer, and Builder.ai CEO Sachin Dev Duggal will walk us through the intellectual property issues generative AI raises and how to potentially address them. Darktrace CEO Poppy Gustafsson will address responsible AI on a panel that also includes Dame Joanna Shields OBE, the founder and CEO of Precognition.
There will be demos of the latest in synthetic media from hot generative AI startups Synthesia and ElevenLabs, whose CEOs will discuss how these technologies can remake corporate communication, even as they raise scary concerns about disinformation and fraud. Meanwhile, Josh Berger, CBE, the chairman of Battersea Entertainment, and Lynda Rooke, the president of performing arts and entertainment union Equity, will discuss the implications of generative AI for the creative industries.
We have speakers from some of the biggest companies on the Fortune 500 and Fortune 500 Europe, including Royal Dutch Shell, Maersk, Intel, and HSBC. There will also be executives on stage from Palantir and the London Stock Exchange. We’ll hear from Ian Hogarth, who is not only a serial entrepreneur and leading European startup investor but also chairman of the U.K. AI Safety Institute. Connor Leahy, founder and CEO of Conjecture AI and a cofounder of the open-source AI collective EleutherAI, will also talk about AI’s biggest risks and what can be done to head them off.
Helping me lead the conversations will be my fellow Brainstorm AI London cochairs May Habib, founder and CEO of generative AI platform Writer, and Eileen Burbidge, director at Fertifa and partner at Passion Capital, as well as Fortune’s Ellie Austin. Several other talented Fortune editors and journalists will also be on hand to help moderate the discussion and report on the event.
If you come to the conference, you’ll get to chat with us and rub elbows with leading executives from the U.S. and Europe, as well as some of Europe’s best AI venture capital investors, startup founders, academics, and policymakers. If you are an Eye on AI subscriber, a special discount to the conference is available. You can apply to attend here or email BrainstormAI@fortune.com. Please register your interest today. And I’m looking forward to meeting many of you in London in a few weeks’ time!
With that, here’s the AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
AI IN THE NEWS
External review clears Altman to rejoin OpenAI’s expanded board. OpenAI’s nonprofit board said an external review of the events surrounding Sam Altman’s firing on November 17 last year cleared the AI company’s CEO of any wrongdoing that would have mandated his removal. The OpenAI board was forced to reverse its decision to oust Altman after most of OpenAI’s employees threatened to quit if Altman were not rehired. But as part of a compromise that saw several board members instrumental in Altman’s removal resign, Altman was not allowed to rejoin the board—which ultimately controls the for-profit arm of the company—until an outside law firm conducted an inquiry into his firing. WilmerHale, the law firm that conducted the inquiry, said that trust between Altman and the old board had broken down but that the board’s decision did not result from concerns about product safety, the pace of AI development, or OpenAI’s finances, Bloomberg reported.
Altman said in a press conference that he was glad to put the episode behind him. But two members of the previous board involved in Altman’s firing, Helen Toner and Tasha McCauley, posted on X that they had told WilmerHale’s investigators that “deception, manipulation, and resistance to thorough oversight should be unacceptable.” Meanwhile, Mira Murati, OpenAI’s chief technology officer, said she was dismayed by a story in the New York Times based on anonymous sources that portrayed her as a key instigator of Altman’s removal, having raised concerns about his leadership style with the old board. Murati said on X that the old board had asked her for feedback on Altman and that she had “fought their actions aggressively” once the board decided to fire Altman. Altman praised Murati for doing “an amazing job helping to lead the company” since November.
OpenAI announced it was adding several new members to the nonprofit board: Sue Desmond-Hellmann, the former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, an ex-Sony executive; and Instacart CEO and former Meta executive Fidji Simo. The board said it would continue to add more members.
NIST staffers revolt over appointment of Effective Altruism-affiliated researcher to key position at new AI Safety Institute. That’s according to a VentureBeat story, which says staff members and scientists at the U.S. National Institute of Standards and Technology have threatened to resign over the decision to appoint Paul Christiano to a key position at the newly created U.S. AI Safety Institute. Christiano is one of the leading researchers on AI safety and on “aligning” AI models’ behavior with human values. He cofounded the Alignment Research Center, which has done independent testing and red-teaming of AI models, including OpenAI’s GPT-4. Previously, he worked as an AI safety researcher at OpenAI. But Christiano is also affiliated with Effective Altruism, the philosophical movement that has become increasingly focused on the risk that rogue AI might pose to humanity, and NIST staffers say his appointment might compromise the agency’s reputation for objectivity and integrity. Both AI ethics researchers, who tend to focus on AI harms that are here today, such as disinformation, bias, and discrimination, and those who favor accelerating AI progress as rapidly as possible because they think AI’s benefits far outstrip any risks, have branded people like Christiano “doomers” and say they exaggerate AI’s existential dangers. According to VentureBeat, Secretary of Commerce Gina Raimondo, who supervises NIST and the new AI safety body, made the decision to appoint Christiano to the role. President Biden’s November 2023 executive order on AI instructed the new AI Safety Institute to focus on certain risks, including the potential use of AI to aid chemical and biological weapons production. Christiano has researched these issues previously.
U.S. government-commissioned report raises concerns about AI’s “existential risk.” The report, commissioned from a small consulting firm called Gladstone AI by the U.S. State Department, says that advanced AI poses “an urgent and growing risk to national security,” Time reports. The report said AI could be as destabilizing to the global order as the introduction of nuclear weapons and urged the government to act “quickly and decisively” to avert significant risks from AI, which could, in the worst case, pose an “extinction level threat to the human species.” The report recommended a number of actions the government should take, including capping the amount of computing power that can be used to train AI models and outlawing the publication of AI model weights, the numerical parameters that determine a model’s output. But experts Time spoke to said they believed it was highly unlikely that the report’s recommendations would be adopted.
Elon Musk says he will open source his AI chatbot, Grok. The billionaire said he would open source Grok, the chatbot built by his AI company, xAI. The move comes after Musk sued OpenAI, which Musk cofounded, claiming that fellow cofounders Sam Altman, Greg Brockman, and Ilya Sutskever had reneged on early promises to open source all of the lab’s technology. As Wired explains, the fact that Grok was not an open model put Musk in an awkward position, leaving him little choice but to make the model’s code and weights public.
EYE ON AI RESEARCH
The same methods underpinning LLMs are now being used to help humanoid robots learn to walk better and faster. Researchers at UC Berkeley have used transformers, the same neural network architecture that underpins large language models and chatbots, to help a humanoid robot learn to walk around San Francisco. Whereas an LLM’s transformer is trained to predict the next word in a sentence, here the transformer is trained to predict the next locomotion action a humanoid robot should take in a given physical environment. The researchers found that even when trained on only 27 hours of walking data, the model can help a robot generalize to a real-world environment: they trained the transformer on a combination of sensor data gathered in simulation, transferred the model to a real humanoid robot, and had that robot successfully walk through San Francisco’s streets without ever having trained in that particular environment. The model can also figure out how to obey commands it had not seen during training, such as “walk backwards.” You can read more about this research here on the research paper repository arxiv.org. You can also read a somewhat related New York Times story about Covariant, a company run by a different set of researchers, also with a UC Berkeley pedigree, that trains AI models to control robots for industrial settings and warehouses. Covariant has used similar methods to achieve impressive results, shortening the time it takes to train a robot for a new environment and allowing humans to interact with a robot using spoken language.
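For readers who want a concrete picture of what “predicting the next action instead of the next word” looks like, here is a minimal PyTorch sketch of the general idea. It is not the Berkeley team’s actual code; the sensor dimensions, layer sizes, and training loop below are illustrative assumptions only.

import torch
import torch.nn as nn

class LocomotionTransformer(nn.Module):
    # Causal transformer that maps a history of (observation, previous action)
    # pairs to a prediction of the next joint-level action.
    def __init__(self, obs_dim=48, act_dim=12, d_model=128, n_layers=4, n_heads=4, ctx_len=16):
        super().__init__()
        self.token_embed = nn.Linear(obs_dim + act_dim, d_model)  # one token per timestep
        self.pos_embed = nn.Embedding(ctx_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, act_dim)  # regress the next action

    def forward(self, obs, prev_act):
        # obs: (batch, T, obs_dim); prev_act: (batch, T, act_dim)
        T = obs.shape[1]
        tokens = self.token_embed(torch.cat([obs, prev_act], dim=-1))
        tokens = tokens + self.pos_embed(torch.arange(T, device=obs.device))
        # Causal mask so each timestep attends only to the past, as in next-token prediction.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(obs.device)
        hidden = self.backbone(tokens, mask=mask)
        return self.action_head(hidden)  # predicted next action at every timestep

# Toy training step on placeholder "simulation" rollouts.
model = LocomotionTransformer()
obs = torch.randn(8, 16, 48)          # 8 rollouts, 16 timesteps, 48 sensor readings (assumed sizes)
prev_act = torch.randn(8, 16, 12)     # previous joint commands
target_act = torch.randn(8, 16, 12)   # "expert" next actions from simulation
loss = nn.functional.mse_loss(model(obs, prev_act), target_act)
loss.backward()

The point the sketch tries to make is that nothing about the transformer architecture is specific to text: swap word tokens for observation-and-action tokens, and the same causal, next-token training recipe applies.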
FORTUNE ON AI
Sam Altman’s OpenAI comes out swinging against ‘incoherent’ and ‘frivolous’ Elon Musk in new lawsuit —by Christiaan Hetzner
How investors should prioritize diverse investments in AI —by John Kell
Gen AI apps are cloning your likeness without consent—and might make you famous for all the wrong reasons —by Alexandru Voica (Commentary)
How businesses can win over AI skeptics —by Alyssa Newcomb
Team behind popular Falcon AI models unveils new startup with $20 million in funding aimed at helping companies tailor LLMs for business —by Jeremy Kahn
AI CALENDAR
March 11-15: SXSW artificial intelligence track in Austin
March 18-21: Nvidia GTC AI conference in San Jose, Calif.
April 15-16: Fortune Brainstorm AI London (Register here.)
May 7-11: International Conference on Learning Representations (ICLR) in Vienna
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
BRAIN FOOD
Foundation models for biology are already leading to big discoveries. The New York Times had a fascinating look at the coming wave of foundation models trained on aspects of biology, such as genetic information and cell types, which are already being used to make important discoveries. For example, one AI model, called GeneFormer, predicts the genetic modifications needed to produce certain functional changes in a cell. It has been used to suggest that the expression of four genes not previously linked to heart disease should be inhibited in order to help restore health to diseased heart cells. It turned out that, in lab tests at least, preventing the expression of two of these four genes did return unhealthy heart cells to normal functioning, and scientists are now looking at whether the method could be used to find new treatments. Another foundation model, called Universal Cell Embedding, has been used to point researchers toward possible new cell types in different human organs. The use of AI in science is one of the most positive aspects of the whole AI revolution. But researchers also caution that the models are not always right and, like today’s chatbots, they can sometimes hallucinate or fail in highly unpredictable ways.
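To make the “in silico experiment” idea above a bit more concrete, here is a toy, purely hypothetical Python sketch of the general approach: silence a gene in a cell’s expression profile, re-embed the cell, and check whether the embedding moves toward a healthy reference state. The embed_cell function is a random stand-in for a trained model like GeneFormer, not its actual interface, and the data are placeholders.

import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64

def embed_cell(expression):
    # Hypothetical stand-in for a trained cell-embedding model: projects a
    # gene-expression profile into a vector space via a per-gene random direction.
    vec = np.zeros(EMBED_DIM)
    for gene, level in expression.items():
        gene_rng = np.random.default_rng(abs(hash(gene)) % (2**32))
        vec += level * gene_rng.standard_normal(EMBED_DIM)
    return vec / (np.linalg.norm(vec) + 1e-9)

# Toy expression profiles (gene name -> expression level); random placeholders.
diseased_cell = {f"GENE{i}": float(rng.uniform(0, 10)) for i in range(20)}
healthy_reference = embed_cell({f"GENE{i}": float(rng.uniform(0, 10)) for i in range(20)})

def perturbation_score(cell, gene):
    # How much closer (by cosine similarity) the cell moves toward the healthy
    # reference embedding when the given gene's expression is silenced.
    baseline = float(embed_cell(cell) @ healthy_reference)
    silenced = {g: (0.0 if g == gene else lvl) for g, lvl in cell.items()}
    return float(embed_cell(silenced) @ healthy_reference) - baseline

# Rank genes by how much silencing them shifts the diseased cell toward health.
ranking = sorted(diseased_cell, key=lambda g: perturbation_score(diseased_cell, g), reverse=True)
print(ranking[:4])  # candidate genes to inhibit, in this toy setting

In spirit, this ranking-by-perturbation loop is what lets such a model suggest candidate genes to inhibit before anyone runs a lab experiment, though the real models are, of course, far more sophisticated.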