It’s been a rough start to the year for news media. Record layoffs in January have piled on top of years of decline at newspapers, especially at the local level. The explanations are diverse and complicated, but near the top is the impact of online platforms like social media—which have siphoned off eyeballs and advertisers. News publishers could be forgiven for believing the rise of generative AI is just one more tech development out to kill them.
After all, it’s easy to see the rise of so-called “large language models” (LLMs), like ChatGPT, Bard, and Gemini, as an existential threat: giant tools, trained on nearly every word ever written on the internet, that can churn out writing on demand, instantaneously. The robots have arrived to destroy all the journalism that lagging subscriptions, private equity, and the rise of TikTok haven’t already killed.
That would be a mistake.
Generative AI is an epochal development—less like social media and more like the advent of the internet itself. Much like that moment, this technology is transformative because it empowers people in how they create and find information. ChatGPT was the fastest-growing application of all time (before Threads stole the title) in part because of the agency it gave people in not just retrieving answers, but being able to shape the responses with creative prompts.
We are witnessing a major shift in user behavior as this new technology becomes a regular part of people’s educational, professional, and personal routines. This isn’t only about efficiency and automation, which will be valuable in every part of the news business; it’s also a moment of profound empowerment in the information age.
It took news publications a long time (too long, we now recognize) to embrace the Web as an entirely new interactive medium, one that could enhance every aspect of the news industry, from reporting and multimedia to delivery and engagement. They ignored the faster pace of publishing and the sense of community that bloggers introduced, embracing them only once their popularity with fast-growing audiences became undeniable. In many cases, it took a decade or longer for news organizations to begin engaging with the Web on its own terms.
This time, the news industry doesn’t have that long to resist the AI revolution and ignore the shift in consumer expectations. The tremendous speed of change since ChatGPT set off a generative AI arms race requires us to move now. Failure to do so may mean ceding the future of news discovery and creation to new tech players. Startups and large technology companies alike are already experimenting with extractive tools that mash up news publishers’ reporting into new “articles.”
The good news is this time, news organizations can be early adopters if they choose.
The large, established platforms are scrambling to keep up with the upstarts. There will be several rounds of disruption. Generative AI doesn’t belong to one or two companies—it’s a technology that is rapidly evolving in both closed and open environments. Smart news organizations are not only negotiating over access to their content to train LLMs, they are also adopting and experimenting with this new technology.
There are real challenges that this new technology will unleash and significant questions to answer about how to install guardrails that reduce potential harms to our audiences, our organizations, and our business models. AI-generated misinformation is already starting to flood our feeds and mislead citizens during this shaky moment in history. But the best way to solve these issues is to engage with the technology and to learn: to be educated advocates rather than fossilized followers. Along the way, and into an AI-heavy future, we must continue to deliver quality reporting in the public interest that both harnesses the advantages AI brings and helps drown out the bad.
We believe that no one news organization or publication can succeed on its own in this moment—there are too many experiments to conduct, too much change to manage, too many threats and ethical thickets to confront all at once alone. Instead, to succeed, our industry must come together to share, align, and advocate. Newsrooms are experimenting, but there’s too little collaboration. Aspen Digital, a program of the Aspen Institute, with the support of the Siegel Family Endowment as well as the Patrick J. McGovern Foundation and others, is beginning work to align the industry around key questions, best practices, and ethical guidelines.
We recognize that news organizations compete with one another. There will be some aspects of the work they will not share with their peers. Newsrooms are often too fiercely independent to fall in line with any industry standards. Large news organizations that have the resources will always look to innovate, but this moment requires that we collaborate on leading the way toward a healthy, valuable information ecosystem for the future.
From conversations with newsroom leaders and executives, we have identified seven areas that news organizations are grappling with:
- AI in the newsroom
How will generative AI change the way newsrooms work? Will current roles change? Will there be new roles entirely? How should reporters use generative AI tools for gathering news and drafting copy? Generating images?
- Intellectual property and large language models
How should the content of news organizations be used in training large language models? How should publishers be compensated? Should publishers block their content from being crawled by LLMs? What impact might that have on generative AI outputs? Should publishers create their own LLMs?
- Business model transformation
What will the impact of generative AI by large tech platforms and disruptive startups be on current publisher business models? How should publishers think about the rapidly decreasing number of referrals from search engines, especially with the introduction of generative AI? What are the new opportunities for revenue generation?
- Protecting against bias
Much of the information used to train LLMs was already biased—purposefully or inadvertently. What role should publishers play in making sure that the outputs of generative AI used in journalism don’t continue to perpetuate those biases? How can newsrooms work to reduce the bias in content used to train future models?
- Technology and products for the consumer
In addition to how newsrooms are using generative AI in their reporting and news production processes, many publishers are starting to experiment with how the technology allows them to deliver new or improved products to their audiences. What type of products should news organizations be launching and testing to meet the changes in consumer behavior that generative AI is driving? How much risk should publishers take in building direct-to-consumer AI products, and can they compete with technology companies?
- Editorial guidelines to build trust
Many publishers have already started developing clear internal guidelines for how generative AI should be used in the newsroom; some have shared these publicly. What are the best practices for these guidelines? How can transparency about the use of generative AI improve or even harm trust with audiences?
- Public policy
Hearings are already being held in the Senate regarding generative AI and journalism. The White House introduced a regulatory framework called the Blueprint for an AI Bill of Rights. What regulations around the use of generative AI should publishers advocate for? Is there a consensus on the most important topics?
We will be convening academic researchers, technologists, experts, and practitioners, from both national and local news organizations, to listen and learn how to best frame the topics and to start developing a vision for the future that we can align and act upon.
We look forward to sharing our progress and helping to light the path forward.
Vivian Schiller and Trei Brundrett