
A new Pew Research Center report examines attitudes about artificial intelligence (AI) among the U.S. public, as well as AI experts. The report is based on a pair of surveys that show that the public is far less positive and enthusiastic about AI than experts are. At the same time, similar shares in both groups want to see more control and regulation of the technology.
In this Q&A, we speak with Brian Kennedy, a senior researcher at the Center, on why and how the Center conducted the survey of AI experts to accompany the survey of the broader public.
Why compare the public’s views of artificial intelligence with the views of experts?
Pew Research Center has a long track record of studying emerging technologies. In 2021, we embarked on a multiyear effort to study the public’s attitudes and experiences with artificial intelligence. Since then, we’ve looked at Americans’ hopes and worries around AI, including their views on driverless cars and whether they think algorithms should be used in hiring.
In our latest study, we also wanted to learn the views of those who have expertise in the field. The experts we surveyed include people who work on or study the development, application and implications of AI.
Understanding the views of both these groups – the public and experts – is central to the discussion around the potential benefits and risks of AI. We think it is important to understand how the views of the public compare with those of experts. Where do they see eye to eye? Where are there deep divides?
How did you define “AI expert” in this study?
An AI expert in this study is someone who demonstrates expertise in AI or related fields via their work or research. We included people with expertise in technical topics – such as machine learning or natural language processing – and other topics related to AI, including its business applications, social impacts and ethics.
Who are the AI experts you surveyed?
One challenge with this study is that we needed a way to identify AI experts to survey. To get a broad group of experts, we built a sample of people who have participated in AI-related conferences as presenters or authors.
We created a list of 21 conferences that took place in 2023 or 2024 and covered a variety of AI-related topics so we would capture a range of perspectives among AI experts. The list included conferences focused on technical AI research; social science about AI; the representativeness and ethics of AI; the business of AI; and the specific applications of AI in health care, finance and government.
One concern we had going into the project was whether our sample of AI experts would represent many different perspectives. We knew from our own work on the STEM workforce that women, Black and Hispanic workers make up smaller percentages of people with computing jobs compared with their shares of the overall U.S. workforce. Related studies have found that these groups are also underrepresented among those who earn computer science degrees and in occupations that work with, or could work with, AI.
With this in mind, we tried to reach these less-represented groups when we put together our list of conferences. For example, our list included the affinity group meetings and mini-conferences at the Conference on Neural Information Processing Systems.
We also created a large enough sample of experts from the conferences we examined to look at differences by gender on how they feel about AI. And we’re glad we did: One striking finding from the study is that men and women AI experts aren’t always aligned in their views.
After we created our list of conferences, we created a list of everyone who was an author of a paper or presented at each conference. We then tried to find the email address of everyone we identified, ultimately tracking down the vast majority of them. We also decided to only survey experts who live in the United States to make it more directly comparable with our accompanying survey of the American public. (For more information on the makeup of our AI expert sample, read the appendix table in our new report.)
In addition to surveying AI experts for this study, you did in-depth interviews with some of them. Why?
That’s right. As part of this study, we conducted 30 in-depth interviews with a range of experts who also participated in the underlying survey.
We did this to allow our expert participants to express their views on a number of topics with more nuance, and in their own words. Some of these topics included AI’s impact on society today and in the future, representation and bias in AI, and regulation of AI. We included quotes from these in-depth interviews throughout the report.
Do the views of the AI experts in this study represent the views of all AI experts?
No. The responses of AI experts are only representative of the views of the experts who responded to the survey. Since there is no definitive source describing the makeup of the AI expert population, we cannot be certain that all segments of this population are represented appropriately in the sample. This is different from Center surveys of U.S. adults, in which we know the characteristics of the population and can use weighting to make the survey representative. The results for the AI experts are unweighted.
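As a rough illustration of the kind of post-stratification weighting described here, the sketch below shows how weights can pull a sample's composition toward known population shares on a single categorical variable. The categories and target shares are hypothetical, not Pew's actual weighting targets, and real survey weighting typically balances many variables at once.

```python
from collections import Counter

def poststratification_weights(sample, targets):
    """Compute a weight per respondent so the weighted sample matches
    known population shares on one categorical variable.

    sample:  list of category labels, one per respondent
    targets: dict mapping each category to its population share (sums to 1.0)
    """
    n = len(sample)
    counts = Counter(sample)
    # weight = (population share) / (sample share) for each respondent's category
    return [targets[cat] / (counts[cat] / n) for cat in sample]

# Hypothetical example: women are 50% of the population but only
# 25% of this toy sample, so their responses are weighted up.
sample = ["men", "men", "men", "women"]
weights = poststratification_weights(sample, {"men": 0.5, "women": 0.5})
```

The key point is that this adjustment is only possible when the population's characteristics are known in advance, which is exactly what is missing for the AI expert population.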
The in-depth interviews with AI experts also aren’t representative of the AI expert population or any demographic group. Instead, they provide views that are more detailed than we could capture in the survey.
By contrast, our survey of the general public is representative of the views of U.S. adults. It was conducted on the Center’s American Trends Panel (ATP). Members of the ATP are recruited through national, random sampling of residential addresses. You can read more about the ATP’s methodology here.
Read the report: How the U.S. Public and AI Experts View Artificial Intelligence