
Imagine a major concert event that you are eager to attend. Tickets go on sale soon, and you know it’s going to be a frenzy. To increase your chances, you enlist the help of a multi-agent AI system. One agent specializes in monitoring ticket vendor sites, instantly detecting when tickets are released. Another agent manages your finances, ready to purchase a ticket with your approval the moment one becomes available within your budget. A third agent analyzes seating charts in real-time, identifying the best available seats based on your location preferences. Working together, these agents navigate the complex ticket-buying process across different apps and sites, securing your spot at the event.
However, as you celebrate your successful purchase, you notice that many seats around yours were also bought within seconds. You realize that countless other fans are using similar AI systems, creating an automated race for tickets at a dizzying speed. While this technology enabled you to get your tickets, it may have also made the process more competitive and potentially excluded those without access to such tools.
This is the double-edged sword of advanced “AI agents”—autonomous systems that can understand our goals, take actions on our behalf, and collaborate with other intelligent programs to support us in entirely new ways. While these platforms have the potential to augment many aspects of our lives, from managing our schedules to enhancing our health and productivity, they’re also reshaping the very nature of how we make important decisions.
The Cost of Autonomy
Today’s AI excels at well-defined tasks with clear boundaries but often struggles when goals are open-ended or parameters are fluid. AI agents aim to bridge this capability gap. They harness the same generative abilities that make current applications so remarkable, such as language and image processing, but the aim is systems that can make decisions on the user’s behalf to achieve specific goals, rather than simply producing outputs in response to prompts. Iason Gabriel, a research scientist at Google DeepMind focusing on AI ethics, explains that these systems will “be able to perform tasks without [continued] human supervision, over longer time periods and in more complex domains.”
The key difference lies in how the programs reason about tasks and the actions they are empowered to perform. Large language models and chatbots are typically trained to recognize patterns and generate outputs based on those patterns. In contrast, agents are built on top of large language models but are augmented with the capacity to develop strategies, take actions, and adjust their subsequent actions based on the outcomes.
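As a rough illustration of that distinction, the sketch below contrasts a single prompt-and-response call with an agent loop that plans, acts, and folds the results back into its next decision. Every helper name here (model_generate, plan, execute) is a hypothetical stand-in, not any real library’s API.

```python
# Minimal sketch: one-shot model call vs. an agent loop.
# All helpers are hypothetical stand-ins for illustration only.

def model_generate(prompt: str) -> str:
    """Stand-in for a single LLM call: one prompt in, one output out."""
    return f"response to: {prompt}"

def plan(goal: str, history: list[str]) -> str:
    """Stand-in for an agent's planning step: choose the next action."""
    return model_generate(f"Goal: {goal}. Done so far: {history}. Next step?")

def execute(action: str) -> str:
    """Stand-in for acting in the world (e.g. calling a booking service)."""
    return f"result of: {action}"

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    """Agent loop: plan, act, observe, and adjust until the step budget runs out."""
    history: list[str] = []
    for _ in range(max_steps):
        action = plan(goal, history)      # develop a strategy, not just a reply
        observation = execute(action)     # take an action on the user's behalf
        history.append(observation)       # feed the outcome back into planning
    return history

# Same request, two behaviours: a single reply vs. a sequence of adjusted actions.
print(model_generate("Find me a concert ticket under $120"))
print(run_agent("Find me a concert ticket under $120"))
```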
The Alignment Challenge
For these systems to work, and to avoid the negative aspects of the concert-booking scenario, they must be aligned with a clear set of principles that takes into account the interests of all the parties involved, including society as a whole. “An agent should be helpful to the user, but not at the expense of harming others,” says Gabriel. Equally, however, individuals must also retain a degree of freedom when it comes to using advanced AI, free from inappropriate interference, either by developers or wider societal interests.
This values-focused approach will become increasingly important as AI agents become more sophisticated and ubiquitous, interacting with an ever-expanding circle of people, each with their own beliefs, vulnerabilities, and expectations. In making decisions, these systems would need to weigh a multitude of factors that depend not only on the goals and needs of the person using them but also on the societal impacts and considerations specific to the application.
A career coaching agent, for instance, may need to consider not only a person’s skills and preferences but also the long-term societal impact of the professions it recommends. Such an agent might analyze labor market trends and job growth projections while also taking into account the individual’s aptitudes, values, and life circumstances. It would need to reconcile the person’s short-term needs with long-term fulfillment and answer thorny questions, such as how best to balance salary ambitions against broader economic sustainability.
Addressing this immense complexity while ensuring alignment with human values will require input from a wide range of stakeholders beyond the technology sector. “These questions are really things that we, as a society, need to work together to tackle,” Gabriel says, “which will involve collaboration across industry, governments, civil society, and academia.”
We’re already seeing promising examples of this multi-stakeholder approach. Google, for example, partnered with Anthropic, Microsoft, and OpenAI to launch the Frontier Model Forum, an industry consortium dedicated to advancing the secure development of cutting-edge AI systems. Through this collaboration, companies share research findings, develop safety standards, and create protocols for responsible AI development. Google has also developed a set of protocols to help prevent possible risks from powerful frontier AI models, known as the Frontier Safety Framework. Additionally, in September 2023, Google launched the Digital Futures Project and a $20 million Google.org fund to support researchers, organize convenings, and foster debate on public policy solutions to encourage the responsible development of AI.
Coordinated Intelligence
Eventually, AI agents will need to interact with one another, not just within static systems like ticket booking services. Thousands of agents could be simultaneously trying to book flights or secure medical appointments. Without proper coordination, this could lead to multiple AI agents competing for the same resources or working in ways that overwhelm systems.
To prevent such scenarios, researchers like Gabriel and his team at Google DeepMind are developing frameworks for agent cooperation. First, the agents must be able to communicate with one another, sharing relevant information about their environment, objectives, and intentions. Second, they need to be able to reason about each other’s capabilities, limitations, and likely actions, a concept known as “theory of mind”—essentially, the ability to understand and predict other agents’ behavior.
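A toy sketch of those two ingredients might look like the following: a message format in which agents share their environment, objective, and intention, plus a crude “theory of mind” function that predicts a peer’s next move from that message. The field names and the predict_action helper are invented for illustration and do not describe an actual coordination protocol.

```python
# Illustrative only: a toy message format for agent-to-agent coordination
# and a simple prediction of a peer's behaviour from its stated intent.

from dataclasses import dataclass

@dataclass
class AgentMessage:
    sender: str
    environment: str   # what the agent currently observes
    objective: str     # what it is trying to achieve
    intention: str     # what it plans to do next

def predict_action(peer: AgentMessage) -> str:
    """Crude 'theory of mind': infer a peer's likely next move from its
    stated objective and intention, so another agent can avoid competing for it."""
    if "book" in peer.intention:
        return f"{peer.sender} will likely claim: {peer.objective}"
    return f"{peer.sender} is still deciding"

msg = AgentMessage(
    sender="ticket-agent-A",
    environment="seat 14B available",
    objective="seat 14B",
    intention="book seat 14B now",
)
print(predict_action(msg))  # a cooperating agent could now target a different seat
```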
With these building blocks in place, it becomes possible to envision a future where billions of AI agents operate across all strata of society, collaborating with their users and with other agents, and with each type of interaction having specific protocols.
Navigating these intricacies will require careful forethought and planning. At the development stage, agentic AI must also have an “inductive bias” (a set of built-in assumptions or preferences that guide the AI’s decision-making) toward supporting the collective good, explains Gabriel. “For example, AI agents might be designed to cooperate in certain ways with other AI agents that behave in a reciprocal manner and have the right kind of credentials.” Legal and policy frameworks must also evolve, and be in place before these systems are broadly adopted, to prevent misuse and to keep agents from accessing systems or data that should be off-limits to them.
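To make that idea concrete, here is a deliberately simplified sketch of such an inductive bias: a built-in rule under which an agent cooperates only with peers that present recognized credentials and have reciprocated in the past. The credential names and the decision rule are invented for illustration and do not describe any deployed system.

```python
# Toy illustration of an inductive bias toward reciprocal cooperation.
# The credential set and rule are invented for illustration only.

TRUSTED_CREDENTIALS = {"verified-vendor", "accredited-assistant"}

def should_cooperate(peer_credential: str, peer_cooperated_before: bool) -> bool:
    """Built-in preference: cooperate with credentialed peers that reciprocate."""
    return peer_credential in TRUSTED_CREDENTIALS and peer_cooperated_before

print(should_cooperate("verified-vendor", peer_cooperated_before=True))   # True
print(should_cooperate("unknown-bot", peer_cooperated_before=True))       # False
```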
This emphasis on cooperation represents a fascinating possibility: While groups of people may intend to collaborate with each other in the long term, they may encounter challenges due to ego, politics, and information hoarding, whereas a cohort of AI agents could be designed to work together more reliably. They might help people organize and share resources more fairly, make decisions more transparently, and ultimately achieve better outcomes.
Importantly, developers and policymakers alike must investigate the social and ethical consequences of these systems ahead of deployment. This knowledge should be integrated into both the design of the technology and the rules that govern its use.
Enhancing Human Agency
These broader societal considerations must be balanced with human needs. “We need to design mechanisms that help ensure these interactions are beneficial and support the autonomy of users,” Gabriel says.
Ensuring access is a difficult task as well. While philosophical debates and legal frameworks may not need to be fully resolved for progress to be made, AI agents must be built in a way that makes them widely accessible. This means considering the diverse needs of users across different backgrounds, language groups, and accessibility requirements to create an inclusive technology.
We live in a highly complex virtual world. We are often overscheduled and over-notified. AI agents represent a paradigm shift for personal computing and can simplify how we get things done. Done right, they could understand a user’s values, act as a translator of the user’s intent, and carry out the user’s needs in the virtual world. Ideally, this means we can do more while interacting with fewer surfaces.
Ultimately, this holistic approach is key to realizing the potential of AI agents as tools for elevating how we, as humans, experience life. “One way to think of this technology is as an ‘agency enhancer’,” Gabriel says. “The point is that by using AI agents, we can do more of the things we want to do.”