In this open letter, researchers, think tanks, and civil society and ethics organisations defend the Spanish presidency's approach to risk management, which places responsibility on foundation model developers; this, the letter argues, is the bare minimum needed to protect EU citizens and industry.
To European legislators,
As European stakeholders spanning SMEs, workers, consumers, citizens, think tanks, and organisations working on AI digital rights, ethics, risk, and privacy, we write to support the efforts of the Spanish presidency to address the unique risk management challenges posed by foundation models through a tiered approach within the AI Act.
Europe’s regulatory leadership is an asset that should be valued
Amidst a growing number of non-regulatory global AI governance efforts to manage foundation model risks, such as the G7 Code of Conduct, the UK and US AI Safety Institutes, and the White House Executive Order on AI, Europe stands in a unique position to enforce the first horizontal regulation on AI, one that covers foundation models with a global footprint. The Spanish presidency's proposal allows a balanced approach to regulating foundation models, refining the European Parliament's cross-party position to set obligations for such models in the EU AI Act.
Foundation models as a technology present a unique risk profile that must be managed
Foundation models differ significantly from traditional AI. Their generality, their cost of development, and their ability to act as a single point of failure for thousands of downstream applications give them a distinct risk profile: one that is systemic, not yet fully understood, and that affects virtually every sector of society and hundreds of millions of European citizens. These risks must be assessed and managed comprehensively along the value chain, with responsibility lying with those who have the capacity to address them. Since downstream actors lack the technical means and the access needed to fix underlying flaws in a foundation model once it is deployed and adapted to an application, there is no reasonable approach to risk management other than placing some responsibility on the upstream developers who provide the technology.
Far from being a burden on European industry, regulation applied to the technology of foundation models offers essential protection that will benefit EU industry and the emerging AI ecosystem. The vast resources needed to train high-impact models limit the number of developers, so the scope of such regulation would be narrow: fewer than 20 regulated entities worldwide, each capitalised at more than 100 million dollars, compared to thousands of potential EU deployers. These large developers can and should bear risk management responsibility for today's powerful models if the Act is to minimise burdens across the broader EU ecosystem. Requirements on large upstream developers provide transparency and trust to the numerous smaller downstream actors. Without them, European citizens are exposed to risks that downstream deployers, and SMEs in particular, cannot possibly manage technically: lack of robustness, explainability, and trustworthiness. Model cards and voluntary – and therefore unenforceable – codes of conduct will not suffice. EU companies deploying these models would become liability magnets. Regulation of foundation models is an important safety shield for EU industry and citizens.
The Spanish Presidency’s approach balances risk management and innovation
We support the proposal as a suitable compromise between the European Parliament and the Member States, built on a tiered approach. The proposed compute thresholds – easily measurable criteria that correlate well with risk – offer a practical basis for regulation: they make sense from a risk management perspective while preserving SMEs' AI development efforts. To remain future-proof, the thresholds will have to be adjusted and the criteria complemented as the technology evolves and the science of risk measurement improves, but they provide a good starting baseline. We believe this will crucially allow the EU AI Act to manage the risks to which European citizens are exposed.
Resisting narrow lobbying interests to protect the democratic process
The EU AI Act has been shaped by more than two years of consultation with a broad range of representative stakeholders: developers, European industry, SMEs, civil society, think tanks, and more. It is therefore crucial to prevent the vested lobbying efforts of Big Tech and a few large AI companies from circumventing this democratic process. The interests of a select few must not compromise European regulators' ability to protect society and support SMEs.
Integrating foundation models into the AI Act is not just a regulatory necessity but a democratic imperative and a necessary step towards responsible AI development and deployment. We urge all involved parties to work constructively, building on the Spanish proposal and the consensus of the trilogue of 24 October, to find a suitable regulatory solution that delivers a safer, more trustworthy, and sovereign AI landscape in Europe.
Signatories
Organisations:
Centre for Democracy & Technology, Europe, Brussels
The Future Society, Brussels
AI Ethics Alliance, Brussels
SaferAI, Paris
Open Markets Institute, Europe, Global
Privacy Network, Italy
Avaaz Foundation, Global
Pax Machina, Italy
Defend Democracy, Global
Notable Figures:
Julia Reinhardt, Fellow, European New School of Digital Studies, Europa-Universität Viadrina & among the 100 Brilliant Women in AI Ethics 2023
Yoshua Bengio, 2nd Most Cited AI Scientist, Chair of the State of Science Report (an “IPCC for AI”), Professor – University of Montreal, Scientific Director – Mila Quebec AI Institute
Raja Chatila, Professor Emeritus, Sorbonne University, Paris. Former member of the EU High-Level Expert Group on AI.
Francesca Bria, President of the Italian National Innovation Fund and Executive Board Member of Italy's public media company, Innovation Economist, UCL
Anka Reuel, Computer Science PhD Researcher & KIRA Founding Member, Stanford University
Gary Marcus, Founder and CEO, Geometric Intelligence, Professor Emeritus, NYU
Marietje Schaake, former MEP, International Policy Director at Stanford University Cyber Policy Center
Huberta von Voss, Executive Director, ISD Germany
Wolfgang M. Schröder, Prominent Professor of Philosophy, University of Wuerzburg, German expert in CEN-CENELEC and ISO/IEC through the DIN AI Standardization Committee
Marc Rotenberg, President & Founder, Center for AI and Digital Policy
Philip Brey, Professor of Philosophy and Ethics of Technology, Winner of 2022 Weizenbaum Award, University of Twente, The Netherlands
Nicolas Miailhe, Founder & President, The Future Society (TFS), Member of the Global Partnership on AI (Responsible AI Working Group), Member of the UNESCO High-Level Expert Group on AI Ethics Implementation, Member of the OECD Network of Experts on AI Governance
Max Tegmark, MIT Professor, Center for Brains, Minds & Machines, Swedish Citizen
John Burden, Senior Research Associate, Cambridge Institute for Technology and Humanity
Jan-Willem van Putten, Co-Founder and EU AI Policy Lead at Training for Good
Diego Albano, Analytics leader, ForHumanity, Spain
Dr. Aleksei Gudkov, AI Governance Counsellor
Greg Elliott, HZ University of Applied Sciences
Janis Hecker, NERA Economic Consulting
Alexandru Enachioaie, Head of Infrastructure at Altmetric
Mathias Ljungberg, Product Owner AI Products at Ahlsell (Swedish), Data Scientist, PhD in Physics
[Edited by Alice Taylor]