This analysis is by Bloomberg Intelligence Industry Analyst Tamlin Bason. It appeared first on the Bloomberg Terminal.
The EU has moved closer to enacting the first comprehensive regulations on artificial intelligence, though the proposal remains open for negotiation and may not come into force until 2025 or later. Large tech companies heavily investing in AI development could attract the greatest government scrutiny. There remains some risk that AI’s advancement in the region will be curtailed unless the rules are loosened.
Though the EU’s heavy-handed approach to tech regulation looks likely to continue with the AI Act, we still see room for a middle ground on restrictions for generative AI like ChatGPT. Alphabet, Amazon, IBM, Meta, Microsoft and Nvidia are among the largest investors in AI development.
Europe moving toward risk-based AI rules
The EU’s general approach to AI regulation will be to bucket systems into categories based on the risks their applications pose. The regulation will outright ban “unacceptable risk” applications, such as the use of AI for social scoring and biometric identification. For “high-risk” applications, which include large platforms’ recommendation systems, the act envisions a multistep approval process, and any significant change to a model would require another round of approvals. Many enterprise uses of AI could fall into the high-risk category, raising the possibility that compliance costs devour any efficiency gains.
Limited-risk applications such as chatbots would simply require disclosure, while minimal-risk applications such as spam filters would face no restrictions.
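As a purely illustrative aid, and not a legal reading of the act, the four-tier scheme described above can be summarized as a simple mapping. The tier names, example applications and obligations below restate this section’s text; the Python structure is just one way to sketch the taxonomy.

```python
# Illustrative sketch of the AI Act's risk tiers as summarized above.
# Tier names, examples and obligations restate this section's text;
# this is not an exhaustive or authoritative reading of the proposal.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "biometric identification"],
        "obligation": "outright ban",
    },
    "high": {
        "examples": ["large platforms' recommendation systems"],
        "obligation": "multistep approval, repeated after significant model changes",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "disclosure requirements",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "no restrictions",
    },
}

for tier, info in RISK_TIERS.items():
    print(f"{tier}: {info['obligation']} (e.g., {', '.join(info['examples'])})")
```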
Generative AI restrictions might be tempered
Developers of high-level generative AI (“GAI”) systems such as ChatGPT could choose to retreat from the European market absent further revisions to the EU’s proposed AI regulations. Mentions of GAI and ChatGPT by European companies exploded this year, suggesting strong demand to leverage the new technology in operations. Yet on June 14 the European Parliament adopted rules that would subject developers of GAI models to a raft of additional safeguards, including transparency rules related to the data sets used to train the models.
Earlier versions of the rule lacked restrictions on GAI systems. The inclusion — following the rollout of ChatGPT — underscores the difficulties of legislating such a fast-moving field. It’s likely the scope of regulation on GAI will continue to evolve during further talks.
Broad high-risk classification threatens AI efficiencies
The EU may designate as high-risk AI tied to anything from a social media platform’s recommendation systems to employment-management tools such as resume-sorting software, as well as credit and exam scoring. Such systems would need to pass a conformity assessment and be registered before being placed on the EU market. That could frustrate innovation in the region, delay the market entry of new efficiency applications and raise compliance costs in a way that erodes stakeholders’ aim of wide-scale AI adoption.
The European Commission will be tasked with identifying which systems may pose a high risk and preparing guidance on them. Categories will very likely be narrowed as the European Council, Parliament and Commission finalize the legislation.
Threats of fat fines may not chill AI investments
The AI Act could impose fines of as much as 7% of a company’s global revenue (the European Commission proposed a 6% cap, which parliament raised to 7%), vs. the 4% maximum allowed under the General Data Protection Regulation. In the first five years of enforcement, GDPR resulted in cumulative fines of almost €4 billion, with penalties on Meta accounting for 64% of that total, Amazon 19% and Google 5%.
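To put the cited GDPR percentages in euro terms, here is a back-of-the-envelope sketch; the roughly €4 billion total and the company shares come directly from the figures above, and the per-company amounts are simply derived from them.

```python
# Back-of-the-envelope conversion of the GDPR fine shares cited above
# into approximate euro amounts (total: almost EUR 4 billion).
TOTAL_GDPR_FINES_EUR = 4.0e9  # approximate five-year cumulative total

shares = {"Meta": 0.64, "Amazon": 0.19, "Google": 0.05}

for company, share in shares.items():
    print(f"{company}: ~EUR {share * TOTAL_GDPR_FINES_EUR / 1e9:.2f} billion")

# Prints roughly: Meta ~EUR 2.56B, Amazon ~EUR 0.76B, Google ~EUR 0.20B;
# the remaining ~12% (~EUR 0.48B) was spread across all other companies.
```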
Large technology platforms are among the biggest investors in generative AI, and as such they could once again be in the EU’s enforcement crosshairs. Even so, we don’t believe the threat of future penalties will deter near-term investments given companies’ potential to shape the AI market.
Compliance still years away
The AI Act passed a plenary vote in the European Parliament on June 14, paving the way for trilogue negotiations with the European Council and Commission. There’s no set timeline for the negotiations, though there will be considerable pressure to finish the process by year-end, ahead of the 2024 European Parliament elections. Even if the AI Act takes effect by late 2023, there would be a 2-3 year transition period. As a result, compliance might not be expected before late 2025 at the very earliest, and more likely not until 2026.
EU policymakers are pushing for a voluntary code of conduct as a stopgap before the AI Act becomes operative. If such a code emerges, we expect industry leaders to embrace its principles, since the framework could help shape the scope of the eventual regulation.