The United States and the European Union are expected to release a voluntary code of conduct on artificial intelligence (AI) “very, very soon”, according to officials.
European Union (EU) tech chief Margrethe Vestager and US Secretary of State Antony Blinken have said the two powers are close to publishing a set of recommendations for the ethical development of AI technologies.
The politicians made the announcement on Wednesday (31 May), speaking at a meeting of the EU-US Trade and Technology Council.
“We need accountable artificial intelligence,” Vestager said. “Generative AI is a complete game changer.”
“We think it’s really important that citizens can see that democracies can deliver,” she added, saying she hoped “to do that in the broadest possible circle – with our friends in Canada, in the UK, in Japan, in India, bringing as many on board as possible”.
Vestager revealed the code could be published “within weeks”, while the US Secretary of State stressed the need for a document that would help “establish voluntary codes of conduct that would be open to all like-minded countries”.
“There’s almost always a gap when new technologies emerge,” Blinken said, with “the time it takes for governments and institutions to figure out how to legislate or regulate.”
The announcement comes amid mounting pressure on public bodies following the surge in popularity of generative AI tools such as OpenAI’s ChatGPT, and growing concern over the risks they carry.
In a joint statement, the US and the EU called AI a “transformative technology with great promise for our people”, highlighting the technology’s potential to bring about economic growth.
“But in order to seize the opportunities it presents, we must mitigate its risks,” it said. “The European Union and the United States reaffirm their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies.”
The code of conduct has been portrayed as a response to growing demands for AI regulations, and a temporary solution while countries discuss what legally binding restrictions on the technology would look like.
The EU is currently working on the AI Act, the world’s first comprehensive legislation regulating the use of AI technology. However, the legislation is unlikely to take effect before 2026.
“In the best of cases, it will take effect in two-and-a-half to three years’ time. That is obviously way too late,” Vestager told reporters. “We need to act now.”
In contrast, the US has not yet embarked on any significant project to regulate AI, focusing instead on calling upon CEOs of AI firms to respect their “moral” responsibility to protect society from the potential dangers of new technologies.
Many of these executives, including OpenAI chief executive Sam Altman, have positioned themselves in favour of increased AI regulation, proposing measures such as the creation of a US or global agency that would issue licences to companies seeking to develop AI tools.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said technology leaders, including Altman, in a joint statement published on Tuesday 30 May.
Generative AI tools can produce text in response to a prompt, including articles, essays, jokes and even poetry. A study published in January showed ChatGPT was able to pass a law exam, scoring an overall grade of C+. However, governments and experts have raised concerns about the risks these tools could pose to people’s privacy, human rights and safety.
Last week, representatives of the G7 nations stressed the need to establish global rules for generative AI tools “in line with our shared democratic values” and announced the creation of a new AI working group.