
Business Reporter – AI & Automation

Rosanne Kincaid-Smith at Northern Data Group explains why artificial intelligence regulators must prioritise innovation after the Seoul summit

 

In May, the Republic of Korea and the UK co-hosted the AI Seoul Summit, gathering international governments, AI companies, academia and civil society to discuss AI safety. The conference built on the UK's AI Safety Summit, held at Bletchley Park in November 2023, whose major outcomes included the Bletchley Declaration: an international agreement to research, understand and mitigate the risks posed by AI.

 

In the months since, 16 companies have signed up to the voluntary standards introduced at the Bletchley Park Summit. Meanwhile, the EU AI Act, the world’s first comprehensive AI law, is starting to come into force.

 

Change is happening – but what do more rules mean for the technology's development?

 

Authority vs autonomy 

The moves by governments and unions such as the EU to set guardrails around AI prove what many of us have long been thinking: there's no question that the technology needs regulation.

 

What is still up for debate, however, is the who and the how of this management and supervision. After all, governments can promote equity and safety, but are they equipped with the technical know-how to foster innovation?

 

Meanwhile, private organisations may possess the practical knowledge, but can we trust them to ensure accessibility and fairness? The main aim of AI must be to unlock unprecedented positive opportunities – from process automation to scientific breakthroughs – that achieve societal progress for all. 

 

This debate was at the heart of the AI Seoul Summit. So let's explore the pros and cons of regulation, and how to preserve innovation amid the drive for supervision.

 

The problems with overregulation

AI is progressing rapidly, with seemingly no limit to its potential. Its pace of development, for now at least, far outstrips any lawmaker's ability to create effective legislation.

 

Unfortunately, this advancement creates a paradox for rulemaking. If regulations are too specific, they'll likely be outdated by the time they come into force. But if the rules are instead made flexible and adaptable, are they then too vague to have any real impact?

 

Safety is, of course, paramount, so we need regulation that is practical and pragmatic. However, lawmaking by those not at the forefront of AI's development may hinder the technology's progress by creating barriers and bureaucracy, especially for SMEs that may lack the resources to comply.

 

Overregulation may also deter positive new ideas and new players from entering the market. And ultimately, slow lawmaking, as demonstrated by the multiple readings a bill must pass in the UK's House of Lords and House of Commons, might not be fit for the technology's ultra-rapid progression.

 

If governments want to take the reins, they need to prove their openness to new ways of thinking and ensure they are partnering with people who understand the technology and its risks, so that the resulting regulation is fit for purpose.

 

Governments are adapting, but is it enough?

In 2023, the UK government released a paper titled ‘A pro-innovation approach to AI regulation’. In it, policymakers pledge to bring clarity and coherence to the AI regulatory landscape, with the aim of making responsible innovation easier.

 

However, it’s important to remember that this approach aims to strengthen the UK’s position as an AI leader – rather than encourage innovation equally around the world. Governments have shown they can work together, as highlighted by the recent UK-US agreement on AI safety. But can the US ever agree a similar pact with China, for instance?  

 

Instead, many industry experts have advocated self-regulation. Alongside the need for agility in response to the technology's evolution, they claim that outsiders unfamiliar with AI's intricacies will never be suited to drafting effective guardrails and may even stifle its development.

 

But then again, governments regulate industries from pharmaceuticals to nuclear power and there is little talk of suppressed innovation there. And can self-regulation ever be truly safe without some kind of overarching supervision? We need to find a middle ground.

 

It’s all about collaboration

In October, the World Health Organisation released new regulatory considerations on AI for health. These highlight the positive progress in healthcare analytics alongside the potential harm of rapid deployment. But the WHO itself also serves as a good example of what may be necessary to promote safe, industry-wide innovation.

 

A similar intergovernmental organisation, backed by private donors and experts, could address public concerns while promoting innovation and progress in AI for all.

 

Ultimately, regulations that power the next era of global AI innovation must be in the hands of a mixture of technical and societal bodies that truly understand AI’s impact. Policymakers must turn to industry experts and scientists to uncover what is truly needed to safely regulate the industry, without hindering innovation.

 

Only then will we see the technology’s true potential – safe in the knowledge that each step forward has been comprehensively evaluated by those in the know.

 

 

Rosanne Kincaid-Smith is COO at Northern Data Group

 


 
