Experts warn govt’s AI regulations carry risks

Researchers and business figures are cautiously welcoming the government’s proposal of economy-wide regulations for artificial intelligence, but many are wary of the technology’s rapid development and unknown risks, while others are concerned the federal rules could be a drag on Australian innovation and investment.

Minister for Industry and Science Ed Husic revealed the plan for European Union-style regulations on Thursday, which included mandatory guardrails for high-risk uses of AI and a Voluntary AI Safety Standard that he encouraged organisations to adopt.

Citing research which showed Australians wanted stronger protections and guidelines on AI, Husic said, “we need more people to use AI and to do that we need to build trust.”

Erica Mealy, a lecturer in computer science at the University of the Sunshine Coast, said encouraging more people to use AI was “particularly dangerous against the backdrop of significant potential for harm in areas of cyber security, fraud, automation bias [the favouring of decisions from automated systems], and discrimination”.

“While I welcome the need to protect Australians and our businesses, telling us to use the technology without educating people on when, when not, and how [or] how not to utilise it puts Australians at further risk,” she said.

Dr Carolyn Semmler of the University of Adelaide said that while she welcomed the government’s proposal of mandatory guardrails for high-risk AI, there was still “a large chasm” between our understanding of AI models and how they work in practice.

“What is most concerning is that there is a lack of evidence regarding the types of risks that become evident when AI systems are implemented,” she said.

Impacts on innovation and Australian businesses

Bran Black, chief executive of the Business Council of Australia, argued businesses should be encouraged to use AI to increase productivity, and said too much regulation could “potentially impact our ability to be internationally competitive”.

“We acknowledge the need for a regulatory framework with respect to AI, but we should reject the idea that more regulation means better outcomes — ultimately it’s critical that we apply great scrutiny to any potential new regulations to ensure protections don’t unreasonably come at the expense of innovation,” he said.

Joel Delmaire, chief product officer at recruitment software company JobAdder, said for firms which operated internationally, like his own, “complying with these regulations can disadvantage them compared with companies that don’t have such rules in their home market”.

“The EU has taken a prescriptive and punitive view on AI regulation, which is likely to discourage investment and innovation, whereas the US has taken a more principle-based approach, but it lacks any teeth,” he said.

“The UK today seems to be the best middle ground and offers a framework that can adapt better to innovation.”

Minister for Industry and Science, Ed Husic, revealed the plan for EU-style regulations on Thursday. Photo: Parliament House / YouTube

Some experts said that despite the government’s proposed regulations, Australian businesses and civil society would still struggle to keep up with Big Tech firms in AI.

Dr Daswin De Silva, a professor of AI and analytics from La Trobe University, said guardrails for accountability and regulatory compliance were “likely to put off many small and medium-sized enterprises looking to adopt AI … instead of encouraging innovation and responsible adoption”.

De Silva argued low and medium-risk implementations of AI needed to be better explained by the government and its guardrails needed to be simplified “for an average Australian organisation”.

“If not, we will end up in a further round of consultant-speak that does not capitalise on the opportunities of AI for all Australians,” he said.

Others, such as Alex Jenkins, director of the WA Data Science Innovation Hub at Curtin University, said a lack of immediate mandatory guidelines for businesses risked “significant gaps in accountability”, but could also give local startups “the freedom to experiment and grow without excessive regulation”.

“They also provide businesses with confidence when dealing with international markets that have stricter AI laws, helping to ensure compliance and smooth interactions,” he said.

“However, for Australian consumers, the lack of mandatory oversight leaves open the possibility of harm from AI systems, making it clear that stronger regulations may still be needed to protect public interests while supporting innovation.”

The federal government said it was seeking feedback on types of high-risk AI which may need to be banned in Australia, following in the footsteps of the EU.

‘The biggest risk is that Australia misses out’

Toby Walsh, a member of the AI Expert Group whose work helped inform the government’s proposed regulations, said he welcomed mandatory guardrails but argued the government was underinvesting in AI.

He said it meant “the biggest risk is that Australia misses out” on capitalising on the technology.

“Compared to other nations of a similar size, like Canada, we are not making the scale of investment in the fundamental science,” he said.

Responding to questions from journalists in Canberra on Thursday about the government’s level of funding for AI, Minister Husic pointed to Labor’s promised $1 billion investment in critical technologies, including AI, as part of its National Reconstruction Fund.

He also argued there was “a lot of private investment” in AI both in Australia and internationally.

The government was previously criticised over the $39.9 million it allocated over five years in the latest federal budget to develop AI policies and capabilities.

Walsh, who is a member of the AI Ethics Committee at ACS (the publisher of Information Age), said he was also concerned by the speed of Australia’s regulatory developments.

“Good regulation takes time. The EU started on their journey to regulate AI over five years ago, and the EU’s AI Act is now just coming into force,” he said.

“So, while guardrails are to be welcomed, I am concerned how long it is going to take to get them in place.”

Professor Chennupati Jagadish, the president of the Australian Academy of Science, said his organisation believed Australia could lead in AI if the government scaled up its investments in AI science and high-performance computing facilities.

Associate Professor Niusha Shafiabady from the Australian Catholic University said while regulation was made difficult by private AI companies being “protective of their data, algorithms, and methods”, Australia also needed to bolster its AI workforce.

“A lot of the proposed regulation requires knowledge and expertise that we just don’t have,” she said.

“If we want to have safer AI, we need to start investing in training the experts.”
