The federal government has proposed European Union-style regulations for artificial intelligence, including 10 mandatory guardrails for high-risk uses of the technology and possible bans on systems that may present a danger to society.
The Minister for Industry and Science, Ed Husic, revealed the plan on Thursday as the government released a discussion paper to begin its next round of consultations.
“This is probably one of the most complex policy challenges facing governments the world over,” Husic told a press conference.
“The Australian government is determined that we put in place the measures that provide for the safe and responsible use of AI in this country.”
High-risk uses of AI should be designed and tested in a way that mitigated their risks, but also enabled “human control or intervention in an AI system to achieve meaningful human oversight”, the government proposed in its discussion paper.
Husic said generative AI — which could be used to generate anything from images to video, audio, and computer code — was “more likely to be high-risk”.
Another example he provided was when AI was used in recruitment settings.
“AI has been shown to make bad decisions that discriminate against people based on their race, gender, or age,” Husic said.
Organisations which developed or used high-risk AI systems should inform end users about when and how AI was used, be transparent about the data and models involved, and allow “people impacted by AI systems to challenge [its] use or outcomes”, the government’s proposal said.
“This is going to take some time to implement, and we’ll consult over the next four weeks about these proposed guardrails,” Husic said.
“After that, we’re going to decide on the best legislative approach to take — that could include updating current legislation or bringing in an Australian AI Act.”
The possibility of fines or repercussions for organisations not following the rules would also be discussed during consultations, Husic said.
The opposition argued that another round of consultations showed “further evidence of Labor leaving this issue in the too hard basket”.
Coalition MPs Paul Fletcher and David Coleman said: “Of course we need to be alive to the risks associated with this technology and its implications for legislation and regulations, but the Albanese government must also provide leadership and start making decisions.”
The government’s Thursday announcement also included a Voluntary AI Safety Standard to give all Australian businesses and organisations a guide for their uses of AI technologies.
“This gives them the time to prepare for the mandatory guardrails, and it will give Australians peace of mind that protections are being put in place,” Husic said.
Australia’s National AI Centre found around 80 per cent of businesses believed they were implementing AI correctly, but less than a third were following AI best practice, Husic noted.
“Anyone who thinks that they can close their eyes, or that we don’t need to act in this space — I think that’s fanciful,” he said.
Thursday’s announcements were the culmination of the government’s work with the local technology industry, as well as an AI expert group it first convened in February.
Individuals and groups that made submissions to the government regarding its proposal for mandatory AI guardrails have been asked whether they would consent to AI being used to analyse their responses and personal information, although consenting was not compulsory.
It comes after documents released publicly this week showed that an exploratory trial of AI summarisation by Australia’s financial services regulator earlier this year found a different model, tested by that agency, was outperformed by humans.
Powerful AI with ‘unacceptable risk’ could be banned
While Australia previously appeared to be steering clear of outright bans on some AI technologies, the government’s proposal for mandatory guardrails for high-risk AI stated that authorities were seeking feedback on “types of AI use that could present an unacceptable level of risk in Australia and should be banned”.
The EU’s AI Act, which came into force on 1 August, included provisions to ban AI applications which were deemed an unacceptable risk, such as systems which carried out real-time biometric identification of people in public spaces or manipulated human behaviour.
The Australian government said it recognised that general-purpose AI models (GPAI) were “the next evolution of AI” and could be used to “perform levels of human-like general cognition previously only seen in humans”.
The government pointed to US company OpenAI’s GPT-n, DALL-E, and Sora models, which could “generate ‘human-like’ text, images and videos based on simple user prompts”, and argued highly advanced GPAI models could have “capacity to cause harms to people, community groups and society at a wide-scale and speed”.
The government pointed to AI video generator Sora as an example of “human-like general cognition previously only seen in humans”. Photo: Shutterstock
“As GPAI models become more powerful, and the ‘frontiers’ of AI continue to advance, it becomes more difficult to predict all the foreseeable applications and risks of AI,” the government said.
“… By the time a risk or harm may be foreseeable, it may be too late to apply preventative measures.”
The government noted that it was seeking feedback on GPAI developments and was aware that the EU prohibited certain uses of the technology.
Recent consultations on AI had shown that Australia’s current regulatory system was “not fit for purpose to respond to the distinct risks that AI poses”, the government wrote.
Jonathan Tanner, a senior director at enterprise AI company Pega, said there was a risk that loopholes would be taken advantage of amid “piecemeal implementation of legislation by different governments”.
“This technology has no respect for different jurisdictions, so a combined effort needs to be employed to ensure consistency and to prevent bad actors taking advantage of low regulation regimes,” he said.
The Australian Academy of Technological Sciences and Engineering (ATSE) welcomed the government’s proposals and called for further investment in local AI organisations.
“Investing further in local AI innovations will simultaneously create new AI industries and jobs here in Australia and reduce our reliance on internationally developed and maintained systems,” ATSE CEO Kylie Walker said.
“Local AI industries will also give the Australian government greater ability to regulate AI development in line with Australian community values and expectations.”