
Australia’s Gene Tech Regulation: Model for AI Policy

Since 2019, the Australian Department of Industry, Science and Resources has been striving to make the nation a leader in “safe and responsible” artificial intelligence (AI). Key to this is a voluntary framework based on eight AI ethics principles, including “human-centred values”, “fairness” and “transparency and explainability”.

Authors

  • Julia Powles

    Associate Professor of Law and Technology; Director, UWA Tech & Policy Lab, Law School, The University of Western Australia


  • Haris Yusoff

    Research Associate at UWA Tech & Policy Lab, The University of Western Australia

Every subsequent piece of national guidance on AI has spun off these eight principles, imploring business, government and schools to put them into practice. But these voluntary principles have no real hold on organisations that develop and deploy AI systems.

Last month, the Australian government started consulting on a proposal that struck a different tone. Acknowledging “voluntary compliance […] is no longer enough”, it spoke of “mandatory guardrails for AI in high-risk settings”.

But the core idea of self-regulation remains stubbornly baked in. For example, it’s up to AI developers to determine whether their AI system is high risk, by having regard to a set of risks that can only be described as endemic to large-scale AI systems.

If this high hurdle is met, what mandatory guardrails kick in? For the most part, companies simply need to demonstrate they have internal processes gesturing at the AI ethics principles. The proposal is most notable, then, for what it does not include. There is no oversight, no consequences, no refusal, no redress.

But there is a different, ready-to-hand model that Australia could adopt for AI. It comes from another critical technology in the national interest: gene technology.

A different model

Gene technology is what’s behind genetically modified organisms. Like AI, it raises concerns for more than 60% of the population.

In Australia, it’s regulated by the Office of the Gene Technology Regulator. The regulator was established in 2001 to meet the biotech boom in agriculture and health. Since then, it’s become the exemplar of an expert-informed, highly transparent regulator focused on a specific technology with far-reaching consequences.

Three features have ensured the gene technology regulator’s national and international success.

First, it’s a single-mission body. It regulates dealings with genetically modified organisms:

to protect the health and safety of people, and to protect the environment, by identifying risks posed by or as a result of gene technology.

Second, it has a sophisticated decision-making structure. Thanks to it, the risk assessment of every application of gene technology in Australia is informed by sound expertise. It also insulates that assessment from political influence and corporate lobbying.

The regulator is informed by two integrated expert bodies: a Technical Advisory Committee and an Ethics and Community Consultative Committee. These bodies are complemented by Institutional Biosafety Committees supporting ongoing risk management at more than 200 research and commercial institutions accredited to use gene technology in Australia. This parallels best practice in food safety and drug safety.

Third, the regulator continuously integrates public input into its risk assessment process. It does so meaningfully and transparently. Every dealing with gene technology must be approved. Before a release into the wild, an exhaustive consultation process maximises review and oversight. This ensures a high threshold of public safety.

Regulating high-risk technologies

Together, these factors explain why Australia’s gene technology regulator has been so successful. They also highlight what’s missing in most emerging approaches to AI regulation.

First, the mandate of AI regulation typically involves an impossible compromise between protecting the public and supporting industry. As with gene regulation, it seeks to safeguard against risks. In the case of AI, those risks would be to health, the environment and human rights. But it also seeks to “maximise the opportunities that AI presents for our economy and society”.

Second, currently proposed AI regulation outsources risk assessment and management to commercial AI providers. Instead, it should develop a national evidence base, informed by cross-disciplinary scientific, socio-technical and civil society expertise.

The argument goes that AI is “out of the bag”, with potential applications too numerous and too mundane to regulate. Yet molecular biology methods are also well out of the bag. The gene tech regulator still maintains oversight of all uses of the technology, while continually working to categorise certain dealings as “exempt” or “low-risk” to facilitate research and development.

Third, the public has no meaningful opportunity to assent to dealings with AI. This is true regardless of whether it involves plundering the archives of our collective imaginations to build AI systems, or deploying them in ways that undercut dignity, autonomy and justice.

The lesson of more than two decades of gene regulation is that regulating a promising new technology until it can demonstrate a history of non-damaging use to people and the environment doesn’t stop innovation. In fact, it safeguards it.

The UWA Tech & Policy Lab receives funding from nationally competitive research grants and philanthropic partners. The present research was supported by GA308883: Effective Ethical Frameworks for the State as an Enabler of Innovation, funded by the Department of Foreign Affairs and Trade.

Julia Powles is the Director of the Lab and has served as an independent member of the National AI Centre’s Think Tank on Responsible AI, the Australian Government’s National Robotics Strategy Advisory Committee, and the Advisory Panel supporting the Australian Parliamentary Inquiry into the Use of Generative AI in the Australian Education System. Through each of these bodies, she has provided advice on comparative AI regulation.

Haris Yusoff does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature, and may have been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author(s).
