
Our Future with AI Hinges on Global Cooperation – Sponsor Content

Opinion

To harness the power of AI for good, democratic societies need to work together.

By Kay Firth-Butterfield

Since large language models began making headlines in the fall of 2022, millions of words have been written about the dangers of AI. Those of us who work on these technologies and their implications have been talking about this since 2014, but now the conversation has gone mainstream—so much so that it risks drowning out necessary discussion of how we might use AI to confront the world’s most pressing challenges.

The solution is governance. The AI field needs the public's trust to realize the technology's benefits, and it won't earn that trust without regulation. We must ensure the safety of the technology as it is used today, a practice known as responsible AI, while also looking to the future. More than 60 percent of Americans say they are concerned about AI's negative impacts, according to an AI Policy Institute poll from spring 2023, but without strong laws we will neither prevent those harms nor have the tools to deal with them when they arise.

Yet just when we need public trust in AI most, it is falling in democratic societies at an alarming rate. In a recent Luminate survey, 70 percent of British and German voters who identified as understanding AI said they were concerned about its effect on their elections. Similarly, an Axios/Morning Consult poll showed that more than half of Americans believe AI will definitely or probably affect the 2024 election outcome, while more than one-third expect AI to erode their own confidence in the results. More generally, two in five American workers are worried about losing their jobs to AI, according to an American Psychological Association poll, while Gallup found that 79 percent of Americans do not trust companies to self-govern their use of AI. We will never realize the technology's economic and other positive benefits without addressing these concerns.

In 2021, however, an analysis from PwC offered more hopeful results. In a review of more than 90 sets of ethical AI principles from groups around the world, researchers found that all of them converged on nine central ethical concepts, including accountability, data privacy, and human agency. Now, governments need to work together to figure out how to make these concepts a reality by building a coalition of the willing across nations that can do the hard work of planning for an uncertain future.

If we continue to simply react to technological advances without thinking ahead, there is a very real risk that we will arrive in 2050 to find that we live in a world that no longer meets our needs as humans. The European Union has thus far chosen a risk-mitigation approach, which addresses current problems but not the essential issue of how humans wish to interact with AI in the future. Individual U.S. states are enacting their own laws, which could slow innovation and make cooperation more difficult.


It is guaranteed that future generations will work beside AI systems and robots. But because AI regulation has been slow to develop, we are currently relying on existing laws to drive best practices. Rather than simply attempting to mitigate harm, we should be creating best practices around what kind of AI we want in the world and how to build it. Only then will we ensure our children live in a human-focused society served by AI, rather than in an AI world occupied by humans.

By working together, democratic governments around the world—together with stakeholders from civil society, academia, and business—can create laws not to address every specific situation (which would be impossible), but instead to outline specific requirements organizations around the world must follow when developing, deploying, and using AI systems. Many who use AI have little understanding of the harmful effects that could result even when they think they are using it for good, so it is up to policymakers to codify priorities like privacy and data security. This would require AI development teams to adopt proven best practices and adhere to all existing and new legislation for creating responsible AI systems from the outset.

It is tempting to think the domestic governance gap might be filled by international regulation or treaties, but there are risks to this approach: The UN Security Council is often at an impasse even on harm-mitigation topics, let alone on ones that require forward thinking. For example, despite calls from the UN secretary-general and smaller nations, we have waited, without result, since 2013 for an agreement on the control of lethal autonomous weapons. If the Security Council is unable to deliver that kind of policy, it will likely struggle to proactively agree on an AI policy that suits all stakeholders. The United Nations is expected to name members of a high-level panel on AI, which is a welcome development, but it is unlikely that the creation of an advisory board will result in meaningful regulation as quickly as we need it. The world simply does not have five years to figure out its next steps.

But international cooperation need not run through the UN. Promising suggestions include emulating the model of the European Organization for Nuclear Research, an intergovernmental organization that includes 23 member states, or Gavi, the Vaccine Alliance. Taking that path would ensure that the Global North does not hold unilateral control over AI technology; it would reduce inequality and help ensure that AI serves many different cultures. Governments around the world would come together to envision a positive future for their citizens with AI and create the regulations necessary to achieve it.

Governance is hard. True global governance is harder. Even this faster path will take time, requiring companies designing, developing, and using AI to self-regulate in the meantime, with full support from their boards and C-suites. But ultimately, collaboration is necessary to build a world in which humanity benefits from AI rather than being forced to adapt to it. A comprehensive approach is essential, and we must act now.

Kay Firth-Butterfield is the CEO of Good Tech Advisory, the former head of artificial intelligence at the World Economic Forum, and the world’s first chief AI ethics officer.
