
UN Unanimously Adopts First Ever Global Resolution on AI Safety

  • In a landmark decision, the UN adopts the first-ever global resolution to protect citizens worldwide and their rights from the risks of AI.
  • Although the resolution initially faced resistance from Russia, China, and Cuba, ultimately all 193 members of the UN voted in support of it.

The United Nations General Assembly approved the first-ever global resolution on artificial intelligence on Thursday. The resolution recognizes the danger of improper development and use of AI and encourages nations worldwide to protect their citizens and their basic human rights.

It was first introduced by the US and co-sponsored by China and over 120 other nations.

"Today, all 193 members of the United Nations General Assembly have spoken in one voice, and together, chosen to govern artificial intelligence rather than let it govern us." — Linda Thomas-Greenfield, U.S. Ambassador to the United Nations

The Conflict Between the US And Its Foreign Adversaries Over The Resolution

Although the resolution was crafted for the greater good of society, not everyone was fully on board with the original draft.

According to reports, it took the US months of negotiating and more than 40 sessions to get everyone to vote in its favor. A lot of "heated discussions" were also involved, according to an anonymous source.

Most of the objections came from Russia, China, and Cuba, which consented only after their suggested edits were incorporated. Two major reasons can be cited for this conflict.

  • The resolution positioned the US as a pioneering leader in AI governance, despite the fact that the EU and other states are far ahead in terms of AI legislation.
  • The Biden administration has been trying to increase its influence on intergovernmental bodies for a long time now, much to the displeasure of some nations.

These two factors combined might have rubbed the US’s adversaries the wrong way. However, now that the changes have been added to the proposal, it looks like everyone (including China, Russia, and Cuba) is happy to go ahead with it.

Read more: U.S. Government forms groundbreaking AI safety consortium to address mounting risks

Problems With The Resolution 

It’s great that the nations are recognizing the risks associated with the growing influence of AI and taking necessary steps to curb it. However, just like most of the steps taken in the past, this resolution seems to be toothless too. It’s non-binding and doesn’t come with any penalty for the offenders.

Similarly, in November last year, the US, China, the EU, and 25 other countries signed the historic Bletchley Declaration, which aimed to reduce the risks of AI while continuing to develop it. That declaration faced a lot of criticism for not including developing nations.

In short, despite several attempts from different nations, there’s still no single solid framework of AI rules that will apply globally and penalize the non-compliers.

But the good news is that apart from these international agreements, many nations are independently taking action to safely manage the AI boom. Here are a couple of them.

  • President Joe Biden recently signed an executive order that makes it mandatory for large AI companies to share their safety tests and other important details with the government for evaluation.
  • The order also introduced several new privacy guardrails for user data and addressed other concerns like content verification, intellectual property rights, and civil rights.

  • Similarly, the European Parliament this month approved Europe’s AI Act which touches on similar subjects. It aims to safely develop AI while protecting the fundamental rights of people and businesses.

While all these initiatives are a positive step towards AI regulation, they won't be of much help unless there's a strong penal framework. It's no secret that many countries are using AI to attack their adversaries through state-backed hacker groups. Without penalties, AI could easily turn into the next weapon of mass destruction.

Read more: UK’s AI safety strategy – a hollow promise?

