
Sustainable AI must feature in banks’ code of ethics, says report

  • AI’s unchallenged and unregulated growth is raising concerns about its social, ethical and sustainability impact
  • EU and international companies still do not reference AI within their code of ethics
  • Companies need to look at the guidelines already in place to develop sustainable AI policies

The world is witnessing the unprecedented growth of artificial intelligence (AI). But as the technology continues to evolve at a rapid pace and infiltrates new market segments unchallenged and unregulated, concerns are being raised about the social, ethical and sustainability impacts of its unsupervised use. 

These concerns have only been exacerbated by recent events, such as the temporary ousting of OpenAI’s CEO Sam Altman by his company’s board, allegedly due to concerns about his pursuit of more advanced forms of AI and its potential implications for humanity. 

‘Unmasking Sustainability in AI’, a report recently published by London-based rating agency Standard Ethics, examined AI governance at some of the largest listed companies. The report states that many still treat AI as just another technology, rather than developing specific guidelines or policies that address its ethical, ESG and sustainability impact.

“AI is not just a technological tool that can replace human intelligence, but an instrument that can cause ethical and sustainable issues,” says Beatrice Gornati, vice-president of the research office at Standard Ethics. “The regulation of AI is a sustainable issue, and we believe that the technology should be included among the main topics covered within [a company’s] code of ethics.” 

Standard Ethics’ research assessed AI governance at 240 of the largest listed companies inside and outside the EU, based on four key criteria:

  • Whether AI is acknowledged as having an ethical (and ESG) impact, and is therefore referred to within codes of ethics; 
  • Whether companies felt the need to publish a policy on AI in order to be accountable to stakeholders for their practices; 
  • Whether companies disclose secondary documents on the subject, even if less comprehensive than a specific policy; 
  • Whether any policies dedicated to AI are clearly aligned with international guidelines.

AI is more than just another technological instrument

Although data biases or gaps in AI models can lead to harmful outcomes, particularly for underrepresented or marginalised groups in society, Standard Ethics found that none of the EU or non-EU companies it surveyed referenced AI within their code of ethics or code of conduct.

Only 9% of companies based in the EU published policies on AI, with the majority of those belonging to the banking sector. Among companies outside the EU, none had an AI policy. 

In terms of “more generic documents”, as opposed to specific AI policies, 64% of companies in the EU published documents on AI, compared to 55% of non-EU companies. At an industry level within the EU, 89% of companies working in finance had generic documents, compared to 88% in health, 86% in technology, and 70% in utilities. A similar trend was observed at the sector level for non-EU companies. 

Despite the report’s findings, Ms Gornati says companies are still making important progress in the right direction by acting with caution when using AI. She says banks and financial institutions are among the firms leading the way in the quest for more sustainable AI usage. “Our studies have found that the finance sector is more conscious of the sustainable implications of AI which, on the one hand, can help banks manage risks, but it can also have an impact on their relationship with clients,” she says.

“Within the banking sector, if artificial intelligence does not include human control at the end of the process, it can generate a number of issues including lack of transparency in investment decisions and increase in market volatility,” says Ms Gornati. 

Incorporating sustainable AI policies

According to Standard Ethics’ overview, for a company’s AI policy to be considered compliant with international strategic objectives, it needs to align with guidelines issued by reputable sources, such as the United Nations or the Organisation for Economic Co-operation and Development. 

Ms Gornati says that the best strategy for financial institutions and other companies looking to develop sustainable AI policies is to start by looking at the guidelines already in place, which provide a better understanding of the technology.

A guide published by the Corporate Governance Institute highlights some of the steps that firms can take to improve their AI policies, which include establishing a working group to oversee policy creation and familiarising board members with AI concepts. 

However, some firms still don’t fully grasp the magnitude of AI, says Ms Gornati. It is therefore paramount for companies to follow international standards to understand the technology better. “It is important to be prepared because there is a real misconception of what AI is, and companies may think that they don’t need to add it to their policies, or they link it to cyber security; however, AI is something much larger and complex than that,” she adds.
