
The future of AI is responsible AI

Artificial intelligence (AI) is no longer optional. Investment in AI across ASEAN has grown steadily, with Singapore standing out as the region’s leader at US$68 (S$90.4) per capita.

As AI continues to get smarter, businesses and organisations will have to invest in developing systems that can solidify the technology’s trustworthiness, transparency and accountability, said experts at IBM TechFest. 

Held at Suntec Singapore Convention and Exhibition Centre last month, IBM TechFest featured interactive showcases on the impact of technological advances such as AI, automation, cyber security and quantum computing. Participants also attended seminars with futurists, analysts and innovators, and met with developers from IBM Research and Labs teams across the globe.

Throughout the three-day event, a key theme surrounding AI emerged: With the whirlwind speed of advancement in AI, how can we better harness and use it with peace of mind?  

RESPONSIBLE DATA COLLECTION, USAGE AND SHARING

Data is the foundation for AI, and organisations will realise the value of AI only when high-quality data is accessible and usable for stakeholders.

Traditionally, the data scientists who feed data into algorithms and decide how that data will be used may not consider the fairness of the resulting AI decisions. A prejudiced dataset could produce bias against certain groups and unintended outcomes. For example, women could end up being excluded from loans and other financial services.
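
To make the idea concrete, one very simple bias check is to compare outcome rates across groups before a model is ever deployed. The Python sketch below computes a disparate impact ratio on a hypothetical loan-approval table; the column names, data and 0.8 threshold (the common “four-fifths rule”) are illustrative assumptions, not details from the article.

```python
import pandas as pd

# Hypothetical loan-approval records; columns and values are illustrative only.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group.
rates = df.groupby("gender")["approved"].mean()

# Disparate impact ratio: unprivileged group's rate divided by privileged group's rate.
ratio = rates["F"] / rates["M"]
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# The "four-fifths rule" commonly flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Potential bias against one group - review the dataset and model.")
```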

“AI is often viewed as a black box, producing results without understanding the underlying process. Responsible AI aims to increase transparency and accountability,” said Dr Li Xuchun, the Monetary Authority of Singapore’s (MAS) deputy director and head of AI Development Office.

According to Mr Tang Ming Fai, chief information officer and chief data officer at Temasek Polytechnic, improving the harmonisation of data from different sources – unifying separate data formats under one dataset – is critical.

“Multiple platforms and tools are available to tap into different data sources, but definitions used by different organisations make it tricky to harmonise data. It is a misconception that combining data from similar organisations will always lead to better models. Data must be harmonised first to ensure compatibility,” he said.
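
In practice, harmonisation usually means agreeing on one schema – field names, units and category codes – and mapping every source onto it before any merging happens. The following Python sketch illustrates the idea with two made-up sources; all field names, values and the currency conversion rate are assumptions for illustration only.

```python
import pandas as pd

# Two hypothetical sources describing the same customers with different definitions.
source_a = pd.DataFrame({"cust_id": [1, 2], "income_sgd": [52000, 87000], "sex": ["F", "M"]})
source_b = pd.DataFrame({"customerId": [3, 4], "annual_income_usd": [40000, 65000],
                         "gender": ["female", "male"]})

USD_TO_SGD = 1.35  # assumed conversion rate, for illustration only

# Map both sources onto one agreed schema: customer_id, income_sgd, gender (F/M).
a = source_a.rename(columns={"cust_id": "customer_id", "sex": "gender"})
b = source_b.rename(columns={"customerId": "customer_id"})
b["income_sgd"] = b["annual_income_usd"] * USD_TO_SGD
b["gender"] = b["gender"].map({"female": "F", "male": "M"})
b = b[["customer_id", "income_sgd", "gender"]]

# Only once the definitions agree is it safe to combine the datasets.
harmonised = pd.concat([a, b], ignore_index=True)
print(harmonised)
```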

Besides equitable data collection, responsible data sharing is also key to building trust in AI. The collection, usage and distribution of data today require different levels of compliance to protect privacy and prevent misuse. Yet in a recent global report, fewer than two-thirds of chief data officers in ASEAN said they were compliant with data legislation and standards.

More organisations are also recognising the importance of data security. Earlier this year, for example, the Singapore Land Authority appointed Indian multinational information technology consulting company Tech Mahindra – an IBM strategic business partner – to develop an online platform that secures e-payments, digitised documents and signatures for property conveyancing across all types of properties.

“The agency that owns the dataset must act as an arbitrator to organise discussions. Misinterpretation of data will otherwise be the result, leading to incorrect conclusions,” added Mr Kitman Cheung, APAC technical sales director at IBM Software Group.

BUILDING TRUST, TRANSPARENCY AND ACCOUNTABILITY

Four years ago, MAS introduced the FEAT – Fairness, Ethics, Accountability and Transparency – principles and the Veritas project to guide the financial sector in using AI responsibly. 

“The Veritas initiative aims to develop a concrete methodology, defining fairness, transparency and internal accountability, with the help of tech partners, including IBM,” said Dr Li. “The second goal of Veritas is to build an open-source toolkit for the industry to implement this methodology.”

A report, co-authored by the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, details how IBM is doing its part to advance ethical AI technology, from establishing an AI ethics board to developing a set of principles and pillars that includes AI explainability and tools to test the reliability of AI predictions.

With AI drawing the attention of legislators worldwide, explainable results are crucial when it comes to justifying the performance of AI algorithms and models. Customers also deserve accountability from organisations for analytics-based decisions. As such, organisations today will require proper governance baked into their AI strategy.
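
One widely used way to make a model’s results explainable is to report which inputs actually drive its predictions. The sketch below uses scikit-learn’s permutation importance on a toy dataset; the data, model choice and feature labels are illustrative assumptions and not tied to any system mentioned in the article.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy classification data standing in for, say, loan-approval features.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```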

INVESTING IN RESPONSIBLE AI 

To promote public trust in AI and minimise potentially negative or harmful consequences arising from its use, companies need to prioritise investing in AI governance. That includes creating policies, assigning decision rights and ensuring organisational accountability for risks and investment decisions. 

For a start, Dr Li urged organisations to keep responsible AI at their core. “Educate board members and senior management about the benefits of AI and responsible AI,” he recommended. “Build talent by partnering with academia. Train and equip employees with the necessary AI and data skills. Ensure that AI is not a standalone solution, as it requires an understanding of the business needs and customer demands. And focus on creating business value and revenue as the end goal.”

Developing trustworthy AI may be a long and bumpy road, but its importance cannot be overstated. Embracing responsible AI frameworks will not only help organisations build stronger infrastructure, but also improve their business reputation and competitiveness in the long run.

Learn more about trustworthy AI and how IBM is working to ensure that AI systems are fair, robust, explainable and accountable.
