AI Transparency Is the Key to Trust, But When Will We Achieve It?

One of the ways to build trust in artificial intelligence (AI) is to introduce more transparency into its decision-making processes, but this is proving to be more difficult than it sounds. Transparency is a complex notion with multiple layers and facets, and that complexity is leading many enterprises to push it aside in the drive to create competitive advantage through digital intelligence.

Inside Artificial Intelligence

All AI models are built on algorithms, so it is possible to examine them at a granular level to see how they work and why they do what they do. In fact, it is easier to understand AI’s inner workings than those of the human mind. For now, however, the skills and knowledge needed to penetrate AI’s digital psyche reside with highly trained data scientists, who are in short supply and command high salaries.

Numerous software platforms have also hit the channel recently, all claiming to bring transparency to AI. So far, however, none has delivered the breakthrough needed to calm fears of AI running amok and harming the very processes it is supposed to improve.

That leaves the enterprise in a tough position. Are there ways, either technical or non-technical, to move the transparency ball closer to the goal line of trustworthy AI?

What Is Transparency?

One of the first steps on this journey is to define exactly what we mean by transparency. Risk management specialist Holistic AI notes that transparency is an umbrella term encompassing a range of concepts, including explainable AI (XAI), interpretability, and ethics.

On a more practical level, however, transparency relies on three core competencies:

  • Explainability of technical components (namely, the internal workings of the algorithm);
  • System governance (functions like process evaluation and documentation);
  • Transparency of impact (purposes and capabilities that are open and easily communicated to stakeholders).

Each of these domains, in turn, consists of numerous components. Technical explainability, for example, can be model-specific or model-agnostic, and either local or global in scope, as sketched below. Governance can incorporate accountability, regulatory requirements, policy development, and even legal liability.
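To make the distinction concrete, the snippet below is a minimal sketch of one model-agnostic, global technique, permutation importance, which measures how much a model’s score drops when each feature is shuffled. The model, data, and metric are hypothetical placeholders, not anything prescribed by Holistic AI.

```python
# Minimal sketch of a model-agnostic, global explanation technique
# (permutation importance). Any fitted model exposing .predict() works;
# the model, data, and metric here are assumed placeholders.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Mean drop in score when a feature is shuffled ~ its importance."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```

A local technique such as LIME or SHAP answers the complementary question of why a single prediction was made, rather than which features matter overall.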

Transparency of impact can introduce elements ranging from data ingestion and bias to output management and intent.

Differing Viewpoints

Clearly, there are a lot of variables that go into defining transparency, which means it will most likely be implemented in various ways across the enterprise community and interpreted through a wide range of viewpoints. Recent research from Mozilla suggests that while most organizations desire transparent AI, there are few incentives to take the crucial steps needed to achieve it. In fact, issues like data-sharing, even internally, are acting as impediments, and many organizations remain largely unconcerned about the unintended consequences of their AI deployments.

Mozilla says part of the problem is that even a single model will present different transparency requirements to different people. Data scientists, architects, and others responsible for building the model don’t have the same goals, or the same informational requirements, as those responsible for deployment and management. End users, meanwhile, are operating from an entirely different viewpoint, as are regulators, auditors, and the public at large. Finding an all-encompassing solution that serves all of these needs is a tall order indeed.

Visibility Through Blockchain

At heart, AI transparency is a function of data collection and analysis, and that data must itself be trustworthy before it can vouch for the model it represents. One way to do this is through blockchain, says Techopedia’s John Isige. By automatically recording every algorithmic transaction on a blockchain, the model gives data scientists, and perhaps lay users as well, all the information needed to determine quickly and accurately how and why a particular outcome was reached.

The immutable nature of blockchain provides a record of every action taken in the development of the model, creating a framework that enables fine-grained analysis of key operations (a minimal sketch of such a record follows the list), including:

  • The model’s objectives
  • Key design elements, such as machine learning algorithms
  • The rules and guidelines used to construct the model
  • The application and reapplication of trusted, audited, and verified variables
  • Specifications of the training and testing data
  • Procedural and ethical standards
  • Evaluations for robustness and stability
  • Testing and validation checklists
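
As a rough illustration, the sketch below implements a simplified, append-only, hash-chained audit log in Python, a stand-in for the blockchain record Isige describes. The AuditChain class, its event fields, and the example entries are all hypothetical, not part of any product mentioned here.

```python
# Simplified, hypothetical stand-in for a blockchain-style audit trail:
# each record carries the hash of the previous one, so any later edit
# to a logged event breaks the chain and is detectable.
import hashlib
import json
import time

class AuditChain:
    def __init__(self):
        self.blocks = []

    def record(self, event: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"timestamp": time.time(), "event": event, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        block = {**payload, "hash": digest}
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash; a tampered block breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            payload = {k: b[k] for k in ("timestamp", "event", "prev")}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if b["prev"] != prev or b["hash"] != digest:
                return False
            prev = b["hash"]
        return True

# Hypothetical usage: log key design decisions as they are made.
chain = AuditChain()
chain.record({"stage": "objective", "detail": "churn prediction"})
chain.record({"stage": "training_data", "detail": "2023-Q4 snapshot"})
assert chain.verify()
```

A real deployment would distribute such records across multiple parties rather than keep them in a single process, which is what makes a blockchain harder to tamper with than a local log; the hash-linking principle, however, is the same.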

With this data in hand, organizations can then move to the next phase: ensuring that their models are behaving in an ethical and responsible manner.

Ethical Dilemma

Once again, though, we run into the problem of whose ethics we are enforcing once the transparency problem has been tackled. Business leaders (and politicians) often have far different notions of what is ethical than the general public does. According to Elizabeth (Bit) Meehan, a political science Ph.D. candidate at George Washington University, turning a transparent AI model into an ethical one will require input from nothing less than the full spectrum of civic, government, and business institutions, all of which must agree on at least a basic framework of rules and modes of behavior, and all of this must be carried out on a global scale.

Meehan argues that transparency rules already exist in areas such as securities trading; hazardous chemical development, use, and disposal; and automobile safety, but that it remains difficult to enforce the disclosures needed to ensure bad actors are properly sanctioned. The ongoing dispute over TikTok offers a good insight into the challenge AI presents: without fully understanding what people want to know about a technology, establishing transparency codes and laws will be a tough hill to climb.

The Bottom Line

In this light, it would probably be best to think of transparency not as a goal to be achieved or a target to be hit, but as an ongoing process of refinement and understanding. AI has the capacity to achieve great things, but it can also go astray, just as human beings do.

Delving into the math to understand why it behaves in a certain way is a start, but true transparency will also require a deep look into what we want AI to do for us and why.
