
AI Audit’s Objects, Credentialing, and the Race-to-the-Bottom: Three AI Auditing Challenges and A Path Forward

Image credit: Anton Grabolle / Better Images of AI / AI Architecture / CC-BY 4.0

The contours of artificial intelligence (AI) risk regulation are becoming clearer. A consistent theme across many countries’ regulations is some role for AI audits, with the EU’s AI Act, Canada’s proposed AI and Data Act, and the United States Executive Order on the “Safe, Secure, and Trustworthy Development and Use of AI” all including language to this effect. Private and non-profit auditing firms are also offering, or preparing to offer, varied AI audit services (such as training and credentialing AI auditors and performing AI ethics or data audits) to meet this growing demand for audits of AI systems and organizations.

However, efforts to meet the demand for AI audit services, and, by extension, any use of audits by public regulators, face three important challenges. First, it remains unclear what the audit object(s) will be: the exact thing that gets audited. Second, despite efforts to build training and credentialing for AI auditors, the supply of capable AI auditors lags demand. And third, unless markets have clear regulations around auditing, AI audits could suffer from a race to the bottom in audit quality. We detail these challenges and present a few ways forward for policymakers, civil society, and industry players as the AI audit field matures.

AI Auditing’s Three Challenges

Unclear Object and Assessment Methods for AI Audits

There are several possible AI audit objects. Early in an AI system’s development lifecycle, audits could focus on ethical and risk questions arising from the problems the system was devised to solve and the types of data used. Later in the lifecycle, model development and testing procedures offer an additional audit object. Alternatively, the organizations developing or managing AI systems could themselves be treated as the audit object, with audits assessing adherence to international management system standards such as ISO 42001, on the basis that AI systems can differ so much across industries and use cases that technology or performance standards become impractical.

A deciding factor in choosing the proper AI audit object should be a clear link between the use of the audited AI system and the associated risks. A common challenge for any of these audit objects is determining when the object is “good enough” to pass the audit: does it pass at 50%, 80%, 81%, or some other threshold measured against expectations of potential impacts or harms?
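Put concretely, the threshold question only becomes tractable once an audit criterion is paired with a measurable indicator and an explicit pass line. The sketch below is a minimal, hypothetical illustration of that pairing; the criterion names, measured values, and thresholds are invented for exposition and are not drawn from any standard or regulation.

```python
# Minimal, hypothetical sketch: an audit criterion only yields a pass/fail
# verdict once a measurable indicator and an explicit threshold are chosen.
# Criterion names, values, and thresholds below are illustrative only.
from dataclasses import dataclass


@dataclass
class AuditCriterion:
    name: str
    measured_value: float  # output of whatever assessment method is agreed on
    pass_threshold: float  # the contested "good enough" line

    def passes(self) -> bool:
        return self.measured_value >= self.pass_threshold


criteria = [
    AuditCriterion("documentation completeness", measured_value=0.86, pass_threshold=0.80),
    AuditCriterion("subgroup accuracy parity", measured_value=0.78, pass_threshold=0.80),
]

for c in criteria:
    verdict = "pass" if c.passes() else "fail"
    print(f"{c.name}: {verdict} ({c.measured_value:.2f} vs threshold {c.pass_threshold:.2f})")
```

As the example suggests, the substantive policy work lies not in the comparison itself but in agreeing on which indicators to measure and where to draw each line.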

Lack of Qualified and Credentialed AI Auditors

Individuals with the competencies required to conduct AI audits are in high demand. Private and public organizations are seeking employees with diverse AI skill sets, but the AI audit talent pool is far smaller than the current demand for AI audits. Moreover, AI systems themselves are transforming audit practice. Audit professionals therefore face two simultaneous pressures: the need to adapt their own audit practices to account for and leverage the power of AI, and the pressure to build competencies in the emerging AI audit field. Building these competencies takes time. In the meantime, competition for qualified and credentialed AI auditors can increase the cost and duration of AI audit services. The problem is compounded by the unclear AI audit object and assessment methods. We therefore cannot assume that the audit capacity will exist to conduct the regulated or voluntary audits envisioned in current and future policies.

Race-to-the-Bottom for AI Auditor Oversight, Training, and Credentialing

Industry associations, standard-setting bodies, and audit organizations play a large role in credentialing industry professionals and evaluating their ongoing competency against various bodies of knowledge. For AI, the International Association of Privacy Professionals recently released an AI Governance Professional Certification, and the International Algorithmic Auditors Association launched an AI and Algorithm Auditor Certificate Program. Both certification programs include AI audit training. More such programs are likely to form, creating competition to oversee, train, and credential AI auditors. Given the profit incentives at play, this competition can create a race to the bottom, where the quality of oversight, training, and credentialing is reduced in order to provide these services at a lower price and a faster pace.

A Path Forward

A coordinated effort by policymakers, industry, and civil society organizations can help determine the appropriate AI audit object(s). This effort could better operationalize various AI rules into measures that target specific risks and determine acceptable thresholds. For example, countless “bias tests” exist to measure bias in AI systems and recommend how to reduce it, but it is unclear which of these tests suit certain AI systems or use cases better than others. Work is being done to address these challenges: the Data Nutrition Label project scores datasets against various criteria to reveal and reduce their bias, and the Responsible AI Institute’s Responsible AI Certification Program includes AI system bias as a central assessment dimension. Both efforts move the field closer to judging which tests are most appropriate for specific risks.
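As a minimal illustration of what one such bias test can look like, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between two groups. The metric, the tolerance, and the toy data are chosen purely for exposition; nothing here implies this particular test is the right one for a given system or use case.

```python
# Minimal sketch of one common bias test: demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# The toy data and the 0.1 tolerance are illustrative assumptions only.
def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between group_a and group_b."""
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(positive_rate(group_a) - positive_rate(group_b))


# Toy data: binary predictions and the protected-group label for each record.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups, "a", "b")
print(f"demographic parity difference: {gap:.2f}")  # 0.50 with this toy data
print("within tolerance" if gap <= 0.1 else "exceeds tolerance")
```

The open policy question is exactly the one raised above: which test, over which groups, and at what tolerance, for a given system and context.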

To address the lack of qualified and credentialed AI auditors, policymakers can allocate funding to training AI auditors or offer grants to organizations developing training programs, as with the Tech Ethics Lab’s 2023 funding of projects focused on AI audits. Auditing and non-auditing firms can also train their existing auditors or other personnel to conduct AI audits. However, these initiatives depend on creating standards and rules for AI audits and their assessment methods.

Finally, staving off the race to the bottom will require policymakers to set minimum training and credentialing requirements, thereby creating a “floor” for AI auditor competence. Canadian policymakers took a similar approach when developing principles under the Personal Information Protection and Electronic Documents Act (PIPEDA), which set minimum expectations for privacy auditor training and for the criteria or objects an audit should review. At a minimum, policymakers can require that organizations complete assessments without prescribing what those assessments must contain, as in the PIPEDA case above or the EU AI Act’s required conformity assessments of high-risk AI systems. This would ensure assessments are done while allowing industry flexibility over the specific means and ends of those AI assessments.
