Is AI decision-making too much of a leap of faith?

The Institute for Fiscal Studies (IFS) Tax Law Review Committee recently published a paper on the use of artificial intelligence (AI) in automated decision-making (ADM) in tax administration. If you think that sounds a bit niche, I would disagree: it is something we need to be very aware of, because it will affect matters that every practitioner deals with routinely. The report is well worth some of your time. And a declaration of interest: I am a member of the Tax Law Review Committee.

Fact or judgment

HMRC’s powers are broadly split into those which are administrative or mechanical in nature and those which require the exercise of an element of discretion or subjectivity. An example of the first category is the power in section 8 Taxes Management Act 1970 to determine whether a person who has been given a notice to file a personal tax return has done so (a matter of fact). An example of the second is the power in section 9A to open an enquiry into a personal tax return that has been filed (a decision based on judgment). The distinction extends to penalties, where again some are fixed and some require a degree of discretion.

The report defines ADM as “any decision or process where the whole or part of the decision or process is made without human intervention (through technology), irrespective of whether the decision or output is subsequently reviewed by HMRC”. The technology in question is, broadly, either AI or algorithmic/rules-based systems. The report’s author, Kunal Nathwani, believes that while AI has not yet been widely deployed in ADM by HMRC, it is inevitable that it will play a more prominent role in the future. 

Having heard Kunal speak on the subject at the recent Chartered Institute of Taxation (CIOT) residential conference in Cambridge, I think he is right. The attraction of deploying technology to replace human beings will be irresistible, as will the potential that AI holds for identifying non-compliance.

Employing AI in ADM would represent a fundamental shift from HMRC officers being the primary decision-makers to technology becoming the decision-maker. This raises some serious questions.

Current processes

HMRC currently uses conventional algorithmic systems in some of its processes, for example to make penalty determinations under Schedule 55 Finance Act 2009. The technology suits cases where there is no element of discretion (the administrative or mechanical category). There is, however, no published list of the processes for which HMRC uses such technology.

When AI is used to make automated decisions, it will make its own interpretation of the data it is presented with. This places it very much in the second category of decision-making. The AI should therefore be trained on large, diverse, reliable and unbiased sources that represent a wide cross-section of the demographic affected, and safeguards should apply both during the development phase and after deployment.

The report suggests that legal safeguards should be in place to ensure that taxpayers are notified when AI is used for ADM in decisions that have a direct impact on them, and that taxpayers are provided with explanations of the rationale behind those decisions. AI should be transparent and explainable. It is also suggested that HMRC should be bound by guidance delivered to taxpayers by its large language models (LLMs).

Currently, HMRC’s main use of AI is believed to be in compliance risk work: for example, detecting patterns of VAT fraud from data taken from VAT returns. One of HMRC’s most powerful compliance risk tools is Connect, which was first deployed in 2010.

Although HMRC has been reluctant to publish much information about Connect, it is reported to hold some 55bn items of data drawn from website browsing records, email records, social media, flight sales and passenger data, DVLA records, tax returns, Land Registry records, online property rental platforms and the UK Border Agency. With HMRC gaining access to increasing amounts of information from online platforms and through international Automatic Exchange of Information agreements, Connect is likely to become an ever more effective weapon in HMRC’s arsenal.

Safety first 

Any shift to ADM must be very carefully thought through. It must, the IFS report suggests, be the result of a conscious and transparent policy decision by government. Legislation must both affirmatively provide for the use of AI in ADM and specify when its use is impermissible. 

Taxpayers must be provided with appropriate and legally enforceable safeguards. These should, inter alia, specify, at least to some extent, how taxpayer risk levels will be determined following the processing of data in risk management systems such as Connect. The report proposes that the safeguards take the form either of tax-specific legislation or of an HMRC AI charter.

In its latest annual report, HMRC says that it has established an AI assurance process, AI ethics framework and governance to ensure the “safe, effective and responsible” use of AI models. HMRC’s AI Ethics Working Group is responsible for establishing mandatory processes, challenging projects and reporting on progress across HMRC so that “where we use AI in a way that could impact customer outcomes, we always ensure that the result is explainable, that there’s a human in the loop, and that it complies with our data protection, security and AI ethics standards”.

Safe and ethical

It is reassuring to see HMRC’s clear commitment in the annual report to the safe and ethical deployment of AI, a point that was emphasised by the HMRC representative on the AI panel discussion (alongside Nathwani) at the CIOT Cambridge conference.

With the overwhelming majority of tax authorities internationally now deploying, or looking to deploy, AI in tax administration, we need to stay focused on how that deployment happens and to ensure that it is accompanied by robust taxpayer safeguards.

My conclusion as chair of the AI panel session at the conference was that this felt like the boiling frog syndrome: the water might feel lukewarm at the moment, but we cannot wait for it to approach boiling point before seeking safety. One delegate disagreed and said he thought the temperature was already way past lukewarm. On reflection, I think he was right.
