
Artificial Intelligence

Interest in artificial intelligence (AI) has surged since ChatGPT’s public release. ChatGPT responded to my “What is ChatGPT?” query by proclaiming: “It can answer questions, provide explanations, generate creative content, and engage in dialogues, making it a versatile tool for various language-related tasks.” In other words, ChatGPT and similar tools have, in some ways, democratized AI by bringing its capabilities to the masses. This potential has already captured the attention of executive management and boards of directors, and has fueled their imagination for new business models and sources of profit. Technology vendors have also taken note, infusing their offerings and marketing campaigns with AI or, in some cases, rebranding their offerings as leveraging the technology. Organizations face macro issues (significant environmental and societal forces, usually outside the organization’s control) and micro issues (matters the organization can typically manage).

“If things seem under control, you are just not going fast enough” is a quote often attributed to race car legend Mario Andretti (https://tinyurl.com/5y5ztzza). With organizations still challenged by the impact of recent emerging tools and technologies, including cloud computing, blockchain, data analytics, and the Internet of Things, the push to take advantage of AI is causing many of them to lose control. The uncertainty is compounded by the scope of AI (including disagreement as to what exactly constitutes AI) and by what benefits are currently realizable. A recent Harvard Business Review article, “The AI Hype Cycle Is Distracting Companies” (https://tinyurl.com/2ee3epbe), warned about the need to resist “the temptation to ride hype waves and refrain from passively affirming starry-eyed decision makers who appear to be bowing at the altar of an all-capable AI.” The article also discussed how the marketing hype distracted buyers and users of the new technology from what is actually available today and in the near term.

Taking Notice of Evolving Risks

The recent hype has reconfirmed the need for risk managers to help organizations govern their AI initiatives and align investments with appropriate needs and organizational objectives. With related emerging technologies such as robotic process automation (RPA) and predictive analytics capturing the profession’s attention, The CPA Journal published “Meeting the Challenge of AI: What CPAs Need to Know” (https://www.nysscpa.org/190703-pl). The article’s authors reviewed the history of AI, examples of AI in daily life, the technologies involved and their differences [expert systems and machine learning (ML)], AI projects in CPA firms, and recommendations on how the profession can embrace AI.

The accounting profession’s initial reaction was mixed, as early AI applications focused on operations and efficient business process practices rather than pure financial reporting, although numerous efforts were made to enhance fraud detection and assist with decision making. Many smaller organizations could not justify the cost-benefit trade-offs of implementing AI, and their lower transaction volumes further challenged the benefits and practicality of AI’s sophisticated algorithms. When adopting AI or its derivative technologies, governance concerns focused on traditional accounting risks that could be managed using classical IT controls and audit techniques centered on data center (general) and automated processing (application) controls. Accuracy, integrity, and reliability of results were of primary importance to governance advisors. The widespread adoption of RPA, while technically not considered AI by some, provided a glimpse of the future of the finance function and showed how streamlined processes could have an immediate and dramatic impact on organizational efficiency.

Adoption of RPA Calls for Guidance

Most major accounting-related associations and consulting firms leveraged their experiences with RPA to envision how AI would impact finance-related functions. In many respects, RPA introduced what AI could look like to many financial functions. In “Reinventing Finance for a Digital World: The Future of Finance” (https://tinyurl.com/yc4pwsch), the AICPA recognized that “AI, and the underlying algorithms that are accelerating its development, is changing the way organizations engage with their customers,” and that “traditional management approaches are not effective in the zone of complexity. Here, finance professionals need high levels of creativity and innovation. They must embrace technological possibilities to create new modes of operating.” In other words, the governance structures used in the past, including staff education, would need to be adjusted to reflect the higher value demanded by stakeholders now that technology can perform the more mundane tasks of professional-level services. Professional accounting organizations, including the AICPA and CPA Canada (“A CPA’s Introduction to AI: From Algorithms to Deep Learning, What You Need to Know,” https://tinyurl.com/39e9skkm) and the Institute of Chartered Accountants in England and Wales (ICAEW) (“AI and the Future of Accountancy,” https://tinyurl.com/mrxju79k), published white papers to help their members adapt and to provide foundations for needed governance practices.

The ability of automation to replace human thought and actions has raised ethical concerns. Although ethics has always been a critical component of governance programs, a Chartered Professional Accountants of Canada white paper (“Building Ethical AI Solutions: Using the Ethics Funnel and a Trusted Framework,” https://tinyurl.com/2m2wdcka) examined the need to protect the public interest when machines, perhaps representing professionals, make decisions rather than the accountable professionals themselves. Leveraging recommendations from the Trusted AI Framework created by Ernst & Young (https://tinyurl.com/eh982cp9), the publication identified five governance attributes: performance, transparency, explainability, bias, and resiliency. These considerations were critical as society focused on social justice issues. Governance programs also need to consider the impact of AI on social media.

Around the same time, consulting firms provided white papers and guidebooks to help clients identify best practices. Generally, these leveraged existing governance thinking but adopted AI-related terms to fit the emerging environment. Classic concerns included ensuring executive management support, aligning projects with business objectives, cybersecurity threat management, resiliency, data privacy, and regulatory compliance. Much of the practical experience shared was based on applications such as RPA that did not provide exposure to the more advanced AI concerns. The Information Systems Audit and Control Association (ISACA) developed various educational tools and a certificate program, and published one of the first peer-reviewed audit programs specifically related to AI governance.

The Institute of Internal Auditors introduced a framework to provide internal auditors with the needed guidance to audit AI (“AI—Considerations for the Profession of Internal Auditing”) and began revising it in 2023 (“The AI Revolution Part I: Understanding, Adopting, and Adapting to AI,” https://tinyurl.com/y3meprxk). The framework was unique in providing a view of the AI issues most relevant to the governance and audit communities. A key area of focus in Part 2 of the update was data governance, including concerns such as the following: “because generative AI systems are trained on specific information, it’s much easier to introduce not only errors but also bias early on in their development if they are not trained on reliable data.”
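Data-reliability concerns of this kind can be screened mechanically before training begins. Below is a minimal sketch in Python of such a pre-training screen, assuming simple dictionary records; the reliability_report helper, the required-field rule, and the sample invoice data are hypothetical illustrations, not part of the IIA guidance.

```python
# A minimal, hypothetical sketch of pre-training data-reliability checks of
# the kind the IIA guidance alludes to; fields and data are illustrative.
def reliability_report(records, required_fields):
    """Summarize basic reliability problems in a training data set:
    records with missing required fields and exact duplicate records."""
    missing = sum(
        1 for r in records if any(r.get(f) in (None, "") for f in required_fields)
    )
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))  # canonical form for duplicate detection
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return {"records": len(records), "missing_fields": missing, "duplicates": duplicates}

# Hypothetical usage: invoice records feeding a model.
invoices = [
    {"vendor": "Acme", "amount": 120.0},
    {"vendor": "Acme", "amount": 120.0},  # exact duplicate
    {"vendor": "", "amount": 75.5},       # missing vendor
]
print(reliability_report(invoices, required_fields=("vendor", "amount")))
```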

Post-COVID: Rapidly Changing AI, Geopolitical, and Cybersecurity Footprints

Many believe that the onset of COVID brought technological developments and adaptations that significantly changed how organizations and their stakeholders interact. AI, as evidenced by the introduction of generative AI technology such as ChatGPT, is no exception. Although this has opened the door to new business and creative opportunities, it has also significantly increased the opportunities for exploitation. Given the evolution of geopolitical, social, and cybersecurity threats, and the ability to further streamline and integrate business-to-business processes, nations and organizations are increasingly raising concerns about AI’s role in everyday life. More sophisticated fraud and deceptive activities also have a greater probability of success. Public attention to social justice issues likewise heightens the need to ensure that machines do not duplicate human fallacies and misconceptions; the potential loss of privacy, for example, concerns almost all users.

Given the changing landscape, agencies and think tanks reconsidered their published risk guidance and adapted it to the new threats. Representative of these concerns is the International Monetary Fund’s (IMF) “Generative AI in Finance: Risk Considerations” (https://tinyurl.com/4hjs9j7b). Focusing on generative AI developments, it updated a previous IMF publication on the topic and covered the following:

  • Privacy-related data leakages from training data sets.
  • Embedded bias, which can emerge if the data used to train the system are incomplete or unrepresentative, or if prevailing societal prejudices underpin the data (see the sketch after this list).
  • The accuracy of AI models’ output, particularly in a changing environment, along with governance of the development and operation of AI systems to safeguard against unethical use, including exclusionary, biased, and harmful outcomes.
  • The use of synthetic data, which carries the potential to replicate inherent real-world biases and gaps in the generated data sets.
  • The generation of more sophisticated cybersecurity attacks.
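To make the embedded-bias item concrete, the sketch below checks whether each group’s share of a training data set stays within a tolerance of a reference population share. The representation_gaps helper, the group labels, the reference shares, and the 5% tolerance are hypothetical illustrations, not prescriptions from the IMF paper.

```python
# A minimal, hypothetical sketch of a training-data representativeness check.
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference population share by more than the tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical usage: loan applications labeled by region.
training_records = [
    {"region": "urban"}, {"region": "urban"}, {"region": "urban"},
    {"region": "suburban"}, {"region": "rural"},
]
print(representation_gaps(
    training_records,
    group_key="region",
    reference_shares={"urban": 0.40, "suburban": 0.35, "rural": 0.25},
))
```

A check like this only surfaces gaps against a chosen reference; deciding which reference population is appropriate remains a governance judgment, not a coding task.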

Given the complex technological threats that exist, risk managers should also monitor the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). Per NIST, the framework’s purpose “is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems” (https://tinyurl.com/mwfzndb5). The framework provides a detailed walkthrough and covers governance, risk assessment, and cybersecurity considerations.
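One way a risk manager might operationalize the framework is a risk register whose entries are tagged to the NIST AI RMF’s four functions (Govern, Map, Measure, Manage). The sketch below is hypothetical; the AIRiskEntry fields and the sample entry are illustrative choices, not a structure NIST prescribes.

```python
# A minimal sketch of an AI risk-register entry organized around the four
# NIST AI RMF functions. Field names and the sample entry are hypothetical,
# not prescribed by NIST.
from dataclasses import dataclass

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class AIRiskEntry:
    system: str        # the AI system or use case under review
    risk: str          # the risk being tracked
    rmf_function: str  # the NIST AI RMF function the response falls under
    owner: str         # accountable role, consistent with the Govern function
    response: str      # planned mitigation or monitoring activity

    def __post_init__(self):
        # Reject entries that do not map to a recognized RMF function.
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {self.rmf_function}")

# Hypothetical entry for a generative AI chatbot.
entry = AIRiskEntry(
    system="Customer-service chatbot",
    risk="Hallucinated answers presented as fact",
    rmf_function="Measure",
    owner="Chief Risk Officer",
    response="Sample and score responses for factual accuracy monthly",
)
print(entry)
```

Tagging each entry to a single function keeps the register auditable against the framework, even though real risks often touch several functions at once.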

Dramatic and Rapid Changes

In many ways, AI risk management is like other technology governance efforts, except that it will entail more dramatic and rapid changes. Guidance continues to evolve, with some organizations relying extensively on conflicted vendors to guide them. Neutral frameworks, such as the NIST AI RMF, and intergovernmental organizations, such as the IMF, will continue to be the primary sources of independent guidance, while consulting firm publications will remain a practical way to keep up to date. Risk managers will, unavoidably, need to devote increasing effort to adequately manage the risks and opportunities provided by AI.

Joel Lanz, CPA, CISA, CISM, CISSP, CFE, is a visiting assistant professor at SUNY–Old Westbury and provides infosec management and IT audit services through Joel Lanz, CPA, P.C., Jericho, N.Y. He is a member of The CPA Journal Editorial Advisory Board.
