Will the Integration of AI Introduce a New Power Player or a Risky Intruder?
Artificial intelligence is becoming ubiquitous in the enterprise, and its impact on corporate governance and strategic decision-making is increasingly profound. The integration of AI, particularly predictive models and large language models (LLMs), into high-level business operations comes at a time when corporate boards are urging CEOs to develop comprehensive AI strategies and implement systems that facilitate real-time, data-driven decision-making. The pressure is amplified by investors demanding more rigorous and transparent models, particularly in areas like revenue forecasting, where AI can provide strategic insight. While AI has demonstrated the capability to offer strategic recommendations at the C-suite level when provided with relevant data, its integration is not without challenges. The implications of AI-driven recommendations for CEOs, CFOs, and other top executives are far-reaching, potentially reshaping the dynamics within the C-suite.
I elicited insights from Andy Byrne, CEO of Clari, and Rak Garg, Partner at Bain Capital Ventures, to examine how AI is transforming boardroom dynamics, enhancing transparency, and influencing C-suite responsibilities. The shift raises important questions about the balance between AI-powered insights and human judgment in high-level corporate decision-making, and their two perspectives illuminate the future of AI in corporate leadership and its implications for business strategy and operations.
Strong Pressures To Integrate AI
AI has made significant strides in the enterprise, with predictive AI becoming deeply embedded in decision-making processes. The advent of LLMs, and the integration of unstructured human-generated data with predictive AI, is taking these capabilities to new heights. Byrne asserts this shift has changed the dynamics between corporate boards and executive management. He states, “If you look at the interface between boards and executive management, the traditional way of doing things has been to share PDFs every 90 days of backwards-looking performance metrics and hunch-based forecasts. To me, that feels like a fiduciary violation.” He likens this outdated approach to “using a rotary telephone instead of a smartphone, not to mention, increasingly archaic.”
The landscape is evolving rapidly, with investors demanding more rigorous, transparent, and real-time business models and processes. Byrne observes, “Gone are the days of markets focusing on ‘growth at all costs’. The new trend is efficient growth and operational rigor and the correlating value that investors will pay for.”
Modern boards expect comprehensive, real-time data and forward-looking indicators. Byrne explains, “They know that through Application Programming Interfaces (APIs), historical tracking, predictive models, and now generative AI (GenAI) they can get visibility into specific and granular financial metrics – whether that’s data by product line, segment, geo, what have you – to fuel fast decision-making and action.”
Garg agrees that AI is now a strategic imperative for large corporations, explaining that companies view AI not just as a cost-reduction tool but as a way to enhance efficiency with existing resources. He notes, “Partly, larger companies are worried that if they don’t implement an AI strategy, their competitors will, and they’ll get out-competed. And partly, companies are worried that without an AI strategy, they’re leaving money and efficiency on the table.”
As a common starting point, most companies gather knowledge internally to identify a set of near-term use cases that generative AI can impact. As Garg states, “These initial use cases often focus on areas such as developer productivity, customer service and support, and general productivity enhancements like note-taking and meeting follow-ups.”
Employee Adoption Amidst Challenges
Among Bain Capital’s portfolio companies, Garg reports, “… 90%+ use copilots for coding, 85%+ use meeting transcription and follow up tools, and 60%+ are experimenting with generative AI for customer service and support.”
The picture is broader across the industry, however. The Global Adoption Index, a study conducted by Morning Consult on behalf of IBM and released in January 2024, polled 8,500 IT professionals across 20 countries. It found that 42% of IT professionals at large organizations have deployed AI, and an additional 40% are exploring it. Generative AI figures prominently, with 38% of enterprises actively implementing it and 42% exploring its use. Barriers such as limited AI skills and ethical concerns, however, remain significant challenges.
Transparency Is The New Mandate
Executives who must make multibillion-dollar decisions need to trust outcomes from AI systems. Byrne acknowledges that decision-makers need a deep understanding of AI’s inner workings to effectively evaluate its recommendations, explaining,
“Ensuring robust AI governance, security measures, data privacy, and legal compliance is crucial for bridging the trust gap and enabling the scalable deployment of AI within enterprises… AI can’t be a black box. In order to trust AI, executives need total visibility into how AI arrives at its conclusions and recommendations, from the underlying data to the algorithms and logic.”
He notes that in conversations with C-suite executives, there is skepticism about forecasts and projections made in the absence of real-time data. To validate AI outcomes, Byrne suggests that companies need to align AI’s recommendations with their organizational goals and values.
Balancing Innovation with Risk Management and Upskilling
If the implementation of AI within operations is imminent, organizations must establish clear policies and frameworks that address the critical issues of responsibility, governance, and auditing of AI systems and their recommendations.
Garg acknowledges the need for a comprehensive approach to risk mitigation that considers the interests of customers, partners, and employees, stating, “I completely agree. It’s important for companies to understand not just the technology, but also the potential risks and tradeoffs associated with LLMs, and define a mitigation plan for those risks.”
Garg, who has extensive experience in identity and security, outlines several key considerations:
1. Risk assessment: “The engineering organization’s first step is to work with the security and compliance team in mapping out and understanding the risks that are present within the company. If you don’t know your risk, you won’t know how to mitigate it when the time comes.”
2. Robust evaluation: “Invest in robust evaluation, testing, and adversarial testing. LLMs from every major provider have been proven to have a non-zero likelihood of divulging sensitive or unsafe material when prompted the right way. Extend the zero trust principles to AI and believe that every user is malicious. How would you test and eval your AI apps in that world?”
3. Explainability: “Clearly document data sources, model architectures, and processes that are in place to facilitate accountability.”
4. Communication: “Communicate with customers, partners, and stakeholders. Don’t pretend a system is bulletproof if it isn’t. Clear communications of the risks and intended behaviors can go a long way in upholding reputations.”
Garg suggests that this overhaul should encompass not only technological aspects but also organizational culture, emphasizing the importance of upskilling and training employees to work effectively with AI systems. He adds, “LLMs are non-deterministic. That means the same input or action could yield a diverse range of probable outputs. There are now AI engineers and AI product managers who are especially adept at prompting and prompt-tuning these LLMs, getting them to be performant when they are quite slow out of the box, and crafting experiences around them.”
Furthermore, he highlights the need for new roles to address emerging challenges: “At the same time, new risks and governance measures require AI data curators and cleaners, and training experts to make sure nothing biased gets into the model.”
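Garg’s point about non-determinism can be sketched with a toy example. This is a minimal illustration using temperature-based sampling over made-up next-token probabilities (not any vendor’s actual API): the same input yields varied outputs at a normal sampling temperature, and collapses toward the single most likely output as the temperature approaches zero.

```python
import math
import random

def sample_next_token(token_probs, temperature, rng):
    """Sample one token from a toy next-token distribution.

    Dividing log-probabilities by the temperature sharpens (low T)
    or flattens (high T) the distribution before sampling.
    """
    scaled = {t: math.log(p) / temperature for t, p in token_probs.items()}
    peak = max(scaled.values())
    weights = {t: math.exp(s - peak) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r < cumulative:
            return token
    return token  # floating-point edge case: fall back to the last token

# Hypothetical probabilities a model might assign to a recommendation.
probs = {"approve": 0.5, "defer": 0.3, "escalate": 0.2}
rng = random.Random(0)  # seeded only to make the demo reproducible

# Same input, 50 repetitions: multiple distinct outputs at temperature 1.0 ...
varied = {sample_next_token(probs, 1.0, rng) for _ in range(50)}
# ... but effectively one output as the temperature approaches zero.
greedy = {sample_next_token(probs, 0.01, rng) for _ in range(50)}
print(sorted(varied), sorted(greedy))
```

Production systems expose the same dial through sampling parameters (commonly called temperature and top-p), which is one reason evaluation suites run each prompt many times rather than once.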
The Future: The AI-Augmented C-Suite
As AI becomes increasingly integrated into corporate decision-making processes, will AI recommendations be viewed as valuable inputs or final decisions? Byrne and Garg both stress the continued reliance on human judgment, experience, and leadership in evaluating and interpreting AI-generated insights within the broader context of organizational strategy and competitive landscape.
Byrne envisions a future where AI enhances, rather than replaces, executive decision-making, illustrating: “Imagine a company leadership team debating whether to build a new product, or acquire the capabilities it needs. AI can analyze all of the relevant data, the company’s cash balance, its human capital and skill sets, and weigh all of the trade-offs, equipping leadership to make a much more sound decision.”
For executives to interpret and effectively evaluate AI recommendations, Garg emphasizes the importance of developing new competencies. He states, “C-suite executives additionally need to develop critical thinking skills around AI… the types of data that exist within the company, and how to leverage that data for best use.”
He adds that these recommendations must align with the organization’s strategic goals, ensuring they address ethical, legal, and regulatory implications.
Garg concludes with a powerful insight on the synergy between AI and human judgment: “AI has solved the blank-sheet problem, which is that anytime we have a decision to make, we can reasonably get a starter set of suggestions from an AI now. But it’s still up to us to use our judgment, ethics, and values, to turn the suggestions into something we are proud to put in front of employees, customers, and partners.”