6 steps to become an AI-enabled organization

The initial promise of AI was focused on cost-cutting and productivity through incremental automation. Tools like robotic process automation or generative AI for content creation allowed large enterprises to drive efficiency in narrow applications, often stapled onto legacy processes, leading to modest improvements without fundamental change.

Indeed, according to our global survey of nearly 300 participants, 45% of companies with 1,000 employees or more were only experimenting with AI, as of September. Meanwhile, only 15% of organizations were operationalizing and scaling AI across the business.

In contrast, AI-native organizations like Moderna and ServiceNow have experienced bigger wins because they were built as digital-first companies. More importantly, their strategies, culture and infrastructure centered around continuous learning from the outset, allowing them to flex AI to their advantage and avoid the sunk costs of archaic processes or tech.

Organizations that came of age before the digital era, or that grew out of more traditional foundations, must evolve from using AI to squeeze extra capacity out of every role toward an AI-enabled culture that stays on top of technological innovation to get some, though not all, things done better and faster. If they don't, they will rapidly lose market share as their workflows and processes petrify. To level the playing field, these organizations need to move beyond what we call bolt-on solutioning, or applying AI to existing structures and processes, and become AI-enabled, using AI as a process enabler that frees up resources to a degree visible on profit and loss statements.

The AI implementation landscape

While legacy organizations lag behind AI-native ones, the number of organizations researching AI (25%), prohibiting its use for work (4%) or ignoring it (3%) — meaning they've communicated no specific use policy to their workforce — is rapidly declining. The Institute for Corporate Productivity (i4cp) data show the number of survey participants who say their organizations prohibit the use of AI has been halved, and the percentage reporting that their organizations have no AI strategic plan is seven times smaller today than it was two years ago. Many of the organizations that instituted generative AI abstinence two years ago are now doing an about-face, rapidly trying to convince their workforce that the forthcoming AI tools they were taught to fear can be embraced.

And yet, few companies are measuring the impact of gen AI investments on the workforce. Our research shows 65% of organizations either aren't measuring the impact of gen AI adoption on the workforce or don't know if they're measuring it. Organizations that do track impact primarily do so by measuring perceived productivity (51%), tracking adoption rates of AI tools (46%) and monitoring frequency of use (44%).
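The adoption-rate and frequency-of-use metrics mentioned above can be computed from tool-usage logs. The sketch below uses a hypothetical log (employee ID mapped to AI-tool sessions in the period); the data and field names are illustrative assumptions, not from the survey:

```python
# Hypothetical usage log: employee ID -> number of AI-tool sessions this quarter.
usage_log = {"e1": 12, "e2": 0, "e3": 3, "e4": 0, "e5": 25}

headcount = len(usage_log)
adopters = [e for e, n in usage_log.items() if n > 0]

# Adoption rate: share of the workforce that used the tool at all.
adoption_rate = len(adopters) / headcount

# Frequency of use: average sessions among those who adopted.
avg_sessions_per_adopter = sum(usage_log[e] for e in adopters) / len(adopters)

print(f"adoption rate: {adoption_rate:.0%}")                        # 60%
print(f"avg sessions per adopter: {avg_sessions_per_adopter:.1f}")  # 13.3
```

Note that neither figure captures business impact; they are proxies, which is exactly the measurement gap the research describes.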

Despite this lack of impact measurement, what unites AI-native companies is their willingness to continually transform their operations along with their tech stack to suit evolving objectives. Like a modular office, they can rearrange nimbly to suit their business strategy. Rather than scrambling to catch up, they can make timely and effective changes that enhance their performance.

These companies don’t add AI to the margins; rather, they use it to shape core decisions, customer experiences and business growth strategies. Take biotech firm Moderna, for example. It embraced AI across its entire value chain, from drug design to manufacturing, by partnering early with OpenAI and creating GPTs. The company now boasts more than 3,000 of these, which automate tasks ranging from dosage calculations to answering employee HR questions.

ServiceNow, which originally focused on IT service management and basic workflow automation, has transformed into an enterprise AI platform by embedding intelligent automation and AI across multiple business functions. This shift resulted in significant business growth, with expanded addressable markets and high customer retention rates.
Of course, breakthrough innovation is no guarantee of long-term success. Just look at the fates of RadioShack, Friendster or Xerox. Many AI-native companies risk becoming laggards and joining the ranks of legacy organizations struggling under the weight of entrenched systems and outdated cultures.

But our research shows a pragmatic and powerful path for legacy companies lies not in attempting to become AI-native, but in evolving toward AI enablement—a strategic middle ground that balances transformation with operational reality.

A phased approach to AI enablement

Short of starting over, legacy companies need to find a way to stay competitive. As organizations increasingly implement enterprise-wide LLMs such as Microsoft Copilot or Google’s Gemini, many remain stuck in an experimental phase. In our interviews with HR executives, we hear the term “piloting Copilot” frequently. And while employees are using these tools for one-offs to do the same work more rapidly or effectively, best practices aren’t scaling.

Let’s say an individual in the sales department has discovered the best way to use AI to automate a sales process. As a result, he receives higher customer engagement scores than his peers and spends the time saved at the gym. He’s embarrassed to share that he uses AI to do this, so the benefits of his method, better performance and time savings, are limited to him in the form of bigger commissions and better personal fitness.

If he shares his method with his peers, the aggregated improvement in performance begins to realize real revenue gains for the enterprise. Allocate the time saved to creating new value for the organization, and the return on the organization’s AI investment will be larger and grow more rapidly. But sharing would also lift his peers’ performance and eliminate his competitive advantage, and traditional performance management and compensation approaches incentivize him to keep his methods to himself.
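A back-of-the-envelope sketch makes the stakes concrete. All figures here (hours saved, value per redeployed hour, team size) are hypothetical assumptions, not data from the research:

```python
# Hypothetical arithmetic: one rep's private AI shortcut vs. sharing it team-wide.
hours_saved_per_week = 5      # assumed time the method saves one rep
revenue_per_hour = 150.0      # assumed value of a redeployed selling hour
team_size = 20                # assumed number of peers who could adopt it

solo_gain = hours_saved_per_week * revenue_per_hour   # gain kept by one rep
shared_gain = solo_gain * team_size                   # gain if the whole team adopts

print(solo_gain, shared_gain)  # 750.0 15000.0
```

Under these assumptions, hoarding the method leaves twenty times the weekly value unrealized, which is the cost of incentives that reward individual outputs over shared process improvements.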

This is happening in many organizations taking a bolted-on approach to AI. They’re asking employees to apply it within traditional workflows to achieve the same outcomes, caring only about what is produced. They aren’t enabling their workforce to optimize the process; they’re ignoring how the work gets done.

But there are organizations that have successfully become AI-enabled by taking an incremental approach. Fink’s research suggests the following activation framework helps legacy organizations transform:

1. Prioritize functions for AI integration

Rather than deploying AI indiscriminately, organizations should identify the functions and workflows where AI can deliver the greatest business impact. These are often areas with high-volume decision-making, repetitive tasks or scalability constraints. For example, the U.S. Army uses AI to update 300,000 personnel descriptions in a week, a process that, if done solely by humans, would take nearly six years.

To prioritize functions correctly, AI initiatives must align with overarching business objectives, whether that’s accelerating product development, improving customer experience or enhancing operational efficiency. By focusing on strategic outcomes rather than technological novelty, organizations can ensure AI integration drives meaningful value.
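One way to operationalize this prioritization is a simple weighted score per candidate function. The criteria, weights and example functions below are hypothetical illustrations, not a framework from the research:

```python
# Illustrative weighted scoring for ranking functions by AI-integration potential.
# Each criterion is rated 0-10; weights are hypothetical and should reflect
# the organization's actual strategic objectives.

def score_function(volume, repetitiveness, strategic_alignment,
                   weights=(0.3, 0.3, 0.4)):
    """Combine 0-10 criterion ratings into a single priority score."""
    w_vol, w_rep, w_strat = weights
    return volume * w_vol + repetitiveness * w_rep + strategic_alignment * w_strat

# Hypothetical candidate functions: (volume, repetitiveness, strategic alignment).
candidates = {
    "personnel records update": (9, 9, 7),   # high-volume, highly repetitive
    "customer support triage":  (8, 7, 8),
    "executive strategy memos": (2, 3, 9),   # strategic but not repetitive
}

ranked = sorted(candidates.items(),
                key=lambda kv: score_function(*kv[1]),
                reverse=True)
for name, ratings in ranked:
    print(f"{name}: {score_function(*ratings):.1f}")
```

The point of the exercise is less the arithmetic than the discipline: forcing each candidate workflow to be rated against business objectives before any tooling is chosen.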

2. Deconstruct work into skills and tasks

To unlock AI’s full potential, leaders must first understand the nature of work itself. Deconstructing roles into their component tasks and required skills helps identify where automation is feasible and where human value is essential. This approach clarifies opportunities for augmentation—where AI handles low-value or routine tasks and humans focus on judgment, creativity and relationship building.

WPP, one of the world’s largest advertising companies, deconstructed more than 50,000 job titles across its network by classifying and identifying capacity gains made possible by AI tools. Partnering with data scientists and leveraging LLMs, WPP curated a list of approximately 150 AI-related and adjacent technologies and then standardized and streamlined job titles across the company. The result: 50,000 titles became a far more manageable set of 6,000 consolidated titles.
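A title-consolidation pass like WPP's can be sketched in miniature. WPP's actual pipeline used data scientists and LLMs; the rule-based normalization below (lowercasing, stripping seniority modifiers) is a hypothetical stand-in to show the grouping mechanics:

```python
import re
from collections import defaultdict

# Hypothetical seniority modifiers to strip before grouping titles.
SENIORITY = {"senior", "sr", "junior", "jr", "lead", "principal", "associate"}

def canonical(title):
    """Reduce a raw job title to a normalized canonical form."""
    words = re.findall(r"[a-z]+", title.lower())
    return " ".join(w for w in words if w not in SENIORITY)

def consolidate(titles):
    """Group raw titles under their canonical form."""
    grouped = defaultdict(list)
    for t in titles:
        grouped[canonical(t)].append(t)
    return dict(grouped)

raw = ["Senior Copywriter", "Copywriter", "Jr. Copywriter",
       "Lead Data Scientist", "Data Scientist"]
groups = consolidate(raw)
print(len(groups))  # 5 raw titles collapse to 2 canonical ones
```

At enterprise scale, the grouping step is where LLM classification replaces hand-written rules, but the output is the same: a many-to-few mapping from raw titles to a consolidated set.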

3. Prototype and pilot

With prioritized functions and deconstructed tasks in hand, organizations can begin small-scale experimentation. Prototyping and piloting allow teams to test AI tools in controlled environments and gather critical feedback before broader rollout. These pilots should assess technical performance as well as user experience, accountability clarity and error tolerance.

Some companies do this via a hackathon, which brings together technical and non-technical employees to design innovative use cases that can scale, generating excitement and awareness of these new tools.

4. Diagnose gaps

Before scaling, organizations must address the readiness gaps surfaced by pilots. These often fall into three categories: skills, data and governance.

Skills: Despite nearly two-thirds of organizations implementing generative AI in some capacity, i4cp data show only 39% offer training to help their workforce use it.

Data: AI models require clean, labeled and connectable datasets to perform well. AI-native companies often have a knowledge management advantage because their prior digital strategies, such as robotic process automation, require organized, co-located data.

Governance: Decision structures must be clarified. Who is responsible for decisions about AI deployment, data use and risk mitigation? Some companies that have unleashed advanced chatbots on their workforce haven’t required vetting or certification by legal and IT teams before the tools are shared across the organization. In such cases, even the most promising pilots are unlikely to succeed at scale.

5. Align enterprise systems

Sustainable AI integration requires more than localized success stories; it demands alignment across enterprise systems. HR processes such as performance management, compensation and talent development must evolve to reflect AI-augmented roles, shared accountability and new value drivers.

Organizations must rethink how they evaluate contribution, accountability and success. It’s no longer just about outputs; it’s about clarifying how the work gets done and who is responsible for what. When a generative AI tool makes a mistake, is the fault with the data scientist who trained it, the engineer who maintained it or the manager who scoped its application? Performance systems must evolve to address accountability in this new framework and the interplay between human and machine.

On the infrastructure front, IT must support continuous data flows, API integration and platform engineering. And governance bodies—spanning HR, IT, legal and compliance—must work in concert to ensure responsible use, privacy and fairness. Aligning these systems enables AI to move from siloed experiments to enterprise-wide transformation.

6. Scale and iterate

The final phase is not a finish line, but a commitment to continuous evaluation and improvement. Scaling pilots successfully requires more than replication: it demands adaptation based on context, ongoing feedback and evolving use cases, and a workforce dedicated to learning new methods and adhering to best practices and digitally safe behaviors.

To do this, leaders must embed feedback loops, retraining protocols and post-implementation reviews to refine their approaches. The organizations that thrive will be those that continually learn, adapt, and evolve—balancing automation with empathy, precision with judgment and speed with care.

Conclusion

AI isn’t simply another wave of automation. It’s a profound shift in how value is created, captured and distributed within organizations. It’s time for legacy institutions to stop affixing AI to existing processes and blow some things up. Leaders at these firms must identify the workflows most critical to achieving their company’s key priorities, surgically detonate those and, starting from a blank sheet of paper and working with both upstream and downstream teams, engineer the best new way to get not the same, but better, results.
