
What Is Your Next AI Move?

Joe Manok, VP for Advancement, Clark University.

The answer will define whether your organization is overtaken by disruption or leads the way in shaping the intelligent, mission-driven teams of tomorrow.

Most executives fear moving too fast, but the riskier strategy is actually doing nothing.

From finance to education, advances in AI are changing how work gets done and the scale and scope of human effort. In some cases, it is eliminating roles entirely.

The week ChatGPT reached 100 million users in just two months, the president of Clark University, where I lead the advancement team, asked me: “If we look back in five or 10 years, will we believe we built the best version of ourselves to engage our community?” Like many nonprofits, we were still operating with spreadsheet tools designed in the 1980s. AI gave us an opportunity to leapfrog.

We built an AI lab with a corporate partner to explore what was possible given the fast-moving pace of the technology. What we discovered applies not just to university advancement offices but to any leader wrestling with AI’s promise and peril.

The Conflict Between Enthusiasm And Uncertainty

Most senior leaders are energized by what AI promises: cost savings, personalization and efficiency. Studies from organizations ranging from Boston Consulting Group to Procter & Gamble have reported that teams in which humans and AI work together are a force to be reckoned with.

On the ground, many staff see pilots that aren’t coordinated, uncertainty about how AI fits with the mission, privacy risks without clear policies and concerns about AI replacing team members. Some organizations are moving too fast, jeopardizing trust. Others are moving too slowly, risking irrelevance.

A common question we hear is, “I am here now; what can I do next?” So we developed an AI Risk and Readiness Matrix that combines two frameworks: the AI Readiness State and the AI Ethics Maturity Spectrum.

AI Readiness State (AIRS) Framework

This model maps institutional maturity in AI readiness, spanning from a Reactive mode, where AI is ignored and legacy systems dominate the environment, all the way to a Strategic mode, where AI is embedded in culture, planning, budgeting and operations.

The AI Ethics Maturity Spectrum

Equally important is how responsibly AI is adopted. Too often, AI ethics debates are framed as “to use or not to use,” when the reality is more nuanced. Our Ethics Spectrum tracks maturity from Ad Hoc (no review) to Embedded (policies, disclosures and board oversight as routine).

Actionable Insight: The Risk And Readiness Matrix

By combining readiness and ethics, we built a simple tool: the Risk and Readiness Matrix. This maps where institutions are and where they need to go next. The quadrants range from Vulnerable (Low Readiness, Low Ethics) to Future Ready (High Readiness, High Ethics): a stage where AI is embedded strategically, guided by governance and reinforced by transparency.

This helped us clearly identify where we are at any given time and the immediate next steps to advance to the next state, whether that means progressing on AI readiness or on ethics and governance.
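The matrix described above can be sketched as a simple lookup from two scores to a quadrant. This is an illustrative sketch, not the authors' tool: the axes and the Vulnerable and Future Ready quadrants come from the article, while the names and 0-1 scoring for the two off-diagonal quadrants ("Idealistic" and "Aggressive/Risky") are assumptions inferred from the postures discussed later in the piece.

```python
# Hypothetical sketch of the Risk and Readiness Matrix.
# Axes: AI Readiness State (x) and AI Ethics Maturity (y), scored 0-1.
# "Idealistic" and "Aggressive/Risky" are assumed labels for the
# off-diagonal quadrants; the article names only the other two.

def quadrant(readiness: float, ethics: float, threshold: float = 0.5) -> str:
    """Map readiness and ethics scores to a quadrant of the matrix."""
    high_readiness = readiness >= threshold
    high_ethics = ethics >= threshold
    if high_readiness and high_ethics:
        return "Future Ready"        # embedded strategically, governed
    if high_readiness:
        return "Aggressive/Risky"    # moving fast without guardrails
    if high_ethics:
        return "Idealistic"          # strong governance, low capability
    return "Vulnerable"              # low readiness, low ethics

# Example: strong ethics maturity but limited technical readiness.
print(quadrant(readiness=0.3, ethics=0.8))  # Idealistic
```

Plotting these scores quarter over quarter, as the next section suggests, turns the matrix from a one-time diagnosis into a trajectory.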

So What’s Your Next AI Move?

1. Map Your Readiness And Identify Your Blind Spots

Begin by plotting your organization on a readiness-responsibility grid that considers both technical capacity and ethical maturity. Be brutally honest about your technical debt, governance gaps and cultural resistance (this was personally my biggest blind spot). Getting clarity about limitations is what makes progress possible.

2. Make Readiness Mapping An Ongoing Process

I use the tool on a regular basis to plot where we are and identify where we are heading next. I also plot our progress over time, which shows that the journey has had several ups and downs.

3. Build Guardrails Before Tools

Governance can’t be left to chance, especially with fast-moving technologies. We delegated decisions on deploying AI tools to two oversight groups: a group of senior leaders outside the advancement team that approves any tool before we use it, and a board committee that regularly audits our work to ensure it aligns with institutional values.

4. Pilot With Discipline

The word pilot carries a particular weight when it comes to change, especially during tech transformation. Start small with a limited group, and make sure you include AI skeptics and AI enthusiasts alike.

5. Balance Ideals With Action

Leaders often focus only on the risks of deploying AI. In university settings in particular, we tend to default to an idealistic posture. Tech leaders, especially at smaller startups, tend to pivot toward the aggressive, risky quadrant. It is important to balance the two so you move toward the future-ready quadrant in a steady yet fast-moving way.

6. Invest In The Team

In our case, we made a philosophical decision to invest in AI tools to grow our team’s outreach and firepower, not simply to automate and replace team members. As part of that approach, we are now building an AI literacy program that enables the team to continuously learn and upskill to be future-ready.

Overall, whatever you do, it is important to evaluate where you are, and as you evaluate the ethics of using AI, evaluate the ethics of inaction on such a transformative technology.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
