
IBM’s Martin Keen: Communication and Understanding Are Key to Generative AI Adoption

SHRM research reveals that, while most organizations are still in the early stages of AI adoption, generative AI is the most widely adopted artificial intelligence tool. According to SHRM’s August 2024 Current Events Pulse survey, over 1 in 4 HR leaders (28 percent) report their organizations are still learning about AI, while only 1 percent say their organizations have progressed to advanced stages—such as clustering, scaling, or driving AI adoption. Despite navigating the AI learning curve, 28 percent of HR leaders report their organizations have implemented generative AI tools.

While the SHRM survey reveals that more than half of HR leaders (55 percent) play a direct or supporting role in AI decision-making processes, almost two in five (38 percent) report limited or no theoretical understanding of AI. Additionally, four in five workers (80 percent) classify their understanding of AI as only beginner or intermediate. As AI rapidly evolves, substantial risks may arise from skills gaps in AI literacy among employees and leaders, potentially hindering productivity and innovation.

Martin Keen, IBM master inventor and host of IBM Technology’s YouTube channel on AI, says that effective prompt engineering—the ability to craft precise and effective prompts for AI—will be key for organizations to see a return on investment from the technology. “English is the new programming language,” Keen states. “The clearer we can be about what we want as the output, the better output we’ll get.”

Keen explains that precise communication significantly improves the quality of AI outputs by structuring the model’s thought process. It can also reduce hallucinations—incorrect or nonsensical outputs by AI models—which pose critical concerns for organizations, he adds.
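The structured prompting Keen describes can be illustrated with a short sketch. This is a hypothetical helper, not an IBM or vendor API: it only assembles the message list that chat-style models commonly accept, placing custom instructions in a system message and the task plus any grounding context in the user message.

```python
def build_prompt(task, context=None, output_format=None, role=None):
    """Assemble a structured chat prompt: custom instructions go in the
    system message; the task and grounding context go in the user message."""
    system_parts = []
    if role:
        system_parts.append(f"You are {role}.")
    if output_format:
        system_parts.append(f"Respond only in this format: {output_format}.")
    # A guardrail like this is one common way to discourage hallucinated answers.
    system_parts.append(
        "If the answer is not in the provided context, say you do not know."
    )

    user_parts = [task]
    if context:
        user_parts.append(f"Context:\n{context}")

    return [
        {"role": "system", "content": " ".join(system_parts)},
        {"role": "user", "content": "\n\n".join(user_parts)},
    ]

# A vague prompt leaves the model to guess both intent and format:
vague = [{"role": "user", "content": "Tell me about our Q3 numbers"}]

# A structured prompt states the role, the output format, and the context
# the model should draw on (figures here are invented for illustration):
structured = build_prompt(
    task="Summarize revenue trends from the context below.",
    context="Q3 revenue: $4.2M, up 8% QoQ; churn flat at 2.1%.",
    output_format="three bullet points",
    role="a financial analyst writing for HR leaders",
)
```

The point of the sketch is the contrast: the structured version constrains what the model may say and where it should look, which is the mechanism Keen credits for better outputs and fewer hallucinations.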

But hallucinations are “probably never going away” entirely, at least in the near future, Keen says. “We always need to set the context, and we can use things like custom instructions to really define how the AI should go about looking up its information.”

Insufficient AI literacy also poses significant risks for organizations managing sensitive information. Keen addresses the issue of “shadow AI,” where employees utilize unsanctioned AI tools, which can lead to serious data security vulnerabilities.

“If I am an employee of a company using, let’s say ChatGPT, and I’m not supposed to be, I could potentially be sharing documents that are confidential within the company, and now they’re part of the training dataset for the next version of ChatGPT,” Keen warns.

While some leaders might consider blocking AI functions, Keen instead advocates for clear governance models that regulate AI use within organizations to ensure sensitive data is handled appropriately.

Watch Keen’s full comments on Tomorrowist.

This article has been paid for by a third party. The views and opinions expressed are not those of Newsweek and are not an endorsement of the products, services or persons mentioned.
