Salesforce research finds business buyer and consumer trust levels in AI decline despite the generative hype cycle

Trust has long been one of the core tenets of the Salesforce corporate DNA and that’s not changed with the firm’s pivot this year to generative AI. In every presentation or pitch from Salesforce execs, right up to CEO Marc Benioff, the importance of users being able to trust this new tech is hammered home. Check this out from Salesforce’s UKI CEO Zahra Bahrololoumi recently: 

I talk to so many leaders and they’re all so excited. They’re excited. They can see the benefits. They can see the horizon. They can see the future. But guess what? They’re cautious. Very cautious. And why is that? Well, because there is an AI trust gap. Every company wants to embrace [AI]. In fact, for many it is the number one priority. But your customers are not so eager, and that’s because less than half of them trust companies with their data.

Salesforce’s latest State of the Connected Customer report, based on the views of more than 14,000 consumers and business buyers across 25 countries, validates the importance of this, with the data clearly illustrating that almost half of respondents don’t trust companies to use AI ethically, while nearly two-thirds (63%) fear that generative AI will lead to unintended consequences for society.

There is, of course, a wider set of questions around trust. The Salesforce report notes: 

A baseline of trust is necessary for business transactions to take place. However, trust is complex and multifaceted: a customer can, for instance, trust a company’s product quality without trusting its environmental commitment. Many customers trust companies to respect their privacy and be truthful, but only half say they trust companies in a broader sense.

When that ‘half’ is broken down, there’s more confidence among business buyers – 67% say they generally trust companies – compared to consumers, where less than half (47%) can say the same. 

AI’s new front

But the promised AI revolution has opened up a fresh front here, with 68% of respondents saying advances in AI make it even more important to be able to trust companies. But right now, according to Salesforce’s data, nearly three-quarters of customers express concern about the unethical use of AI. 

Breaking that down further, currently less than half of customers (45%) say they ‘mostly trust’ organizations to use the tech ethically, while a third (33%) say they ‘mostly distrust’ organizations to use AI ethically. That’s far from a ringing endorsement whether your glass is normally half full or half empty. 

It is clear however that the generative AI hype cycle has, for better or worse – delete according to personal prejudice – caught the imagination of essentially everyone. From Baby Boomers through GenX and Millennials to GenZ, the main sentiment among respondents to the Salesforce study is curiosity about the tech. 

But then the demographic divides start to show, as Baby Boomers and GenX cite suspicion as their next reaction, whereas for Millennials and GenZ, excitement is the order of the day, with suspicion and anxiety at the bottom of the pile of their reactions. The Salesforce report argues: 

Generative AI’s promises — and risks — are still taking shape in customers’ minds. Curiosity is common across the board, but different sentiments follow depending on age. Millennials and Gen Z — generations that came of age with the Internet — are more excited by generative AI, while older generations are more suspicious. As younger generations gain influence in the market, attitudes towards the technology may warm.

Business vs Consumer

There’s another important divide exposed in the report as well, this time between the attitudes of business buyers and consumers. Nearly three-quarters (73%) of the former reckon that customers are open to the use of AI to improve their experience, but that figure falls to just over half (51%) of consumers. Similarly, 72% of business buyers reckon generative AI will help companies to serve customers better, but less than half (48%) of consumers take the same view. 

Interestingly, despite the hype around generative AI this year, there has been a decline in the percentages compared to 2022, when 82% of business buyers and 65% of consumers were open to the use of AI. In terms of job roles, among those who believe generative AI will help companies better serve their customers, IT professionals are the most enthused (84%), followed by Commerce (79%), Marketing (70%), Service (62%) and Sales (61%). 

So what’s needed to boost trust levels around AI? The simple answer – the human touch. An overwhelming 89% of respondents say it’s important to them to know whether they’re communicating with AI tech or a human being. With only 37% of them trusting AI to be as accurate as a human, 80% state it’s important to have people validating the output of the tech. 

That human validation aspect is cited by 52% of respondents as a factor that can increase customer trust in AI, along with more customer control (49%), third-party ethics reviews (39%) and additional government oversight (36%). But the main thing that would make a difference is greater visibility into how AI is being used by an organization, ranked number one by 57% of respondents. 

My take

In the foreword to the report, Michael Affronti, Salesforce GM, Commerce Cloud, states: 

Generative Artificial Intelligence holds tremendous promise to help get businesses through these challenges. However, the stakes are high for getting it right. As we stand at the doorstep of a completely transformed way of doing business, it’s a crucial time to check in: what do customers want at this moment? How can companies implement AI responsibly, to build trust?

It’s a message that we’ll be hearing a lot of in a couple of weeks at Dreamforce, now branded as the industry’s “largest generative AI show”. In the meantime, Salesforce has issued its own AI Acceptable Use Policy (AI AUP), which outlines functions and areas in which the company will not allow its AI products to be used, including weapons development, adult content, profiling based on protected characteristics, biometric identification, medical or legal advice, or any decisions that may have legal consequences. 

The policy was drawn up under the auspices of Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, who states in a blog posting: 

It’s not enough to deliver the technological capabilities of generative AI, we must prioritize responsible innovation to help guide how this transformative technology can and should be used. Salesforce’s AI AUP will be central to our business strategy moving forward, which is why we took time to consult with our Ethical Use Advisory Council sub-committee, partners, industry leaders, and developers prior to its release. In doing so, we aim to empower responsible innovation and protect the people who trust our products as they are developed.

This is a good start in the evolving debate around generative AI use, and I’d expect to see similar statements of intent coming from other enterprise tech vendors. The trick now, of course, lies in enforcement of the policy, and that’s never quite as straightforward in practice. More to come on this topic from Dreamforce next month. 
