
The State of AI in Minnesota Business

Is artificial intelligence going to take our jobs? With the introduction of advanced chatbots like ChatGPT and Jasper late last year, that’s the question on many people’s minds these days. On Monday, Twin Cities Startup Week organizers convened a pair of high-level panels to sort through the latest round of AI hype.

The answer, of course, is complicated. To hear AI advocates tell it, the technology probably won’t ever fully replace human work, but it may become complementary – and perhaps even necessary amid declining populations in Minnesota and elsewhere.

First things first: Artificial intelligence, as a concept, isn’t really new. The content on our social media feeds, for instance, has for years been determined by AI programs. The very term “artificial intelligence” dates back to at least the 1950s, when Dartmouth College mathematics professor John McCarthy convened a summer workshop on the topic in New Hampshire. Since then, there have been cyclical patterns of “AI summers and AI winters,” said Justin Grammens, founder and CEO of St. Paul-based software firm Lab651.

“This feels a little bit different,” said Grammens, who moderated a Monday morning panel at his company’s headquarters on Vandalia Street. “There are a lot of large companies – Google, OpenAI, Microsoft, and Meta – sort of pushing this harder and harder.”

What makes programs like ChatGPT different is their relative fluency in human language. Unlike AI programs of the past, so-called “large language models” are directly available to the average consumer; they’re not just running in the background. But the extent to which programs like ChatGPT actually “understand” the material they spit out is still debated; essentially, these tools make sophisticated statistical guesses based on vast amounts of written material from the internet. One researcher has described them as “spicy autocorrect.” Even so, that approach makes them remarkably adept at tasks like passing law school exams or generating reams of “SEO-friendly” content.

There’s plenty of fear – and some optimism – about these programs. And as with many things tech- and startup-related, hyperbole is ever-present. “If we keep our shit together, we could save humanity. If we don’t keep our shit together, it was a good time,” joked Cicerone founder Andrew Eklund, riffing on fears of runaway AI loudly trumpeted by the same executives pushing the technology.

To be sure, automation in general has played a pivotal role in our way of life for decades. In earlier years, automated technology replaced or displaced legions of blue-collar workers in factories. Now, the latest AI technology seems poised to replace or displace “knowledge workers” of all kinds, including, yes, journalists and magazine editors. But panelists argued that AI tools will simply augment human work, not replace it outright. At the Lab651 panel, Vogel Venture owner Heather Boschke said the technology could be used by small companies that might not have budgets to hire marketing staff, for instance. “A lot of AI started on the marketing side,” said Boschke, who also serves as an adjunct professor of marketing at Metro State University. “There’s no excuse for not having a marketing strategy anymore. Anyone can do it with the help of AI.”

Indeed, fellow panelist Lori Ryan, founder of AI information hub Lorignite, said she turned to ChatGPT to help her build a website and clean up reams of data for her business.

Still, the quality and accuracy of output from AI programs varies significantly – a point that purveyors of the technology themselves acknowledge. Microsoft’s Bing chatbot includes this disclaimer: “Bing is powered by AI, so surprises and mistakes are possible.” Using the technology without any human intervention is a fraught decision: Consider the recent example of Microsoft’s AI-generated travel guides, which, among other erroneous suggestions, advised travelers to check out the Ottawa Food Bank as a dining option. When asked about the errors, Microsoft attributed them to “human error,” not “unsupervised AI.”

Proceeding with caution

Despite the risks, plenty of executives and businesses are still keen to give new AI tools a try – especially if it might save them money or time. At a Monday afternoon panel in the Lindahl Founders Room on the U of M campus, execs from two Fortune 500 companies shared a few practical and theoretical uses of the technology. Counter to buzzy headlines about AI, they’re taking a more measured approach.

A panel of experts gathered in the Lindahl Founders Room on Sept. 18 to talk AI trends.

Rebeccah Stay, global leader of data science at Cargill, noted that there’s “very little we put out without someone intervening.”

“We’re not replacing people. We’re just giving them more information to decrease uncertainty,” Stay said. “For us, it’s never a question of how many [full-time employees] can we get rid of; it’s more about how can we make decisions better and faster.”

Abby Steele, director of UnitedHealth Group’s artificial intelligence and machine learning responsible use program, said her focus is on “matching AI to the task to which it really needs to be applied.”

“It is very hyped right now,” Steele said. “I think there’s a lot of interest in trying it out in places where more simple, more transparent approaches are just as performant.”

Still, at that same panel, there were some stark differences in opinion on the future of AI. Jeff Aguy, CEO of St. Paul-based economic development firm 2043 SBC, said it’s important to be “intellectually honest” about the impact the technology could have on the workforce. “From my perspective, we have to be honest that AI is going to replace a lot of people’s jobs,” Aguy told attendees. “That’s just the reality of it.”

He pointed to the example of a web designer: Tools like Squarespace can help entrepreneurs with limited technical knowledge build websites on their own. As another example, Aguy brought up McDonald’s restaurants in airports, which are now largely self-service on the front end. He said these are “massive changes we’re already seeing.”

It’s not all necessarily bad, though. Martha Engel, partner at Best & Flanagan LLP, said that AI could make litigation less expensive in the long run. “You won’t have to have dozens of first-, second-, or third-year attorneys looking at documents for hours,” she said. “That will certainly go away.”

Panelists at both sessions agreed that critical thinking skills will become paramount in the age of AI-generated content. “People have to be more critical of the information they receive,” Stay said. “A lot of the AI we were using before was pretty simple and straightforward. Now, with [generative AI], things are getting a lot more complicated, and certainly, could very much be untrue. We’re going to look to people to have a more critical eye.”
