
Letter from the Editor: How AI and automated content power information on OregonLive

We’ve all heard a lot recently about ChatGPT and artificial intelligence, otherwise known as AI. What’s it all about and how do these things relate to journalism?

Recent developments in AI, and the meteoric rise of ChatGPT, have vaulted the technologies into the headlines.

Simply put, artificial intelligence is machines helping solve problems, using reasoning in a way that we have tended to reserve for people. ChatGPT is artificial intelligence that, once given a prompt, generates conversational text that mimics human writing.

If your prompt was, for instance, “write me a love poem for Valentine’s Day,” the system would draw on patterns learned from vast amounts of internet text and spit out what might look like poetry. Quite possibly bad poetry.

Another example is the chat function on a retailer’s website. A customer types in a question, such as “How do I splice or repair a sprinkler system where squirrels have chewed through the dripline?”

The chatbot, using an automated language system, picks out the key information in the question (the words “splice” and “dripline”), draws on datasets it is connected to (how to repair driplines) and proposes an answer: “Please check under fittings.”
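For readers curious what is happening behind that chat window, here is a rough sketch, in Python, of the simplest version of that keyword-matching idea. The keywords and canned answers are invented for illustration and are not taken from any retailer’s actual system.

```python
# A toy, hypothetical sketch of the keyword-matching step a simple support
# chatbot might use. The keywords and canned answers here are invented for
# illustration; a real system would draw on far larger datasets.

ANSWERS = {
    frozenset({"splice", "dripline"}): "Please check under fittings.",
    frozenset({"return", "order"}): "You can start a return from your account page.",
}

def answer(question: str) -> str:
    words = set(question.lower().replace("?", " ").split())
    # Return the canned answer whose keywords all appear in the question.
    for keywords, reply in ANSWERS.items():
        if keywords <= words:
            return reply
    return "Sorry, I don't have an answer for that. Connecting you to a person."

print(answer("How do I splice or repair a sprinkler system where "
             "squirrels have chewed through the dripline?"))
```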

Many companies, including The Oregonian/OregonLive, have been using forms of artificial intelligence for years.

For instance, programmatic advertising makes up the bulk of ads you see on websites such as OregonLive. Programmatic advertising is largely placed and delivered by computer programs.

Let’s say an auto dealer wants to deliver advertising to people who are looking to buy a new car. The request is plugged into a software program, which then sends out bids to websites in an effort to find such an audience. No humans are involved beyond the initial request.
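Here is a simplified sketch of that kind of automated matching. The campaigns, audience labels and bid amounts are all invented for illustration; real ad exchanges are far more elaborate, but the basic decision looks something like this.

```python
# A simplified, hypothetical sketch of programmatic ad matching: given the
# interests attached to a reader, pick the matching campaign with the highest
# bid. All campaigns, labels and dollar amounts are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Campaign:
    advertiser: str
    target_interest: str    # audience the advertiser asked for
    max_bid_dollars: float  # most the advertiser will pay per thousand views

def choose_ad(reader_interests: set, campaigns: list) -> Optional[Campaign]:
    # Keep campaigns whose target matches this reader, then take the highest bid.
    matches = [c for c in campaigns if c.target_interest in reader_interests]
    return max(matches, key=lambda c: c.max_bid_dollars, default=None)

campaigns = [
    Campaign("Local auto dealer", "new-car shoppers", 12.50),
    Campaign("Garden center", "home and garden", 4.00),
]
winner = choose_ad({"new-car shoppers", "local news"}, campaigns)
print(winner.advertiser if winner else "No ad matched")
```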

Machine learning is a type of AI. If you play chess against a computer, and the computer learns from its mistakes and gets better and better, and eventually trounces you every time, that is an example of machine learning.
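Chess programs are far more sophisticated, but the feedback loop at the heart of machine learning can be sketched in a few lines. In this toy, hypothetical example, the program tries two made-up strategies, records which one wins more often, and gradually favors the better one.

```python
# A toy, hypothetical sketch of learning from experience: try moves, remember
# how often each one led to a win, and gradually favor the better move.
# The strategies and win rates are invented; real chess engines are far more complex.

import random

hidden_win_rate = {"aggressive": 0.7, "cautious": 0.4}  # unknown to the learner
wins = {"aggressive": 0, "cautious": 0}
tries = {"aggressive": 1, "cautious": 1}  # start at 1 to avoid dividing by zero

for game in range(1000):
    # Mostly pick the move that has worked best so far, but keep exploring.
    if random.random() < 0.1:
        move = random.choice(list(wins))
    else:
        move = max(wins, key=lambda m: wins[m] / tries[m])
    tries[move] += 1
    if random.random() < hidden_win_rate[move]:
        wins[move] += 1

# After many games the program has "learned" which strategy wins more often.
print({m: round(wins[m] / tries[m], 2) for m in wins})
```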

The Associated Press, a wire service cooperative that provides news content to thousands of news outlets, has been using artificial intelligence and automation for almost a decade. “Our foray into artificial intelligence began in 2014, when our Business News desk began automating stories about corporate earnings,” the AP says on its website. “Prior to using AI, our editors and reporters spent countless resources on coverage that was important but repetitive and, more importantly, distracted from higher-impact journalism.”

In plain language, the AP creates automated text articles from structured sets of data, such as company earnings reports. Similarly, AP creates some game summaries and sports previews using natural language generation, where sets of sports statistics are transformed into text articles.
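In code, the simplest version of that idea is little more than a fill-in-the-blanks template. The sketch below is hypothetical; the field names, company and figures are invented and do not reflect the AP’s actual system.

```python
# A minimal, hypothetical sketch of template-based natural language generation:
# a structured earnings record is slotted into a standard sentence pattern.
# The company and figures are invented; this is not the AP's actual system.

def earnings_brief(record: dict) -> str:
    direction = "rose" if record["eps"] > record["eps_prior_year"] else "fell"
    return (
        f"{record['company']} reported quarterly earnings of "
        f"${record['eps']:.2f} per share, which {direction} from "
        f"${record['eps_prior_year']:.2f} a year earlier, on revenue of "
        f"${record['revenue_millions']:,.0f} million."
    )

print(earnings_brief({
    "company": "Example Corp.",
    "eps": 1.42,
    "eps_prior_year": 1.10,
    "revenue_millions": 870,
}))
```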

Now, automated content has come to OregonLive. Working with a company called United Robots, we are publishing dozens of articles about real estate transactions around Oregon each week. During the test phase, the newsroom carefully reviewed the articles for quality control.

Just as the Associated Press does with sports statistics, United Robots takes structured data on real estate transactions and turns it into brief articles written in a standard news style. As editors in our newsroom review the wording and content, the company’s programmers adjust how the items are written and what they include. For now, we are publishing residential transactions only, and we note that the content is generated by United Robots.

I wrote recently about the potential perils of artificial intelligence, which can rapidly and easily generate what might appear to be factual news articles. Imagine how difficult it might be to sort truth from fiction if the internet is flooded with a thousand times more fake articles generated to misinform.

In the case I wrote about, we don’t know if artificial intelligence or language generation programs created articles that erroneously claimed someone had been exonerated, when in fact he had not.

But we do know people are increasingly using the technology for research and writing, often in quite legitimate ways. And sometimes those same people suffer the consequences of the technology’s shortcomings.

The Associated Press reported that two red-faced lawyers in New York and their firm were fined $5,000 after a judge found they had used ChatGPT to generate parts of the documents they presented to the court. Unfortunately for them, ChatGPT regurgitated non-existent court rulings, complete with fake quotations and made-up citations, which they unwittingly submitted along with their own research.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the judge said, in fining the attorneys and their firm. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The law firm, for its part, denied it had acted in bad faith. “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the firm argued, according to The Associated Press.

The case is a good reminder that AI and ChatGPT are far from flawless. The intervention of human editors, and attorneys, may forever be an essential part of the equation.
