AI can help combat unreliable information

At a historic hearing on May 16 to study a regulatory framework for artificial intelligence, or AI, U.S. Sen. Richard Blumenthal floated a remarkably clear and practical idea for protecting consumers from many of the potential risks posed by technologies like ChatGPT:

“Should we consider independent testing labs to provide scorecards and nutrition labels or the equivalent of nutrition labels?” he asked. “Packaging that indicates to people whether or not the content can be trusted, what the ingredients are?”

Just as transparent nutrition labels have helped encourage healthier food choices, transparency in content labeling can help consumers make better-informed decisions about the content they consume, while also creating a framework for accountability.

Technology has given rise to a proliferation of online platforms that traffic in — depending on your point of view — falsehoods, disinformation and even conspiracy theories. It’s never been easier to access news and information, yet the rapid deployment of AI systems is making it more difficult than ever for consumers to objectively distinguish between what is true and what is not.

Congress doesn’t always move at the pace of technology, a point made in the hearing by both Sen. Blumenthal and his Republican counterpart, ranking member Josh Hawley of Missouri. But what if consumers could be equipped with the tools to evaluate the trustworthiness of information with the scrutiny of an unbiased data scientist or an expert journalist?

Those resources are available now, courtesy of technology itself.

The stakes are high. Bad information aggravates our politics and our culture, threatens health and safety, and, the evidence suggests, is increasingly eroding trust in our democratic system. On the other hand, it has become increasingly clear that efforts to restrict contrarian points of view through censorship measures and content blocking are fueling the growth of alternative news unencumbered by journalistic ethics and standards.

For the past three years, I’ve been part of a team of technology engineers working to develop an AI-powered technology that can effectively combat unreliable information in a way that is transparent and conducive to accountability.

The central idea behind this technology is that reliable rating systems empower consumers by enabling better decision-making. Rating systems exist for everything from creditworthiness to automobiles to movies to wine.

Yet the largest source of information in the world — the internet — is unrated.

This has had — and will continue to have — profound implications for technologies like ChatGPT, which draw from a pool of online content, sometimes reliable, sometimes not.

The quality of online information, its adherence to journalistic principles, and the general reliability of an author and domain source can be objectively measured, and artificial intelligence can perform these calculations virtually instantaneously.

The solution is as simple and as complicated as understanding how news and technology mesh.

For example, quality news stories generally have bylines. Valid headlines are free from exaggeration; they describe the story rather than appeal to emotion, as clickbait does. Opinion stays closely tied to the reported facts. Arguments are substantive rather than personal attacks. And quality news stories are hosted by transparent websites that share their mission, ownership and policies.

News reports that deviate from these standards are of lesser quality and, therefore, are more likely to be vehicles for information that is objectively false.
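To make the idea concrete, here is a minimal, purely illustrative Python sketch of how signals like these could be rolled into a single score. This is not Seekr's actual system; the fields, clickbait cues and weights are hypothetical placeholders, and a real system would learn such signals from data rather than hard-code them.

```python
# Illustrative sketch only: a toy, rule-based article scorecard built on the
# signals described above. Every signal and weight here is a hypothetical
# placeholder, not Seekr's actual method.
from dataclasses import dataclass

CLICKBAIT_CUES = ("you won't believe", "shocking", "destroys", "!!!")

@dataclass
class Article:
    headline: str
    byline: str                 # empty string if the story is unattributed
    body: str
    site_has_about_page: bool   # proxy for mission/ownership transparency

def quality_score(article: Article) -> float:
    """Return a 0.0-1.0 score from simple transparency and style checks."""
    score = 0.0
    # 1. Quality stories generally carry bylines.
    if article.byline.strip():
        score += 0.25
    # 2. Valid headlines describe the story rather than appeal to emotion.
    headline = article.headline.lower()
    if not any(cue in headline for cue in CLICKBAIT_CUES):
        score += 0.25
    # 3. All-caps headlines read as exaggeration.
    if not article.headline.isupper():
        score += 0.25
    # 4. Transparent sites share their mission, ownership and policies.
    if article.site_has_about_page:
        score += 0.25
    return score

if __name__ == "__main__":
    story = Article(
        headline="City council approves new transit budget",
        byline="Jane Doe",
        body="The council voted 7-2 on Tuesday...",
        site_has_about_page=True,
    )
    print(f"quality score: {quality_score(story):.2f}")  # prints 1.00
```

A production rating system would replace these handcrafted checks with trained models for each signal, but the basic shape, many per-signal measurements combined into one transparent label, is what the nutrition-label analogy describes.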

Think of it this way: The internet has become something of an external hard drive for our brains, a concept developed by behavioral psychologists who have studied how the internet has altered the way we think. As a result, the “Google effect” has trained consumers to rely less on their own powers of reason than on the interpretations and conclusions of others, often presented as statements of fact whether true or false.

Our objective is to create an information market in which consumers are better informed. When that exists, it allows for healthy and productive debate. That debate is vital now, as cultural forces leverage technology to further polarize society, entrenching us in our positions regardless of how accurate, or how healthy, those positions may be.

Patrick C. Condo is the founder and CEO of Seekr Technologies Inc.
