Can artificial intelligence be ethical? No, according to a panel of experts. But neither can a car.
“When there’s a vehicle crash, no one says a car was being unethical,” said Bennett Gebken, CTO of Huntridge Labs, who spoke at a Jan. 22 CivicLabs Roundtable Series on defining ethical AI.
As more local governments explore AI's potential to increase efficiency, and as cohesive AI oversight lags behind the rapidly expanding technology, questions about how to use AI ethically — and how to define ethical AI — are on the rise.
“It’s the wild west right now,” said Danielle Mouw, a procurement analyst with the General Services Administration.
But behind the headlines and forecasts AI has spurred, the CivicLabs panelists stressed a key component of the emerging tech: humans.
“The burden of making it ethical is on the human — the curator — in the loop,” said Joel Natividad, co-CEO of datHere. “That we are fair, that we are producing fair data … that we are describing our data holdings in such a way that we can reproduce and trace the answers.”
How can local government leaders use AI ethically? They can start by knowing what they want it to accomplish, said the panelists.
“AI’s not a shiny object, it’s solving a problem,” said Jaime Gracia, director of corporate affairs for The Wolverine Group. Those problems can range from lowering procurement lead times to reviewing documents. “It’s always about streamlining, speed, agility; that’s great, but it has to be done correctly, and more importantly, it has to be defendable.”
That makes it critical to find a vendor that provides a level of basic transparency and demonstrates how its AI works, how problems can be reported and how it tracks performance, according to Gracia.
“There’s a series of protests happening around AI being used, but if I can’t explain to you how that AI was used, how the AI works, then that protest more than likely will get sustained,” Gracia said.
Outlining the goals and identifying the stakeholders can also be key.
As an example, Gebken pointed to a Veterans Affairs project to develop AI that would increase efficiency in processing disability and healthcare claims. If that AI rapidly denied claims, the goal of efficiency would be accomplished — but it would increase the burden on the public.
“If we took the same problem set, trying to increase efficiency, and built an AI solution that helps flag people’s mistakes, helps them get it right the first time, we could accomplish the same efficiency goal, but aligning the AI in a way that increases efficiency without creating a new burden on the public,” Gebken said.
AI is not a machine that’s good or bad, Gebken stressed. “It’s a system that’s designed to be verifiable, accountable and aligned with public interest standards.”
Accessibility — and ensuring AI is free from ableist bias — is an ethical issue the technology is still catching up on, panelists said.
“Every person really is unique, and I think that’s the key,” said Owen Barton, CTO of CivicActions. “Most AI systems work broadly on averages. They’re going to try to come up with the solution that best fits the training data that it’s seen, and most of the training data that it’s seen will not be about people with disabilities.”
Will AI be able to respond to a unique voice or way of typing? It’s unclear. “Because the space is so new, we don’t have those kind of practices to be able to get assurance that these AI systems are accessible,” Barton said.
The solution comes back to having a “human in the loop” — particularly someone who can represent the disabled community, said Mouw.
“Don’t assume that you can figure that out in your own testing — make sure you have that representation and that skillset to train on and test the AI and how it’s actually doing for people who use screenreaders, for people who cannot see or hear,” Mouw said.
Ethics should be embedded in the technology from the start, rather than added to it later, said Mouw.
“Setting up those metrics together, I think, is just going to be a really big part of operationalizing ethics,” she said.
AI is complex, but it can be steered in certain directions, added Ron Jones, CEO of G2X.
“Giving a dog a treat is akin to telling an AI good job,” Jones said. “It mostly wants to please the human, please the user.”
The real challenge is not whether AI is ethical, but whether its designers and users are, panelists said.
“It really comes down to how we design, govern and oversee these tools and the people that are using these tools,” said panel moderator Liz Tupper, senior director of product for CivicActions.
