In an echoey classroom inside the former Johnston Elementary School in Wilkinsburg, six people are seated around a long table, looking for the bad cat.
Instructor Samantha Finkelstein uses this term to explain how large language models, like ChatGPT, work. A “bad cat” is a question or a fact outside the chatbot’s information base. Finding it helps show what the model can and can’t do.
For example, if an AI model was trained using only pictures of black cats, then it won’t recognize an orange cat later on. A model built on language, not math, can’t create a budget.
Finkelstein, the Director of Public Technology Initiatives at the nonprofit Community Forge, is leading this three-part AI literacy and ethics course. The course is part of a partnership with the Carnegie Mellon-led Open Forum for Artificial Intelligence.
More and more people are using artificial intelligence tools at work, some by choice, others under pressure.
Community Forge Executive Director Mike Skirpan, who has a background in tech ethics and teaches at CMU, said this training program got its start after he saw people growing frustrated or getting bad results with AI.
Skirpan was also concerned about the rise in companies that offer to build custom AI agents for businesses and nonprofits.
“They were trash, quite frankly,” Skirpan said.
The people who actually do the work every day can build a much better tool for themselves than any outside company can, Skirpan said; they just need to learn how.
Skirpan started building this three-part course to teach people AI literacy, how to build an agent that can ease or automate some tasks, and how to evaluate whether these agents are useful and reliable.
The course is open to and free for nonprofit and government workers, because Skirpan is worried about disruption from shifting federal policies and funding.
“The ultimate loss here is if human services and our social sector get worse quality over the next 10 years because we brought in AI, which I think is a real risk,” Skirpan said. “Then we all lose and we hurt our most vulnerable people.”
How to choose and use the right tool
The students in the class work in fields from education and the arts to construction. Some of them have been toying with AI tools, but aren’t sure of the best ways to use them.
Evan Varrato is taking the course because he feels a responsibility to understand what tools could help his group’s mission. He’s the head of construction for Rebuilding Together Pittsburgh, which rehabs homes for low-income people.
He said even useful tools, if not put to work correctly, can cause damage.
“If I’m going out to a project site and we’re going to be using a circular saw, I’m definitely going to take some time to teach somebody how to use that tool before I actually let them out in the real world to use that,” Varrato said.
Finkelstein walked the group through the steps of creating an AI agent using a platform from OFAI called DARE. The students are able to upload a user manual — how the tool should function — and documents related to their work; maybe that’s a mission statement or a document with information that they want the agent to pull from.
Next, the students can experiment with different prompts. They can ask the agent to help them brainstorm, or ask it questions, much as they would an adviser.
Because popular chatbots are finding patterns in language, Finkelstein said the way people talk to them matters.
“Explaining what type of conversation you want to be having with them, you can shape the way they engage with you, which has drastic impacts on the quality and relevance of the output you get,” Finkelstein said.
Kristin Kalson, who works in fundraising at Crossroads Foundation, quickly found a “bad cat.” She asked the agent how it could help with her work and it suggested she upload all of the foundation’s donor information.
“It’s antithetical to what I am tasked with doing and preserving people’s confidentiality. And so this is definitely a little bit of a quagmire,” Kalson said.
When prompted further, the agent recommended that Kalson not upload raw data, to protect privacy.
Responsible use of AI is important to the course’s instructors. Part of that is limiting its use to certain tasks — not trusting it with everything.
Using AI strategically, Skirpan said, can help retain human autonomy as much as possible.
