FBI’s AI work includes ‘Shark Tank’-style idea exploration, tip line use case

The FBI’s approach to artificial intelligence ranges from figuring out how bad actors are harnessing the growing technology to adopting its own uses internally, officials said Tuesday, including through a “Shark Tank”-style model aimed at exploring ideas.

Four FBI technology officials who spoke at a GDIT event in Washington detailed the agency’s focus on promoting AI innovations where those tools are merited — such as in its tip line — and on ensuring that any use meets the law enforcement agency’s need for technology it can later defend in court.

In the generative AI space, the pace of change in models and use cases is a concern when the agency’s “work has to be defensible in court,” David Miller, the FBI’s interim chief technology officer, said during the Scoop News Group-produced event. “That means that when we deploy and build something, it has to be sustainable.”

That Shark Tank format, which the agency has noted it’s used previously, allows the FBI to educate its organization about its efforts to explore the technology in a “safe and secure way,” centralize use cases, and get outcomes it can explain to leadership.

Under the model, named after the popular ABC show “Shark Tank,” Miller said the agency has put in place a 90-day constraint to prove a concept. At the end of that window, the agency has “validated learnings” about cost, skill sets it lacks, and any concerns about integrating the technology into the organization.

“By establishing that director’s innovation Shark Tank model, it allows us to have really strategic innovation in doing outcomes,” Miller said. 

Some AI uses are already being deployed at the agency.

Cynthia Kaiser, deputy assistant director of the FBI’s Cyber Division, pointed to the agency’s use of AI to help manage the FBI tip line. That phone number serves as a way for the public to provide information to the agency. While Kaiser said there will always be a person taking down concerns or tips through that line, she also said people can miss things. 

Kaiser said the FBI is using natural language processing models to review the synopses of calls and online tips to see if anything was missed. That AI is trained using the expertise of people who have been taking in tips for years and know what to flag, she said, adding that the technology helps the agency “fill in the cracks.”
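Kaiser did not describe the system’s internals, but the general technique she outlines, a text classifier trained on synopses labeled by experienced tip-line reviewers and used to flag items a person may have missed, can be sketched in a few lines. The example below is a minimal, hypothetical illustration in Python; the training data, labels, and review threshold are all invented for the sketch and do not reflect the FBI’s actual system.

```python
# Minimal, hypothetical sketch of the technique described above: a text
# classifier trained on expert-labeled tip synopses that flags items for
# a second human look. Illustrative only; not the FBI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: synopses reviewed by experienced staff,
# labeled 1 if the tip warranted follow-up and 0 otherwise.
synopses = [
    "caller reported suspicious purchases of chemicals near a school",
    "online tip complaining about a neighbor's loud parties",
    "caller described specific threats against a federal building",
    "caller asked about the status of a tax refund",
]
labels = [1, 0, 1, 0]

# TF-IDF features with logistic regression: a simple, inspectable
# baseline, in keeping with the "can't be a black box" requirement
# quoted later in this article.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(synopses, labels)

# Score new synopses; high scores are routed back to a human reviewer.
# The model supplements the person taking the tip, it does not replace them.
new_tips = ["anonymous caller mentioned plans to damage a power substation"]
for tip, score in zip(new_tips, model.predict_proba(new_tips)[:, 1]):
    status = "flag for human review" if score > 0.5 else "routine"
    print(f"{score:.2f} {status}: {tip}")
```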

According to the Justice Department’s use case inventory for AI, that tool has been used since 2019, and is also used to “screen social media posts directed to the FBI.” It is one of five uses listed for the FBI. Other disclosed uses include translation software and Amazon’s Rekognition tool, which has attracted controversy in the past for its use as a facial recognition tool.

To assess AI uses and whether they’re needed, the officials also said the agency is looking to its AI Ethics Council, which has been around for several years.

Miller, who leads that body, said the council includes membership from across the agency, including the Science and Technology Branch and the Human Resources Branch, as well as the offices for integrity and compliance and for diversity, equity and inclusion. Currently, the council is going through what Miller called “version two,” in which it’s tackling scale and doing more “experimental activities.”

At the time it was created, Miller said, the panel established a number of ethical controls similar to those in the National Institute of Standards and Technology’s Risk Management Framework. But he added that it can’t spend “weeks reviewing a model or reviewing one use case” and has to look at how it can “enable the organization to innovate” while still taking inequities and constraints into account.

Officials also noted that important criteria for the agency’s own use of the technology are transparency and consistency. 

Kathleen Noyes, the FBI’s section chief of Next Generation Technology and Lawful Access, said on Tuesday that one of the agency’s requests for industry is that systems “can’t be a black box.”

“We need some transparency and accountability for knowing when we’re invoking an AI capability and when we’re not,” Noyes said.

She said the FBI started with a risk assessment in which it analyzed its needs and use cases to assist with acquisition and evaluation. “We had to start strategic — I think everyone does,” she said, adding that the first question to answer is “are we already doing this?”

At the same event, Justin Williams, deputy assistant director for the FBI’s Information Management Division, also noted that an important question when the agency uses AI is whether it can explain the interface.

“I personally have used a variety of different AI tools, and I can ask the same question and get very similar but different answers,” Williams said. But, he added, it wouldn’t be good for the FBI if it can’t defend the consistency of the outputs it’s getting. That’s a “big consideration” for the agency as it slowly adopts emerging technologies, Williams said.

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.
