
AI Boot Camps Offer to Help Congress Navigate Hot New Technology

Artificial intelligence experts from Johns Hopkins University meet with about 50 congressional staffers in Washington on Friday to talk about the role of AI in government, national security, and ethics.

Princeton held similar sessions in May and June for Congress and federal agency staffers. And next month in Palo Alto, Calif., Stanford University’s Institute for Human-Centered Artificial Intelligence hosts a three-day immersive “Congressional Boot Camp on AI” for more than two dozen Capitol Hill staffers.

Universities say they’re strategically positioned to help lawmakers understand the power of AI and to set guardrails on its relentless reach because they were there at the beginning: Many leading AI breakthroughs, scientists, and technologists emerged from these academic centers.

The question is whether Congress will act on the lessons it’s learning this summer.

“There’s a concern, much like cybersecurity in the past, that the Hill wants to legislate. But they don’t really know how to legislate on the issue,” said Lori Prater, deputy chief of staff for Rep. Vern Buchanan (R-Fla.), who was among the congressional staffers at Stanford’s inaugural AI boot camp in August of 2022.

While several states and the European Union have produced forceful legislation, Congress has struggled to regulate AI, torn between the need to mitigate serious potential harms from the powerful technology and concerns that doing too much could stifle innovation.

‘Our Record On This Stinks’

Arvind Narayanan, a computer science professor at Princeton who co-writes a blog called AI Snake Oil, notes that while AI is moving very fast, “It takes a while for government to find out what’s going on with technology. Takes a while to regulate. Takes a while for the effects of that regulation to percolate back out into industry and change practices.”

At Stanford, the AI boot camp grew out of technology seminars that started in 2014, inspired by Edward Snowden’s revelations about unauthorized spying by the National Security Agency. Stanford’s original tech camp focused on cybersecurity.

“It revealed for us that there is a huge opportunity for us to consistently educate people,” said Russell Wald, the center’s deputy director.

The boot camp evolved from there, and in August 2022 focused for the first time on the emerging field of artificial intelligence. Three months later OpenAI released ChatGPT, and for many Americans the power of large language models went from science fiction to reality.

“Without guardrails to set the rules of the road, we’re committing ourselves to carrying forward more of the same: extractive, invasive, and often predatory data practices and business models that characterized the past decade of the tech industry,” Amba Kak, co-executive director of the AI Now Institute, which researches the social implications of artificial intelligence and suggests responsible policy, said at a Senate panel hearing this month.

Sen. Mark Warner (D-Va.) said he is well aware of Congress’s record when it comes to similar technology. “We have to show a little humility on this,” Warner has said. “Our record on this stinks.”

A fellow Virginia Democrat, Rep. Don Beyer, responded to the rise of AI by enrolling at George Mason University for a master’s degree in machine learning.

Stanford HAI

Located in the heart of Silicon Valley, Stanford HAI sees itself as a nexus of innovation and education on AI. Erik Brynjolfsson, a senior fellow at HAI, teaches an “AI Awakening” course that features guests like Eric Schmidt, the former CEO of Google, and Mustafa Suleyman, the CEO of Microsoft AI.

Last month, as HAI marked its fifth anniversary with a daylong event at the Palo Alto campus, tech pioneer Marc Andreessen told HAI co-director Fei-Fei Li that there’s “schizophrenia in the discussions in DC”—with demand for guardrails in domestic policies, but a muscular approach when fending off China in the AI arms race.

Andreessen, the co-founder and general partner at Andreessen Horowitz, a venture capital firm, said government absolutely has a role, and noted how it funded supercomputing centers and fostered development of the internet.

Congressional staffers at the Stanford boot camp in 2023. Photo: Christine Baker

Stanford says it has helped European Parliament members understand AI in the lead-up to passage this year of the EU AI Act, the globe’s most comprehensive and far-reaching AI regulation. And it’s held workshops for employees of the US General Services Administration, an agency that helps purchase goods and services for the government.

Stanford’s approach in educating congressional staff is simple, says Wald. “The goal is to teach them what the technology can and cannot do.”

Some at Stanford see a special need to foster greater understanding of technology in Congress, which in 1995 shut down its Office of Technology Assessment as part of Republican House Speaker Newt Gingrich’s small-government “Contract with America.”

“When Congress does not have an office of technical assessment, it is absolutely critical that the individuals who are going to be thinking about potential legislative proposals for this technology have an understanding of the technology,” said Daniel Ho, a law professor at Stanford, who has been part of the camp, teaching how different federal agencies use AI and machine learning.

“I’ve been in many of these conversations where either policymakers propose things that are technically impossible,” he said. “And technical folks propose things that are just flat out illegal.”

Government Test Cases

“How do you evaluate the harm?” said Mihir Kshirsagar, who is part of the Center for Information Technology Policy at Princeton. “How do you correct the bias? And who do you hold accountable?”

Staffers from the Department of Homeland Security and the Federal Trade Commission, and others, met at Princeton’s School of Public and International Affairs in Washington’s Dupont Circle last month to explore those questions. They were discussing a fictitious AI predictive tool using algorithms to decide whether a student should get a loan, the amount they’d receive, and their interest rate based on academic record, test scores, intended major, and other factors.

They discussed how regulators should evaluate and audit the tool, and explored a scenario in which an academic publication finds that it created a high barrier for students from rural school districts and underserved urban school districts.
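
That bias question can be made concrete. As a purely illustrative sketch, not drawn from the Princeton exercise materials and using made-up data, a first-pass audit might compare the tool’s approval rates across applicant groups and flag large gaps:

```python
# Illustrative sketch of a first-pass disparate-impact check an auditor
# might run on a loan-approval model's decisions. Data is hypothetical.
from collections import defaultdict

# Each record: (applicant's school-district type, model's approve/deny decision)
decisions = [
    ("rural", False), ("rural", False), ("rural", True),
    ("urban_underserved", False), ("urban_underserved", True),
    ("suburban", True), ("suburban", True), ("suburban", True),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
baseline = max(rates.values())  # most-favored group's approval rate

for group, rate in sorted(rates.items()):
    # The "four-fifths rule" from US employment law is a common, if rough,
    # screening threshold for disparate impact.
    flag = " <- potential disparate impact" if rate < 0.8 * baseline else ""
    print(f"{group}: approval rate {rate:.0%}{flag}")
```

A real audit would look past raw approval rates to error rates, interest-rate spreads, and proxy variables, but the staffers’ questions begin at exactly this level: measure the disparity, then decide who is accountable for it.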

They also looked at test cases involving a chatbot to help military veterans with benefits, and an AI model with broad skills that could be tailored for specific use by agencies.

“Even though they had been exposed to some of these issues, they didn’t know then what to do next about it,” Kshirsagar said. “And so trying to see and really recognizing the limits of what people know and what they’re doing was a helpful part of the discussion.”

Other universities also have been consulting with Congress. Last month, members of the House Financial Services Committee visited the Massachusetts Institute of Technology for briefings on AI and its risks in the financial sector.

Chevron Shock Waves

The US Supreme Court’s Loper Bright decision last month, which overturned traditional Chevron courtroom deference to agency interpretations of ambiguous laws, could complicate the kind of legislation Congress ultimately writes on AI.

“The decision will require a greater level of specificity and intentionality for all legislation that my colleagues and I draft going forward, which is a good thing,” said Sen. Todd Young (R-Ind.), who is part of the AI caucus in the Senate.

That’s where universities might be especially able to help, said Rama Chellappa, a top artificial intelligence scientist at Hopkins. Academics, he said, could step in to give some core education on key topics relating to AI legislation, ethics, and what the models are capable of doing.

Chellappa, who has spent more than three decades as a researcher and innovator working on AI and machine learning, said Congress needs to understand that there’s no turning back.

“There are no more AI winters,” he said. “AI is going to be here and it’s dominant.”

Johns Hopkins receives funding from Bloomberg Philanthropies, the charitable organization founded by Michael Bloomberg. Bloomberg Law is operated by entities controlled by Michael Bloomberg.
