As advanced machines take on more human tasks, from writing essays and screenplays to driving cars and piloting military drones, Oak Ridge National Laboratory has created a new center that will provide objective research into the threats and opportunities that artificial intelligence poses to national security.
The Center for AI Security Research, or CAISER, is a partnership with the U.S. Air Force and the Department of Homeland Security that will initially focus on four national security areas where ORNL has particular strength, the lab said in a press release Sept. 27:
- Cybersecurity, where AI is used to protect U.S. government and industry data from outside attack
- Biometrics, where AI can recognize faces and fingerprints
- Geospatial intelligence, where AI can quickly analyze images of war zones and climate change
- Nuclear nonproliferation, where AI can detect nuclear weapons and materials
In each area, AI can be used for good or for bad. While it can keep an online system secure using facial recognition, AI is increasingly used to generate copycat images and videos called “deepfakes” that are difficult to distinguish from real content. AI systems can also be vulnerable to attacks and inconsistencies.
“We are at a crossroads. AI tools and AI-based technologies are inherently vulnerable and exploitable, which can lead to unforeseen consequences,” said Edmon Begoli, founding director of CAISER and ORNL’s Advanced Intelligent Systems section head. “We’re defining a new field of AI security research and committing to intensive research and development of mitigating strategies and solutions against emerging AI risks.”
CAISER is the lab’s first dedicated research center to analyze security risks posed by AI, though it extends ORNL’s longtime research project on Artificial Intelligence for Science and National Security.
Here’s how the center will address some of the nation’s most pressing security questions and make ORNL a national center of AI research as the lab celebrates 80 years.
ORNL AI center will partner with military, government to secure US
ORNL will partner with the Air Force Research Laboratory and the Department of Homeland Security on AI research at CAISER, and expects to collaborate with more industry and national security partners. The center will produce research reports and develop methods of testing AI tools and products.
Dmitri Kusnezov, DHS Under Secretary for Science and Technology and a renowned theoretical physicist, said Homeland Security had a “special partnership” with the Department of Energy’s national labs, of which ORNL is the largest.
“I think a lot about the challenges of our current era, as well as those that lie ahead in the uncharted territory of AI technologies and the very real threats that we’re working steadfast to understand and mitigate,” Kusnezov said. “CAISER will play a critical role in helping us understand this future and addressing the looming threats together.”
In the announcement, the lab noted several key vulnerabilities of AI systems. U.S. adversaries can corrupt AI models, for instance, by injecting “poisoned” data that changes a system’s output and distorts the way a machine learns. Research has also shown that the AI systems behind self-driving cars and taxis can be thrown off by something as simple as black tape on a stop sign, the lab said.
The promises of AI are as great as its perils, spanning everything from consumer goods to top-secret government data. CAISER will therefore prepare educational programming for the public, lawmakers and military personnel, aiming to help them judge which AI systems are trustworthy.
“Artificial Intelligence promises to do many wonderful things for nearly every aspect of society,” said Col. Fred Garcia, director of the Air Force Research Laboratory’s Information Directorate. “CAISER gives hope that while the world rushes full force into AI implementation, they can rest assured that vulnerabilities are being studied and that the back door is being guarded.”
CAISER is part of the lab’s National Security Sciences Directorate, and the center’s team currently includes 10 researchers, according to its webpage. Several team members are specialists in machine learning, a type of AI that trains algorithms with data so that machines used in everything from manufacturing to voice assistants like Siri and Alexa get better at their jobs as they work.
Oak Ridge National Laboratory has 80 years of national security expertise
ORNL’s history goes back to 1943, when the Graphite Reactor site, known then as X-10, first demonstrated that plutonium could be produced from uranium in a chain-reacting pile and chemically extracted. That demonstration was critical to the success of the Manhattan Project.
Eighty years later, the lab says it is taking “a logical step” in its history by extending its national security focus into the sometimes frightening rise of AI.
“We’re very proud of the laboratory’s legacy of scientific discovery in nuclear energy, biological sciences, high-performance computing, materials research and artificial intelligence,” said Moe Khaleel, associate laboratory director for National Security Sciences at ORNL. “CAISER will approach the AI challenge in the same way, developing capabilities to scientifically observe, analyze and evaluate AI models in support of national needs.”
Daniel Dassow is a reporting intern focusing on trending and business news. Phone 423-637-0878. Email daniel.dassow@knoxnews.com.
This article originally appeared on Knoxville News Sentinel: Oak Ridge National Lab to study US artificial intelligence risks