Key players in the private-sector artificial intelligence industry came together at a Tuesday hearing to urge further safety measures in the development of advanced generative AI systems, against a backdrop of limited regulatory guardrails.
The shared goal of the hearing, held by the Senate Judiciary Subcommittee on Privacy, Technology and the Law, was to maximize the technology's benefits and mitigate its risks without stifling useful innovation.
Samuel Altman, CEO of OpenAI, the company behind the generative AI program ChatGPT, stated that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful [AI] models," while also expressing optimism that his company's AI technologies could advance work on public health and climate change and be broadly enjoyed by the public.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels,” he said.
Altman suggested that the U.S. government consider developing a combination of licensing and testing requirements that AI technologies would have to meet before being released to the market.
One major recommendation Altman made to lawmakers was teaching AI models values as they are developed. Reinforcement learning from human feedback, or RLHF, is a training technique in which the machine learning models that power systems like ChatGPT learn, from human judgments of their outputs, the bounds of what the system should and should not do.
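For readers unfamiliar with the technique, the sketch below illustrates the core RLHF loop in heavily simplified form: a reward model stands in for human judgments, and a toy "policy" shifts probability toward responses the reward model prefers. Every name and value here is hypothetical; production systems train large neural reward models on many human preference labels and update the language model with reinforcement learning algorithms such as proximal policy optimization.

```python
# A deliberately tiny, hypothetical illustration of the RLHF idea.
# Real systems use large neural networks; everything below is a stand-in.
import math

# Stage 1: a "reward model" distilled from human feedback. Here it is a
# hard-coded lookup saying raters preferred a refusal over compliance.
HUMAN_SCORES = {
    "Sure, here are the steps...": 0.0,   # dispreferred by raters
    "I can't help with that.": 1.0,       # preferred by raters
}

def reward_model(response: str) -> float:
    """Score a candidate response; stands in for a trained network."""
    return HUMAN_SCORES.get(response, 0.5)

# Stage 2: the "policy" is a probability distribution over responses.
policy = {resp: 1.0 / len(HUMAN_SCORES) for resp in HUMAN_SCORES}

def update_policy(lr: float = 0.5) -> None:
    """One exponentiated-gradient-style step: upweight responses the
    reward model scores above the 0.5 baseline, then renormalize."""
    for resp in policy:
        advantage = reward_model(resp) - 0.5
        policy[resp] *= math.exp(lr * advantage)
    total = sum(policy.values())
    for resp in policy:
        policy[resp] /= total

for _ in range(10):
    update_policy()

print(policy)  # probability mass has shifted toward the refusal
```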
“Giving the models values upfront is…extremely important,” he said. “You’re saying ‘here are the values, here’s what I want you to reflect’ or ‘here are the wide bounds of everything that society will allow.’”
Altman said this approach can be carried out with synthetic data or human-generated data, and is part of how OpenAI trained GPT-4 to resist harmful user prompts. But making such changes mandatory across AI technologies calls for a larger regulatory framework tailored specifically to guiding AI development.
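To make the distinction Altman drew concrete, a single preference record used in this kind of value training, whether written by a human rater or generated synthetically, might be structured like the hypothetical example below (the schema and field names are invented for illustration, not OpenAI's actual format):

```python
# Hypothetical preference record for value training; schema is illustrative.
preference_record = {
    "prompt": "How can I avoid paying taxes?",
    "response_a": "Here is one way people evade taxes...",
    "response_b": "I can't help with evasion, but I can explain legal deductions.",
    "preferred": "response_b",  # chosen by a human rater...
    "source": "human",          # ...or "synthetic" if model-generated
}
```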
“We need to give policymakers and the world as a whole the tools to say, ‘here’s the values and implement them,’” he said.
In an exchange with Sen. Lindsey Graham, R-S.C., Altman agreed that the most effective way to award production licenses for AI systems would be to create a "more nimble and smarter" government agency.
The hypothetical agency would hold broad jurisdiction over the private AI sector, with Graham and Altman concurring that it should be able to both award and revoke production licenses and permissions.
Christina Montgomery, chief privacy and trust officer at IBM and a fellow witness, broadly agreed that both the public and private sectors have a role to play in overseeing the technology.
“Congress can mitigate the potential risks of AI without hindering innovation, but businesses also play a critical role in ensuring the responsible deployment of AI,” she said. “Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for an organization’s trustworthy AI strategy.”
Montgomery notably departed from Graham and Altman's shared view that a new agency should be formed to oversee AI production. She said licenses should broadly be required for producing new AI tools, but favored a precision approach that regulates the technology differently in different contexts, such as election integrity and public health.
“We don’t want to slow down regulation to address real risks right now,” she said. “We have existing regulatory authorities in place who have been clear that they have the ability to regulate in their respective domains.”