
Building AI With a Conscience – Sponsor Content


From Capitol Hill to the forefront of AI research, Daniela Amodei’s journey is reshaping the AI industry.

With Daniela Amodei, president and co-founder of Anthropic

How do you create an ethical product in a field where the very definition of ethical is changing by the moment, the legal rules are still being written, and the tech itself is evolving at brain-breaking speeds?

This question motivated siblings Daniela and Dario Amodei to co-found Anthropic, an AI company devoted to safety and research that just so happens to also be building some of the most powerful large language models (LLMs) and enlisting some of the world’s biggest companies as partners.

“I started my career in international development, working on issues like poverty assessment, conflict mitigation, and global health,” Daniela Amodei says. Her diverse experiences ranged from political campaigns on Capitol Hill to leading teams across various sectors at startups like Stripe and OpenAI. It was her co-founder and brother, Dario, with his background in neuroscience and computational biology, who initially exposed her to the field of AI.

The Amodeis and a few of their earliest Anthropic colleagues previously worked at OpenAI, the company behind ChatGPT. But the question “How do you ensure a safe AI future?” motivated them to strike out on their own. In a recent story in The New York Times, writer Kevin Roose reported that Anthropic staff were fearful of the damage future AI could do: “Some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history.”

This is an incredible amount of weight to carry on a day-to-day basis. So how does one create an ethical AI product and ensure that this power is used for good? The answer at Anthropic is to build a safe AI company and, with it, a safe AI. The company is doing that by creating standards that guide its own actions as a business and a constitution that trains its LLM, known as Claude.


As for the business itself, Anthropic is a Public Benefit Corporation, a designation that requires it to prioritize social impact and stakeholder accountability—not just profits. The company has also published a transparent, extensive document outlining its governance structure, including "The Long-Term Benefit Trust," which empowers a panel of five "financially disinterested" experts to oversee and, if necessary, remove members of its executive board. Essentially, Anthropic has built-in guardrails.

“We want the transition to more powerful AI systems to be positive to society and the broader economy. This is why much of our research is focused on exploring ways to better understand the systems we are developing, mitigate risks, and develop AI systems that are steerable, interpretable, and safe,” Amodei says.

This kind of thinking informs how Anthropic builds safety into its AI models. Anthropic employs a training technique that has come to be known as constitutional AI, in which it uses a written constitution, rather than subjective human feedback, to teach values and limits to its models and train them for harmlessness. The result is that, compared with other popular LLMs, Claude is far more reluctant to perform certain tasks. An AI model can't be self-conscious, per se. But Claude's training can give it an almost sheepish voice at times.
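At a high level, the constitutional approach described above works by having a model critique and revise its own drafts against written principles, with the revised outputs becoming training data in place of human preference labels. The sketch below illustrates that critique-and-revision loop in outline only; the `generate` callable, the example principles, and the function names are hypothetical placeholders, not Anthropic's actual constitution or implementation.

```python
from typing import Callable

# Hypothetical stand-in principles; Anthropic's real constitution draws on
# many sources, including the UN Universal Declaration of Human Rights.
CONSTITUTION = [
    "Choose the response that is most helpful, harmless, and honest.",
    "Avoid responses that could assist with dangerous or illegal activity.",
]

def constitutional_revision(
    generate: Callable[[str], str],
    user_prompt: str,
    rounds: int = 1,
) -> str:
    """Sketch of a critique-and-revision loop for constitutional AI.

    `generate` is any function that maps a prompt to model text. The model
    drafts an answer, then critiques and rewrites its own draft against each
    principle; the final revision (not a human preference label) would serve
    as the training signal for a harmlessness-tuned model.
    """
    draft = generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # Ask the model to critique its draft against one principle.
            critique = generate(
                f"Critique this response against the principle "
                f"'{principle}':\n{draft}"
            )
            # Ask the model to rewrite the draft to address the critique.
            draft = generate(
                f"Revise the response given this critique "
                f"'{critique}':\n{draft}"
            )
    return draft
```

Because the feedback comes from the model itself rather than from human raters, this loop can be run over large batches of prompts automatically, which is part of why, as discussed below, this style of training scales more easily than human-feedback pipelines.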

“I don’t have subjective feelings or emotions as an AI system,” Claude said in an interview. “However, I was created by Anthropic to be helpful, harmless, and honest.”

Those three words—helpful, harmless, and honest—appear repeatedly whenever Claude is prompted to the limits of its learned principles. And although Claude declines to speak about its training ("I apologize, but I do not actually have detailed insight into my own training process or 'constitution'"), Anthropic says its constitution is a constantly evolving document that draws from a wide range of sources, including the UN Universal Declaration of Human Rights and Apple's terms of service.

“Fostering a better understanding of the technology will be crucial to ensuring the industry as a whole is developed safely and responsibly,” Amodei says. “This not only applies to the general public, but to policymakers and civil society, too.”

Part of the reason for this constitutional training approach is that AI trained by AI is easier to scale. And scale is also one of Anthropic's stated goals. To test whether the principles of constitutional AI hold up, it is necessary to develop increasingly powerful models—and the primary way that happens is by scaling. But this requires increasing both the number of users whose queries can teach the model and the amount of computational power behind it.

The pursuit of AI at scale raises other ethical questions: There’s the environmental cost of all that computational power; there’s the necessary involvement of one of a small handful of tech companies that even have access to that power; and there’s the potential, as the user base increases, for bad human actors to try to subvert the model’s trained principles and use it for some nefarious purpose.

But these questions are inherent to AI regardless of who is building it, and Anthropic, of course, is just one of many companies creating powerful LLMs.

“External engagement on these issues is central to our work. We think developing AI safely is a much broader project than Anthropic can—or should—tackle alone,” Amodei emphasizes. “Our hope is that by being transparent about the risks we’re seeing, we’ll be able to motivate a much broader effort into exploring potential solutions.”

If only people who don’t care about ethics train AI models, then AI models will be amoral at best. Anthropic’s belief is that we can’t make AI safe in the present unless we develop safe AI. And we can’t make it safe in the future, at the frontier of technology, unless we reach that frontier ourselves.

