Jeffrey Saviano, who leads the Emerging Technology Strategy and Governance Practice at EY Consulting, speaking at Government DX.
Photos: Taylor Mickal Photography
Two millennia ago, Hippocrates devised an oath to bind practitioners of the emerging discipline of medicine. Now a modern equivalent is required to address the equally transformational arrival of AI, top digital leaders heard at a Washington DC roundtable.
“I think we need a Hippocratic Oath for AI: ‘First, do no harm’,” said Jeffrey Saviano. “We’re missing a layer of fundamental human rights, of personal privacy; some of this is codified, but some of it isn’t. We need new, ethics-based frameworks to guide AI decision-making.”
Saviano is a leading US expert on the ethics of AI, with teaching or research roles at Harvard, MIT and Boston University School of Law, and leads the Emerging Technology Strategy and Governance Practice at Government Service Delivery knowledge partner EY Consulting.
As modern medicine first emerged in ancient Greece, more than 2000 years ago, the philosopher and doctor Hippocrates devised a vow to bind its practitioners within an ethical code. The development of artificial intelligence presents a fresh set of moral challenges, and Saviano’s audience – around 40 senior digital leaders from US departments and overseas governments – were the successors to Hippocrates’ medical peers, practising an emerging discipline with huge significance for the wellbeing of future generations.
How should public sector IT chiefs handle this revolutionary new technology, and how should governments regulate its use in the private sector? At a roundtable debate held at Government DX (now renamed Government Service Delivery) in Washington DC, the group looked for answers to these pressing questions.
A pledge for practitioners
In Saviano’s view, the Hippocratic Oath model – which puts the responsibility for ethics on practitioners as well as government – is a sensible one. “If we’re waiting for a uniform, global heavy hand of government regulation of emerging technology, history tells us that day will probably never come,” he said. “There’s a technology governance gap in the world, and we need a greater contribution from the business community.”
Policymakers tend to focus on AI development at a handful of big tech firms, he argued, but “we have to look at the breadth of organisations developing and launching AI systems”, expecting every one of them to “step up and take some of that burden of responsibility”. In this nascent but fast-expanding industry, Saviano added, fewer than 5,000 corporate board directors in Fortune 500 enterprises “have oversight responsibilities impacting a good chunk of the world’s AI development. If you could educate them for just a day about the important benefits and burdens of this powerful technology, then you could change the world.”
Other digital leaders questioned Saviano’s approach. “I’m not sure there’s a lot, in the last two decades of big tech and social media firms, that suggests that tech businesses are going to regulate themselves for a better world unless they need to,” commented one senior overseas leader. Another made the point that “social media companies very actively use data to promote harmful engagements online: they prioritise things that cause conflict and distress on purpose. It feels like we’re just hoping that won’t happen with AI.” The technology is already being used to create “fake pornography” as a weapon to abuse female politicians, said a third, noting that: “Ninety percent of the women parliamentarians in Africa have exited social media because it’s so violent, with these attacks.”
Read more in this series: A problem shared: how governments are tackling cyber threats
The role of regulation
“There’s certainly a role for regulation,” agreed Saviano, praising the risk-based approach adopted in the EU’s AI Act. The EU’s General Data Protection Regulation (GDPR) rules have been influential around the world, he said, and “there’s a possibility that the EU AI Act could have that same global trajectory and impact. I think it’s also a question of enforcement.” It will become clear over the next few years, he added, whether the Act has sufficient reach and bite to address the challenge – “but if it achieves half of what GDPR did, in terms of uniform, modern, citizen-centric approach to regulation, then we’d be in a better place”.
Saviano (left) presented a new, pyramid-shaped applied AI ethics framework, developed by his Harvard research team, setting out four levels of ethical AI actions for enterprises. The bottom layer represents actions to comply with non-AI laws and regulations that are nonetheless relevant to AI systems, such as GDPR. The second contains compliance with the “thin layer of AI regulation that exists in the world today”. The third includes corporate acts or initiatives that are not only good for business but also benefit society, such as removing bias and ensuring transparency in the use of AI. Finally, the pinnacle of the pyramid represents actions where “there may not be a return for the enterprise, but there’s a return for the world”.
The application of these frameworks should be overseen by strong governance systems, featuring AI experts and senior organisational leaders. In the US Department of State, said its chief data scientist and responsible AI official Dr Giorleny Altamirano Rayo, an AI steering committee reports to an enterprise data and artificial intelligence council, which itself feeds into an enterprise governance board.
AI in action
This architecture is required to avoid some of the risks that attend AI, said Rayo, enabling the State Department to safely explore the technologies’ capabilities. It is, for example, using AI to examine government records before their release under rules that automatically declassify documents after 25 years. All of this material must be reviewed and approved before publication. “Back in the day, the process was manual: you had a whole bunch of people reading emails and documents day in, day out,” she said. “We grabbed an open source model, and trained it to tag data as ‘clearly declassifiable’, ‘not sure’ or ‘classifiable’.”
The model can now “classify correctly 98% of the time, which saves human reviewers 60% of their time – so there’s a 60% efficiency gain”, Rayo added. “Using AI in a smart way, with humans in the loop, allows us to get at burdensome tasks that we have to undertake.”
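Rayo did not name the model or framework involved, but the pattern she describes – a model that sorts documents into three bands, with humans handling the uncertain middle – can be sketched in a few lines of Python. The model choice, labels and routing below are illustrative assumptions, not the State Department’s actual pipeline:

```python
# Illustrative sketch only: a three-band triage classifier for
# declassification review, keeping humans in the loop for the
# uncertain middle band. Model and labels are assumptions.
from transformers import pipeline

LABELS = ["clearly declassifiable", "not sure", "classifiable"]

# Any open-source zero-shot model serves for the sketch.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def triage(document_text: str) -> str:
    """Tag a document with the highest-scoring review label."""
    result = classifier(document_text, candidate_labels=LABELS)
    return result["labels"][0]  # labels are sorted by score

# Documents tagged "not sure" would be routed to human reviewers;
# confidently handling the clear-cut bands is what yields the
# reported time savings.
```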
Chang Sau Sheong, deputy chief executive of the Government Technology Agency of Singapore – one of the world’s most advanced governments in deploying civilian AI – highlighted some of the other ways in which these technologies can improve services and operational efficiency. Singapore is, for example, using AI to find the information required to answer parliamentary questions. Previously, “on average we spent 10 hours answering each question”, he said. “With this, you can do it in 10 minutes.” Citizens too are having their queries answered by AI: in sensitive areas, Chang explained, the system averts the risk of hallucination by drawing on “pre-packaged, human-verified packets of data” rather than generating answers from scratch.
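Chang did not describe the implementation, but the approach he outlines – returning only pre-verified content, and declining rather than generating when nothing matches – is simple to sketch. The packet store, matching logic and threshold below are invented for illustration; a production system would use proper semantic search over a far larger corpus:

```python
# Illustrative sketch: answer citizen queries only from human-verified
# "packets" of data, declining when no packet matches well enough,
# so the system never free-generates (and so cannot hallucinate).
from difflib import SequenceMatcher

# Each packet is a human-verified answer keyed by the query it covers.
VERIFIED_PACKETS = {
    "how do i renew my passport": "Verified guidance on passport renewal...",
    "how do i register a business": "Verified guidance on registration...",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the best-matching verified packet, or decline."""
    best_topic, best_score = None, 0.0
    for topic in VERIFIED_PACKETS:
        score = SequenceMatcher(None, query.lower(), topic).ratio()
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None or best_score < threshold:
        # Declining is safer than risking a fabricated answer.
        return "Sorry, I can't answer that. Please contact an officer."
    return VERIFIED_PACKETS[best_topic]
```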
The country has also developed advanced chatbots, turning some into brand new public services: one, Chang (left) said, provides personalised career advice to people considering a change of profession. And his agency has trained AI systems to tackle the phishing sites that seek to defraud citizens. “Singapore is quite badly targeted by scams,” he said. “We look for sites that are potentially problematic, and flag them up for people to look at.”
Read more in this series: The wrong kind of inheritance: how to replace legacy technology in government
From GenAI to Gen Z
Asked how the government has encouraged and supported Singaporean officials to make use of these tools, Chang replied that “there’s a lot of work in educating public officers. We hold bootcamps and workshops, we do competitions, and we do general education to explain to everyone what AI is – getting them comfortable with the idea.”
Singapore has also built a large and highly skilled workforce of AI specialists within Chang’s agency, creating capabilities that are the envy of most other governments. “We need technical product managers, software engineers, data scientists,” said one US departmental CIO. “AI-enabling talent is critical to being able to move any of this forward.”
Recruiting for and developing these skills is a task for organisations’ HR directors, commented Saviano, noting that they should help “create a rich AI environment to attract ‘Gen Z’ AI specialists”. When organisations develop a reputation for standing at the cutting edge of AI deployment, he added, talent flows in: “You’ve got to find a way to attract hearts and minds.”
The other key to encouraging the adoption of AI by the workforce, said one digital leader, is to give people accessible, effective AI tools. “If they’re useful to them, they get hooked and move on to other tools,” they commented. Central digital units can provide tools for use across the civil service – and this, commented one US departmental CIO, can also help address the problem of skills shortages. “What are the applications that aren’t so specific, that we can work on cross-government?”, they said. “I fear we’re not going to have the resources to hire everybody we need across the agencies yet.”
The power of sharing
Such tools can be shared internationally as well as intra-governmentally, commented Saviano. He cited Estonia’s X-Road data exchange platform and the global DHIS2 health information management system as examples of “digital public goods that are being used by multiple nations, and achieving multiple benefits. Imagine if there was an AI marketplace that we could draw upon: a global digital public good marketplace as a source of information, data, technology systems.”
Meanwhile, said Chang, good progress can be made by providing training, tools and support, enabling people to experiment safely with these emerging technologies – gaining confidence and finding their way around obstacles. “It’s the start of this AI age,” he said. “We want to go for the low-hanging fruit and solve the problems, so that we can take the next step along the road.”
In making that journey, concluded Saviano, “there is an important role for forums like this one, where we have the private sector and public sector coming together”. Involving senior leaders from government departments and central digital teams, the public and private sectors, and countries around the world, Government Service Delivery’s discussions included a wide range of contrasting perspectives and experiences. “There’s a long line of research that shows that if you want to innovate, the best innovation comes from the most diverse teams,” said Saviano. “Days like today are an important step.”
The invitation-only Meeting at Government Service Delivery is a private event, providing a safe space at which civil service leaders can debate the challenges they face in common. We publish these reports to share some of their thinking with our readers: note that, to ensure that participants feel able to speak freely at the meeting, we give all those quoted the right to review their comments before publication.
The 2024 meeting will be covered in four reports, one on each of the four daytime sessions:
– Seamless by design: the barriers to overhauling legacy technology in government – and how they can be overcome
– The tip of the arrow: how cybersecurity can help drive government transformation
– AI in government – how, where and why?
– What you need when you need it: the power of user-centred design
For information on the 2025 Government Service Delivery Conference and Meeting, which will be held on May 13-14, visit our dedicated website.
Sign up: The Global Government Forum newsletter provides the latest news, interviews and features on AI, data, workforce, and sustainability in government