The framework addresses ‘hallucinations’ and the risk of bias in generative AI outputs
[SINGAPORE] Lawyers using generative artificial intelligence (AI) in their work remain ultimately accountable for all work produced as part of their professional duties to clients, the Ministry of Law (MinLaw) said in a new guide published on Friday (Mar 6).
While AI offers powerful capabilities to assist legal work, the technology comes with inherent limitations, and professional responsibility must remain with lawyers who then apply their expertise to guide and validate AI-generated outputs, said the guide.
“The use of gen AI tools does not delegate or diminish these obligations.”
The “Guide for Using Generative AI in the Legal Sector”, launched at a MinLaw event at the Raffles City Convention Centre on Friday, is the first such framework setting out ethical AI use in the legal profession. It covers anyone handling legal work in Singapore, including lawyers in private practice, in-house counsel, paralegals and law students.
The guide sets out three core principles which, while not legislated, should be heeded by legal professionals when using AI in their work: professional ethics, confidentiality and transparency.
It also lays out a five-step implementation framework for law practices and legal teams, from developing an AI governance policy and assessing workflow needs, to tool evaluation and staff training.
Calling AI the “biggest disruptive force to the legal profession”, Minister for Law Edwin Tong said the government will go “all out” to support firms in adopting this technology.
“We are prepared to invest alongside you as you make the change, and we will put meaningful support on the table to support the legal industry to make this change,” he said.
In drafting the guide, the government chose not to enshrine it in legislation at this stage, so as not to stifle innovation, said Tong: “I don’t think it is the right time, because we are far from the end of the journey when it comes to the evolution of technology.”
There will also be targeted support for smaller firms, which lack the economies of scale of larger players when adopting AI.
The ministry’s Legal Innovation and Future-Readiness Transformation initiative will drive these efforts, by helping firms to analyse their needs and understand which product best suits them.
“I believe that AI will not replace the human lawyer, at least not in the foreseeable future, but the human who adopts and uses AI better will replace the human who does not,” said Tong.
Hallucination and bias
A key concern tackled by the guide is the risk of AI hallucinations – outputs which are incorrect or fictitious.
While such errors cannot be fully eliminated, their likelihood can be reduced by, for instance, feeding the AI specific reference documents to anchor its responses, rather than letting it draw freely from its training data.
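The grounding approach the guide describes can be sketched in code. This is a minimal illustration, not any tool's actual API: the function simply assembles a prompt that instructs the model to answer only from supplied reference documents, and the statutory excerpt is included purely as sample input.

```python
def build_grounded_prompt(question: str, reference_docs: list[str]) -> str:
    """Assemble a prompt that anchors the model to supplied reference
    documents, reducing (not eliminating) the risk of hallucinated
    citations or invented facts."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(reference_docs)
    )
    return (
        "Answer the question using ONLY the reference documents below. "
        "If the answer is not found in them, say so explicitly.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What is the limitation period for contract claims?",
    ["Limitation Act 1959, s 6: actions founded on a contract may not be "
     "brought after six years from the date the cause of action accrued."],
)
print(prompt)
```

The resulting prompt would then be sent to whichever AI tool the firm has approved; the explicit "say so" instruction gives the model an escape route instead of pressuring it to invent an answer.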
The guide also warns of bias risks stemming from training data that may reflect historical prejudices, unrepresentative samples or algorithmic design choices that could skew legal reasoning.
Legal professionals are thus advised to test AI outputs across different case types and client groups, and to ask the AI to explain the reasoning behind its recommendations.
The guide also recommends matching the level of human oversight to the stakes involved.
For instance, for high-risk work such as court submissions and legal advice, a lawyer must review and sign off on AI-generated work before it is used.
For more routine work, such as client updates and meeting notes, sampling checks may be sufficient.
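The tiered-oversight idea can be expressed as a simple lookup, defaulting to the strictest tier when a task is unclassified. The task names below are placeholders a firm would define in its own policy, not categories taken from the guide.

```python
# Illustrative risk tiers paraphrasing the guide's examples.
HIGH_RISK = {"court_submission", "legal_advice"}
ROUTINE = {"client_update", "meeting_notes"}

def required_oversight(task_type: str) -> str:
    """Map a task to the human-review level needed before AI output is used."""
    if task_type in HIGH_RISK:
        return "full lawyer review and sign-off"
    if task_type in ROUTINE:
        return "sampling checks"
    # Fail safe: anything unclassified gets the strictest treatment.
    return "default to full review when unclassified"

print(required_oversight("court_submission"))  # full lawyer review and sign-off
print(required_oversight("meeting_notes"))     # sampling checks
```

The fail-safe branch is the important design choice: new or ambiguous task types should inherit the high-risk treatment until someone explicitly classifies them.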
Confidentiality and transparency
A lawyer’s duty to protect client information extends to gen AI tools, the guide said. It advised firms to prefer enterprise-level tools over free public platforms for sensitive data, and to secure vendor commitments prohibiting the use of client data for AI model training.
Where free tools are used, lawyers should anonymise data and double-check that data-retention settings are disabled.
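Anonymisation of this kind is often a pattern-matching pass before text leaves the firm. The sketch below is a hypothetical helper that masks Singapore NRIC/FIN-style identifiers and email addresses; real anonymisation would need far broader coverage (names, addresses, case numbers) and careful review.

```python
import re

# Singapore NRIC/FIN numbers follow a prefix letter, seven digits and a
# checksum letter; the prefix set here is illustrative and may not cover
# every series in use.
NRIC_RE = re.compile(r"\b[STFGM]\d{7}[A-Z]\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Mask identifiers before text is pasted into a public AI tool."""
    text = NRIC_RE.sub("[NRIC]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    return text

print(redact("Client S1234567A can be reached at jane.tan@example.com."))
# Client [NRIC] can be reached at [EMAIL].
```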
On transparency, lawyers should disclose their use of gen AI to their clients when the technology is used substantially to come up with a work product; they should also do so when it affects the cost of legal services, or when a tool’s data-handling practices could conflict with client preferences.
Clients should also be offered the option to opt out of AI use.
Implementing AI
The guide also offers step-by-step recommendations on implementing AI use.
Firms should first establish a clear governance policy setting out the tools approved for use, the types of data that may be fed into them, and who is responsible for oversight.
Firms should also put in place protocols for communicating their AI practices to clients, and the procedures for reporting errors or data breaches.
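A governance policy along these lines can be made machine-checkable, so a proposed tool-and-data combination is validated before use. The tool names, data classes and contact address below are illustrative placeholders, not examples from the guide.

```python
# Hypothetical machine-readable governance policy.
POLICY = {
    "approved_tools": {"enterprise_copilot", "lawnet_ai"},
    "permitted_data": {
        "enterprise_copilot": {"public", "internal"},
        "lawnet_ai": {"public", "internal", "client_confidential"},
    },
    "oversight_owner": "ai-governance@firm.example",
}

def is_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed tool/data combination against the policy."""
    return (tool in POLICY["approved_tools"]
            and data_class in POLICY["permitted_data"].get(tool, set()))

print(is_permitted("enterprise_copilot", "client_confidential"))  # False
print(is_permitted("lawnet_ai", "client_confidential"))           # True
```

Encoding the policy as data rather than prose makes it easy to audit, and to update when a new tool is approved or a data class is reclassified.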
Next, firms should assess where gen AI can add the most value to existing workflows, weighing the risk and feasibility of each use case.
The third step is to select the right tool. The guide recommends that firms conduct thorough due diligence on vendors, examining their data security measures, how they handle client data, and whether their tools have been tested for accuracy and reliability in legal contexts.
Firms should start with basic AI tools, such as Microsoft Copilot and LawNet AI, before expanding to commercial off-the-shelf legal AI products. The most advanced firms may eventually develop customised in-house AI solutions for their specific needs.
Once a tool is chosen, firms should roll it out in stages, starting with a pilot group, gathering feedback, and refining prompts and workflows before wider deployment.
Finally, firms should regularly review whether gen AI tools are still meeting their needs, staying current with new developments and updating internal policies accordingly.
