
Generative AI is reshaping the legal industry. Its deployment has changed workflows by streamlining research, drafting, and due diligence, while raising important questions around ethics, bias, and the future of legal education. “Students need to be trained to take up jobs at firms that are keeping up with technological advancements. They need to be taught practical use of AI tools, like training in legal research, drafting, and knowledge management using AI platforms. They need to sharpen their critical assessment skills to verify AI-generated content, check legal citations, and ensure jurisdictional relevance”, says Naval Satarawala Chopra, Partner, Shardul Amarchand Mangaldas & Co. (SAM).
Mr. Chopra says that AI will give lawyers space to focus on the complex, analytical, and strategic aspects of their work. “Some routine roles may evolve or diminish, but new opportunities will emerge for those who can harness technology responsibly and creatively. The future legal professional will need to be both technologically adept and deeply grounded in the principles of law and ethics”, he says.
AI in curriculum
To introduce AI into the law curriculum, institutes need to provide students with practical, hands-on exposure to AI tools. “Students should be taught not only how to use such technology but also how to critically assess its outputs, verify legal references, and understand the limitations and risks associated with AI-generated content. Embedding these skills early will ensure that graduates are well-prepared for the evolving demands of modern legal practice”, says Mr. Chopra.
Mr. Chopra also highlights other important components that law schools should cover, such as data security and privacy. He says students should be taught the ethical and legal obligations around client data and confidentiality when using AI. They also need to explore AI ethics and bias, including algorithmic bias, fairness, and the responsible use of technology.
Helping students adapt to technology while preserving the core principles of legal education requires a balanced approach. Using real-world scenarios can show students both the potential and the limitations of AI in legal practice. “Students must be encouraged to embrace innovation, but also to maintain the analytical rigour, ethical standards, and critical thinking that define the legal profession. Students should understand that technology is taught as a tool to enhance, not replace, foundational legal skills”, says Mr. Chopra.
AI deployment at law firms
Discussing the benefits of AI deployment, Mr. Chopra says it can streamline routine tasks such as contract drafting, document review, and legal research, allowing lawyers to focus on higher-value, strategic work. AI also helps maintain consistency across documents and reduces human error in repetitive tasks.
SAM recently announced a firmwide partnership with Harvey, a generative AI platform designed specifically for legal professionals. After a pilot project, the firm has implemented Harvey’s full suite of AI functionalities across all seven of its offices.
Harvey is a generative AI platform valued at over USD 3 billion, and its shareholders include OpenAI, Sequoia, and LexisNexis. It is already used globally in the legal industry by firms including A&O Shearman, Cravath, Mori Hamada, Gleiss Lutz, and Clifford Chance, as well as by companies such as KKR and PwC.
At SAM, the integration of Harvey’s large language model technology into daily practice aims to accelerate contract drafting and review, streamline due diligence processes, enhance legal research and predictive analysis, and deliver sharper, data-driven insights for both contentious and advisory matters.
To ensure the responsible and effective use of Harvey, SAM implemented a training programme for all employees. The training covered practical aspects such as prompt engineering, best practices for legal research and drafting with AI, and the importance of data security and confidentiality. The programme also emphasised the critical role of human oversight: every AI-generated output is subjected to thorough review by qualified lawyers before being incorporated into client work.
Mr. Chopra says the firm has implemented governance protocols to ensure the responsible and secure use of AI. These include mandatory human review: all AI-generated drafts, research, and summaries must be reviewed and verified by lawyers before being used in client work. The protocols also stress data security and confidentiality: only firm-approved and licensed AI platforms are used, and sensitive client data is never input into AI tools unless strict security standards, such as ISO 27001 and SOC 2 Type 2 certification, are met. The use of personal or unlicensed AI accounts for client work is strictly prohibited.
The protocols also mandate that all AI usage be logged and monitored to ensure compliance with internal policies, data privacy laws, and client-specific requirements. To guard against bias and inaccuracy, lawyers are trained to identify and mitigate potential biases or errors in AI outputs, including the risk of ‘hallucinated’ legal citations.