
By Electra Japonas, Law Insider.
Legal tech is growing up. AI tools can now do everything from automatic redlining and risk scoring to managing negotiation workflows and drafting entire contracts. The progress is real — and fast.
But the conversation is still centered on the tools, not the lawyers. Not the skills we need to build to extract real value. Not how our role as contract lawyers is changing. And definitely not how we need to think differently to stay relevant in what comes next.
That’s what I want to talk about.
Because while the spotlight has been firmly on the technology, the more profound shift is happening in the role of the lawyer. What we do, and how we do it, is being quietly, but fundamentally, redefined. And in my view, that’s the biggest part of this transformation — maybe even the biggest transformation we’re going to see in our careers — and it’s just not getting the attention it deserves.
In the new world of AI, lawyers are no longer just drafters, reviewers, or negotiators in the traditional sense. Those roles still matter, but they’re changing. Increasingly, we’re going to need to design systems, not just documents. To translate legal judgment into structured logic that technology can execute, consistently and at scale.
That means we’re becoming something more. We’re evolving from executors of legal work to designers of legal systems — architects of how that work gets done at scale. In that sense, this shift isn’t diminishing our role. It’s elevating it.
The Common Narrative: AI Will Replace Lawyers. The Reality: It’s Forcing Us to Become Better Ones
There’s a lot of talk right now about AI replacing lawyers. That contract review will be fully automated. That junior lawyers will be redundant. That prompts will replace judgment.
But I don’t see that happening — not in the way it’s being framed. What I see is a profession being pushed to level up.
AI isn’t removing the need for lawyers. It’s removing the luxury of working from unexamined instincts. It’s forcing us to do something we haven’t always had to do: explain ourselves.
Take liability caps. Many lawyers instinctively push for a 12-month fee cap simply because that’s what they’ve always done. But when you’re building a playbook for an AI tool or even training your team, you need to explain why. Why 12 months? Why not six? What makes this reasonable for this deal, this client, this industry?
Or consider indemnities. Maybe you’ve always excluded consequential loss. But now you have to decide — do you apply that across all jurisdictions? What about in contracts for regulated services or high-risk data environments? What’s your fallback position? What’s non-negotiable?
These aren’t just academic exercises. They’re the building blocks of prompt engineering. And prompt engineering is where your legal logic becomes operational.
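To make that concrete, here is one way the liability-cap reasoning above might be written down as a structured rule. The format is purely illustrative (the field names, figures, and fallbacks are hypothetical, not any particular tool’s schema); the point is that the position, the rationale, and the fallbacks all become explicit.

```python
# Illustrative only: one way to capture the liability-cap reasoning as a
# structured rule. The field names, figures, and fallbacks below are
# hypothetical examples, not any particular tool's schema.

liability_cap_rule = {
    "issue": "Aggregate liability cap",
    "preferred_position": "Cap at 12 months' fees",
    "rationale": (
        "Twelve months tracks the annual value of the deal, so exposure "
        "stays proportionate to what is actually earned under the contract."
    ),
    "fallbacks": [
        "18 months' fees for strategic accounts",
        "24 months' fees only with senior sign-off",
    ],
    "non_negotiable": "No uncapped liability for ordinary commercial risk",
    "carve_outs": ["fraud", "death or personal injury", "confidentiality breaches"],
}
```

Whether this lives in a dictionary, a spreadsheet, or a tool’s rule editor matters far less than the discipline it imposes: the why sits next to the position, and the fallback is decided before the negotiation starts.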
We’re no longer just applying judgment — we’re being asked to structure it, document it, and scale it. That’s hard. It forces us to confront habits we’ve relied on for years. But it also sharpens us.
Because once you turn instinct into structured reasoning, once you clarify your positions, test your assumptions, and build prompts that reflect how you actually think — you become more consistent, more transferable, more valuable.
You become a better lawyer. Not a faster one. Not a cheaper one. A more deliberate, more scalable, and more intentional one.
And that’s the shift I find most exciting. Not because it’s powered by AI — but because it’s powered by us finally thinking more critically about the craft we’ve practiced for so long.
But this evolution isn’t happening in a vacuum.
It’s happening through the very tools that some see as a threat to our role. These tools aren’t replacing legal thinking — they’re demanding more of it. And they’re only as powerful as the logic we give them.
Which brings us to the heart of it: how these tools actually work, and why prompting matters so much.
So, What Is Prompt Engineering?
Prompt engineering is the process of translating legal judgment into structured instructions a machine can follow. It’s not coding. It’s not deeply technical. It’s simply a more intentional way of doing what lawyers already do: assess risk, apply judgment, and think in frameworks.
The shift is this: instead of drafting a clause for a counterparty to interpret, you’re drafting a rule for a system to execute.
Think about the last time you reviewed a contract and flagged a one-sided indemnity. You didn’t just react on instinct — you had internal reasoning, past experience, a mental checklist, and a fallback ready. Prompt engineering asks you to turn that mental process into something the AI can apply to every contract that comes after.
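As a rough sketch, that checklist for a one-sided indemnity might look something like this once written down. The wording is invented for illustration (it assumes a tool that accepts free-text review instructions, and it is not Law Insider’s prompt format).

```python
# A minimal sketch, assuming a tool that accepts free-text review
# instructions. The wording is invented for illustration; it is not
# Law Insider's prompt format.

indemnity_prompt = """
Review every indemnity clause in this contract and flag it if any of the
following is true:
1. Only one party gives an indemnity (one-sided).
2. It extends to consequential or indirect losses.
3. It sits outside the agreement's overall liability cap.

For each flagged clause, state which test it failed, quote the relevant
language, and suggest a mutual indemnity limited to direct losses as the
fallback redline.
"""
```

Nothing in that prompt is technical. It is the checklist you already carry in your head, stated precisely enough for a system to apply it the same way every time.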
In other words, you’re not just reviewing contracts anymore. You’re designing how contract review gets done.
And once you do that, the benefits start to compound.
You stop re-explaining your reasoning. You reduce inconsistencies across teams and deals. You accelerate onboarding. You create systems that deliver not just quality legal thinking, but repeatable legal thinking. Prompt engineering becomes a force multiplier. It takes your expertise and turns it into infrastructure. It’s the difference between solving a problem once and building a process that solves it every time.
And that, in my view, is one of the most exciting shifts we’re seeing — not just in legal tech, but in the evolution of legal practice itself.
How These Tools Actually Work and Why Prompting Matters
Until recently, most contract playbooks lived in our heads — or maybe in a Word doc or spreadsheet if we were being diligent. You knew what to look for in a clause, where to push back, and what fell within your organization’s or client’s risk tolerance. But that logic wasn’t always written down, let alone applied consistently across teams or contracts.
That’s now changing.
With AI contract review tools, that logic becomes structured, visible, and scalable. You start by creating a set of rules (commonly referred to as an AI playbook) that reflects your approach to contract review. These rules might cover acceptable liability caps, missing clauses, or fallback language on key terms.
Once the rules are defined, the AI applies them at scale. When you open a contract, the tool reads it line by line, compares it against your playbook, and flags anything that falls outside your standards. Where issues are clear-cut, the AI suggests redlines. Where the issue requires more judgment, it flags the clause for your review.
Let’s say your playbook includes a rule that liability for data breaches must be capped. If the clause is uncapped or missing, the AI flags it. If a governing law clause is absent, it inserts your standard. If a termination clause is one-sided, it suggests a commercially balanced alternative.
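Here is a hypothetical sketch of how those three rules might look once captured in a playbook. Every tool has its own rule format; the structure below is invented to show the shape, not to document a real product.

```python
# Hypothetical playbook entries mirroring the three examples above. The
# structure is invented to show the shape of a playbook rule, not to
# document any real tool's format.

playbook = [
    {
        "check": "Liability for data breaches is capped",
        "if_breached": "Flag the clause and propose a cap tied to fees paid",
    },
    {
        "check": "A governing law clause is present",
        "if_breached": "Insert the organization's standard governing law wording",
    },
    {
        "check": "Termination rights are mutual",
        "if_breached": "Suggest a redline giving both parties equal rights",
    },
]
```

Each entry pairs a standard with an action, which is exactly what the tool needs to either redline confidently or hand the judgment call back to you.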
This isn’t magic — it’s structure. But it only works if your prompts are clear, deliberate, and grounded in sound legal reasoning. The AI is only as good as the thinking behind it. And that thinking is yours.
If you’re ready to put this into practice, we’ve built our AI Contract Review tool directly inside Microsoft Word — making it easier than ever to structure your thinking, apply your playbook, and deliver consistent, scalable contract review without leaving the platform you already use every day.
Where This Is All Going
Prompt engineering is not just a new task on a lawyer’s to-do list. It’s a signal of a much deeper shift in the profession. One that changes not just how we work, but how we think about the work itself.
Throughout this piece, I’ve tried to surface what’s often overlooked in legal tech conversations: the evolution of the lawyer, not just the innovation of the tool.
We’ve spent the last few years getting used to legal AI. We’ve learned that it can spot patterns, surface inconsistencies, and automate basic tasks. But the real transformation isn’t in what AI can do. It’s in what it demands of us to use it well.
It demands structure. It demands clarity. It demands that we explain the “why” behind our legal judgment, something many of us have never had to do with this level of rigor before.
For a long time, legal work was done on instinct. You learned on the job, absorbed your manager’s preferences, and developed your own approach over time. You reviewed a contract and “just knew” what was off. That instinct is valuable, but it’s also invisible. And when it’s invisible, it can’t be scaled, shared, or embedded into a system.
Prompt engineering — and AI in contracting more broadly — forces us to make that instinct explicit. To document the reasoning behind the redline. To define what good looks like, not just for ourselves, but for our tools and our teams. In doing so, it doesn’t replace legal work. It elevates it.
When you build a playbook and write prompts that reflect your judgment, you’re no longer applying legal skill in isolation. You’re creating a framework that applies that skill consistently, across every contract and every matter. You’re no longer relying on memory, availability, or bandwidth — you’re building infrastructure for legal thinking.
And that changes what it means to be a lawyer.
It moves us from being reactive to being proactive, from being task-driven to being system-driven, from being interpreters of contracts to being architects of contract review itself. That’s a seismic shift, not just in workflow, but in mindset.
So where is this all going?
We’re heading into a future where legal judgment will still be central, but it won’t be enough to simply “know what’s right”. You’ll need to be able to explain it. Apply it. Scale it. Systematize it.
And that’s not a loss. That’s a gain.
We are not becoming less human as lawyers. We are becoming more structured, more intentional, more disciplined — and ultimately, more valuable.
AI isn’t the end of lawyering. It’s the beginning of a new era. One where clarity is currency, structure is strategy, and our ability to design legal systems will be just as important as our ability to draft great clauses.
This is where we’re headed. And if you ask me, it’s one of the most exciting evolutions our profession has ever seen.
You can find more about Law Insider and its AI contract review tool here.
—
About the author: Electra Japonas is the Chief Legal Officer at Law Insider. Previously, she co-founded the pioneering contract standards project oneNDA and The Law Boutique group.
[ This is a sponsored thought leadership article for Artificial Lawyer by Law Insider. ]