We keep worrying about what AI might do. But maybe the better question is: what are we doing with AI?
Not long ago, a friend of mine, an associate at a small but busy law firm, confided in me over a catch-up call. “I’ve been using ChatGPT to draft early versions of contracts and emails,” she said, tensely awaiting my response. “Just to get started faster. I haven’t told anyone at the firm. I don’t even know if I’m supposed to tell anyone.”
I told her she wasn’t alone. Plenty of associates have started using ChatGPT or other generative AI tools for first drafts, and many law firms have even made paid generative AI platforms available to staff and encouraged their use. But I also reminded her that with tools such as ChatGPT, lawyers still owe certain duties to their clients, including maintaining confidentiality and ensuring the accuracy of their advice.
What stuck with me from this conversation was not that she’d been using ChatGPT, but that some lawyers still think they’re among a select few using generative AI to work more efficiently, and that a sense of guilt or secrecy still surrounds its use, as if it were lazy or frowned upon. Why is it, then, that despite the global push for generative AI adoption, lawyers remain hesitant to use the tools available to them?
In reality, lawyers across the profession are experimenting with generative AI, with some already taking full advantage of it. They’re drafting motions, summarizing depositions, even brainstorming arguments late at night with the help of a chatbot. And why wouldn’t they? These tools are powerful, fast, and, let’s admit it, kind of magical.
But here’s the part that should not be overlooked: much of this use is happening without rules, training, or oversight. Not because lawyers are lazy or reckless, but because we haven’t caught up. The tech is sprinting ahead, while professional frameworks are still tying their shoelaces.
If doctors need training to use robotic surgical tools, and pilots need licenses to operate new aircraft systems, then why should legal professionals, whose decisions can affect lives, liberty, and livelihood, be any different?
We’ve spent the last year focused on how to regulate the AI itself. But maybe, in focusing so hard on one target, we’re missing another. Maybe the problem isn’t the tool. Maybe it’s the people using it.
What if the next phase of AI oversight didn’t start with the software, but with us?
Think about how we’ve handled every major technology shift in law. We didn’t regulate email. We taught lawyers how to use it professionally. We didn’t ban cloud storage. We introduced ethics opinions and best practices for client data.
Yet here we are with generative AI, arguably the most powerful legal tool since Westlaw, and we’ve got… silence. A few judicial standing orders. Some bar association think pieces. A quiet chorus of in-house memos that may or may not be followed.
And in the meantime, real risks are starting to surface. We’ve seen filings with hallucinated case law. Contracts based on misunderstood prompts. Confidential data fed into public models without anyone stopping to ask if it was okay.
Not because lawyers are acting in bad faith. But because no one has set the ground rules.
So here’s a thought: instead of panicking about what AI might do, maybe it’s time to focus on what lawyers can and should do with it, and, more importantly, what they should know before using it.
Imagine a world where, before an attorney can use generative AI in client matters, they complete a short certification. Something lightweight, but meaningful. A walkthrough of how these models work. Their limitations. Their blind spots. Their tendency to sound confident while being completely wrong.
Picture a badge on your firm bio: “AI-Competent Counsel.” A signal that you’ve been trained not just to use these tools, but to use them responsibly. That you understand when to trust them and when to double check. That you know the difference between efficiency and ethical risk.
This isn’t about gatekeeping or slowing innovation. In fact, it’s the opposite. Clear standards free lawyers to innovate without fear. They tell clients, courts, and colleagues: we’re not guessing here. We’re doing this the right way.
Because let’s face it: transparency is coming, whether we like it or not. Clients will start asking how much of their work was AI-assisted. Courts will demand disclosures. Opposing counsel will raise questions. And when that day comes, “I didn’t know” won’t be a strong defense.
Of course, there’s nuance. Not every use of AI needs a permission slip. No one’s suggesting a ban on using ChatGPT to rephrase an email. But if you’re drafting legal arguments? Reviewing evidence? Making strategic decisions on behalf of a client? The bar needs to be higher. Literally and figuratively.
In the end, this isn’t a story about fear. It’s about trust.
Law has always been built on it. Clients trust us with their secrets. Judges trust us to be truthful. Opposing counsel trusts us to follow the rules, even in adversarial settings.
When AI becomes part of our toolkit, as it will, it deserves the same level of care. Not because the machines are dangerous, but because we have a responsibility to use them wisely.