At a time when technology is continually reshaping the legal landscape, a number of top-tier American law firms, members of the Am Law 200 in industry parlance, have drawn the line at using generative AI systems like ChatGPT in client work. Those decisions are rooted in concerns about data security and the systems’ propensity to generate misleading outputs, often referred to as “hallucinations.”
However, a disdain for ChatGPT doesn’t equate to an aversion to AI. The legal profession knows well the efficiency that artificial intelligence brings to handling enormous volumes of information. Consequently, several firms are experimenting with legal-focused AI platforms such as Harvey and CoCounsel, which are designed to address the shortcomings of general-purpose tools.
Kate Orr, the global head of practice innovation at Orrick Herrington & Sutcliffe—one of the first adopters of Harvey—stated, “We are asking lawyers to get comfortable with [generative AI] in their day-to-day work so, as we see more functions in this tool in this space, we are not playing catch-up.” At Orrick, generative AI has been earmarked primarily for non-legal tasks, including email drafting, to familiarize team members with the technology and its capabilities.
Unfortunately, public sentiment toward generative AI in law practice has been soured by incidents involving data breaches and disciplinary actions against lawyers who submitted AI-drafted briefs containing fabricated citations. But Orr argues that these reactions stem from a fundamental misunderstanding of the technology, underscoring the importance of user education.
In contrast, BakerHostetler’s general counsel has barred attorneys from entering client information into any generative AI model, regardless of the data’s sensitivity. Meanwhile, the firm is working to stand up a secured AI model of its own through Microsoft’s Azure platform, according to Katherine Lowry, BakerHostetler’s chief information officer. This careful, security-first approach suggests the firm sees the technology’s potential despite current limitations and concerns.
Similarly, Michael Best & Friedrich went so far as to cut off ChatGPT access for all users of the firm’s network in July. “There have been plenty of examples where users went out and made a mistake,” Jason Schultz, the firm’s chief innovation and technology officer, said in an interview. “We don’t want data making its way into the public domain or training models.”
Other big-name law firms, including Ballard Spahr and Blank Rome, have adopted policies banning the use of generative AI in legal work until the tools have been vetted for both security and effectiveness, and have established task forces to develop guidelines for safe, strategic AI use.
Even with these precautions in place, however, AI in legal work carries inherent limitations. According to various firm leaders, AI-generated legal briefs serve at best as first drafts that require human review to verify the accuracy of citations. Human judgment is also still needed to identify material protected by attorney-client privilege.
Not all law firms are pulling away from generative AI. Allen & Overy, a global firm headquartered in London, has embraced the technology, partnering with Harvey, a platform built on the same OpenAI technology that underlies ChatGPT. A spokesperson for the firm described Harvey as a “starting point” for lawyers, helping them work more efficiently and better serve their clients.
While adoption of generative AI in the legal sector remains limited and exploratory, larger firms are leading the pack. A recent Thomson Reuters survey found that only 3% of law firm respondents said they were using generative AI or ChatGPT, while another 2% said they were actively planning to implement it. Notably, large firms were more likely to already be using the technology.
Regardless of firm size or stage of AI adoption, Michael Pastor, director of the James Tricarico Jr. Institute for the Business of Law at New York Law School, noted that the security and ethical issues raised by generative AI are neither new nor unique to it. Firms should apply their existing data-security programs, controls, and client-requirement policies when incorporating AI into their operations.
As the legal world navigates the challenges and potential of generative AI, the balance between technology adoption and risk management will remain central to its evolution. Whether through internal sandboxes or dedicated task forces, law firms are charting a path toward a future where AI and legal practice go hand in hand.