The launch of ChatGPT last November ignited enthusiasm across the legal industry. Generative AI promised to streamline tasks such as drafting contracts and analyzing case law, and many envisioned a transformative shift. But the potential comes with significant challenges, chief among them AI “hallucinations” (the production of incorrect or fabricated information) and the risk of inadvertent copyright infringement.
Tech giants from Google to Microsoft, a key backer of OpenAI, have quickly rolled out AI-driven chatbots, while niche start-ups such as Harvey and Robin AI have emerged with AI tools tailored to legal professionals. PitchBook predicts the legal software market will be worth $12.4bn this year, growing at roughly 5% annually. “We have seen all of these companies spring up,” said Kerry Westland, head of innovation and legal tech at Addleshaw Goddard. “It’s an exhilarating field that’s taken significant strides in a mere 10 months. We’re racing to keep pace, and we are deeply entrenched in the exploratory phase.”
In its push for tech-enabled efficiency, Addleshaw Goddard has assessed AI offerings from more than 70 companies and selected eight for in-house pilots, spanning both legal tech applications and other AI-driven tools. In the trials, lawyers use generative AI to sift through documents, identify clauses and translate legal jargon into plain English. The technology is not flawless, Westland points out: it can return inconsistent answers or verbose output. The industry’s bigger problem is the models’ tendency to present fabricated statements as fact. The stakes are real: in June, two attorneys and their law firm were penalized after a legal brief was found to cite cases invented by ChatGPT.
Data confidentiality adds another layer of complexity. Addleshaw Goddard allows its lawyers to use ChatGPT but bars them from entering confidential data; UK firm Travers Smith has gone further and barred the tool altogether. “There was a term in the API [application programming interface] that said data put into the system would be used to improve and develop the services,” said Shawn Curran, who leads legal technology at Travers Smith. Recognizing that risk, the firm built an alternative, YCNBot, on Microsoft and OpenAI’s enterprise services, and is testing it on mock contract reviews and simulated litigation disputes. Curran describes the approach as a move towards ‘safer’, rather than safe, use, stressing that the risks are still evolving.
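Restrictions like these are often enforced in practice by screening prompts before they leave the firm’s network. The following is a minimal illustrative sketch of such a guard; the patterns and the matter-number format are invented for illustration and do not reflect any firm’s actual policy or tooling.

```python
import re

# Hypothetical markers of confidential material. Real policies would be far
# more extensive (client names, deal codes, document classifications, etc.).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bprivileged\b", re.IGNORECASE),
    re.compile(r"\b[A-Z]{2}\d{6}\b"),  # assumed internal matter-number format
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block the prompt if any pattern matches."""
    reasons = [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]
    return (not reasons, reasons)

# A clean drafting question passes; a flagged one is held back for review.
allowed, reasons = screen_prompt("Summarise this CONFIDENTIAL share purchase agreement")
# allowed is False here: the word "confidential" was matched
```

In a real deployment, a blocked prompt would typically be routed to a human reviewer or to an internally hosted model rather than silently dropped.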
Sijmen Vrolijk, IT director at NautaDutilh, takes a broader view. Though optimistic that AI will reshape lawyers’ workflows, he played down fears of AI-driven job cuts: “Gen AI has spurred monumental buzz, now seemingly waning. Generative AI, in my view, is yet to reveal truly revolutionary capabilities.” Westland, meanwhile, pointed to a challenge peculiar to the sector: legally focused generative AI tools are simply hard to obtain, with firms stuck on waiting lists or vendors choosing to work with only a handful of partners.
Finally, there is the question of economics. “These tools are not cheap,” Westland adds, and they raise a broader issue. “Everyone talks about the time-based model in law. The pressing question now is: how do we appraise legal work’s worth? If AI can expedite a task from three weeks to three days, especially during time-sensitive deals, what’s the value proposition?”
In short, the arrival of generative AI in law brings real opportunities alongside intricate challenges, and makes the case for cautious optimism.