In a remarkable yet cautious move, the judicial system of England and Wales, steeped in a millennium of tradition, has recently granted judges the green light to use artificial intelligence (AI) in drafting legal opinions. This development, announced by the Courts and Tribunals Judiciary, marks a tentative foray into the future for a profession typically slow to adopt technological advancements.
AI in Legal Opinions: A Guarded Approval
The judiciary’s announcement, detailed last month, permits the use of AI in composing legal opinions but draws the line at its application in research or legal analysis. The restriction stems from concerns that AI can fabricate or distort information, which could produce biased or misleading outcomes.
Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales, emphasized the importance of judicial caution and responsibility in utilizing AI: “Judges do not need to shun the careful use of AI, but they must ensure that they protect confidence and take full personal responsibility for everything they produce.”
The Legal Community’s Stance on AI
The integration of AI in the legal sector is a topic of widespread discussion and concern. Ryan Abbott, a University of Surrey law professor and author, notes, “AI and the judiciary is something people are uniquely concerned about… AI may be slower disrupting judicial activity than it is in other areas and we’ll proceed more cautiously there.”
Global Perspectives and Guidance
The initiative by England and Wales is among the earliest efforts to establish guidelines for AI in the judicial system, though not the first. The European Commission for the Efficiency of Justice issued a charter five years ago addressing ethical principles for AI in courts. The U.S. federal court system and its state counterparts, however, have yet to establish a unified approach, with individual courts setting their own rules.
The Guidance: Acceptance with Reservations
The guidance represents a nuanced acceptance of AI, balancing technological progress with caution. Giulia Gentile, a lecturer at Essex Law School, critiques the lack of an accountability mechanism in the guidance, questioning its enforceability.
The guidance includes several warnings about AI’s limitations, particularly concerning chatbots such as ChatGPT. Judges are advised not to enter confidential information into public AI chatbots and to be mindful that the legal material available to AI systems is predominantly U.S.-centric.
AI as a Secondary Tool
Despite these limitations, the guidance suggests judges can use AI as a secondary tool for writing background material or summarizing well-known information. AI is encouraged for locating familiar material and for mundane tasks such as composing emails, but it is not recommended for independent legal analysis or reasoning.
Court of Appeal judge Colin Birss shared an instance where ChatGPT assisted him in crafting a paragraph in a ruling on a familiar legal topic. “I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph,” he said. “I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.”
Conclusion: A Cautious Embrace of AI in Law
The decision by the judiciary of England and Wales to allow AI in legal opinion writing, albeit with caution, signifies a progressive yet restrained approach to technology adoption in the legal sector. It sets a precedent for other judicial systems globally, highlighting the need for a careful balance between embracing modern technology and maintaining the integrity and reliability of legal processes. As AI continues to evolve, the legal community will likely continue grappling with how best to integrate these tools while ensuring accuracy, impartiality, and the protection of confidential information.