Lawyers Are Rapidly Embracing AI: Here’s How to Avoid an Ethics Disaster

January 23, 2024 - By Anil Balan

Imagine a world where legal research is conducted by lightning-fast algorithms, mountains of contracts are reviewed in minutes and legal briefs are drafted with the eloquence of Shakespeare. This is the future promised by artificial intelligence (“AI”) in legal practice. Indeed, AI tools are transforming this landscape already, venturing from science fiction into the everyday realities of lawyers and other legal professionals. 

However, this advancement raises ethical and regulatory concerns that threaten the very foundation of the justice system. At a time when the Post Office Horizon scandal (in which more than 900 sub-postmasters in the United Kingdom were prosecuted after faulty software wrongly made it look like money was missing from their branches) has shown how a trusted institution can quickly wreck its reputation after introducing an opaque algorithmic system, it is important to anticipate potential pitfalls and address them in advance. 

We have already seen generative AI used at the highest levels of the profession. Lord Justice Birss, deputy head of civil justice in England and Wales, disclosed a few months ago that he had used ChatGPT to summarize an area of law, then incorporated it into his judgment. This marked the first instance of a British judge openly using an AI chatbot – and it is just the tip of the iceberg. To date, the fastest adopters of generative AI in the legal profession have been lawyers working in-house for large companies, with 17 percent using the technology, according to legal analytics giant LexisNexis. Law firms are not far behind, with around 12 percent to 13 percent using the technology. In-house teams may be ahead because they are more motivated to save costs.

But large law firms appear poised to catch up to in-house legal teams, with around 64 percent of large firms actively exploring this technology, compared to 47 percent of in-house teams and around 33 percent of smaller legal practices. In the future, large law firms may specialize in specific AI tools or develop in-house expertise, offering these services as a competitive advantage.

The vast majority of lawyers think this technology will have a discernible effect, according to a 2023 LexisNexis survey of over 1,000 UK legal professionals. Of these, 38 percent said it would be “significant,” while another 11 percent said “transformative.” Most respondents (67 percent) thought there would be a mixture of positive and negative effects, however, compared to only 14 percent who were wholly positive and 8 percent who were more negative.

AI in action

Here are some examples of what is arriving …

> Legal research: AI-powered research platforms like Westlaw Edge and Lex Machina can now scan vast legal databases, identifying relevant cases and statutes with pinpoint accuracy. 

> Document review: tools like Kira and eDiscovery can now sift through vast volumes of documents, highlighting key clauses, extracting vital information and identifying inconsistencies.

> Case prediction: companies like Solomonic and LegalSifter are developing AI models that can analyze past court decisions to predict the likelihood of success in certain types of cases. Still in their infancy, these tools offer valuable insights for strategic planning and settlement negotiations.

> Bail and sentencing: tools such as Equivant's COMPAS are now employing AI to help practitioners with these decisions.

These advancements hold immense potential for improving efficiency, reducing costs, and democratizing access to legal services, but what about the challenges?

Ethical & regulatory concerns

AI algorithms are trained on datasets, which can reflect and amplify societal biases. For example, if a city has a history of over-policing certain neighborhoods, an algorithm may recommend higher bail amounts for defendants from those areas, regardless of the risk of flight or re-offending. Similar biases could affect the use of AI to hire lawyers within firms. There is also the potential for skewed results from the tools for legal research, document review and case prediction. Equally, it can be difficult to understand how an AI arrived at a particular conclusion. This could undermine trust in lawyers and raise concerns about accountability. At the same time, over-reliance on AI tools might undermine lawyers’ own professional judgment and critical thinking skills. 

Without proper regulations and oversight, there is also a risk of misuse and manipulation of these tools, jeopardizing the fundamental principles of justice. In trials, for example, skewed training data may disadvantage trial participants based on factors unrelated to the case. 

The way forward

Here are a few of the ways in which these issues can, and should, be addressed …

Bias: We can mitigate bias by training AI models on datasets that represent the diversity of society, including race, gender, socioeconomic status and geographical location. Frequent and systematic audits of AI algorithms and models should also be conducted to detect biases. AI developers like OpenAI are already taking such steps, but it is very much a work in progress and the results need to be monitored carefully.
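
To make the idea of an audit concrete, here is a minimal sketch of one common check, the demographic parity gap: comparing the rate of favourable decisions a tool makes across groups. The data, group labels and function names below are purely illustrative, not drawn from any real legal-AI system.

```python
# Minimal bias-audit sketch: compare a tool's rate of favourable
# decisions (1 = favourable) across groups. Illustrative data only.

def selection_rates(decisions, groups):
    """Return the rate of favourable decisions for each group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in favourable-decision rate between groups.
    A gap near zero suggests similar treatment on this one metric."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Ten hypothetical decisions, five per group
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))        # per-group rates
print(demographic_parity_gap(decisions, groups))  # ≈ 0.2 here
```

A real audit would run checks like this repeatedly, across several fairness metrics and over time, since a single snapshot on one metric can easily miss a problem.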

Transparency: Developers, such as IBM, are devising a class of techniques and technologies known as explainable AI (“XAI”) tools to demystify the decision-making processes of AI algorithms. These need to be used to develop transparency reports for individual tools. Full transparency on every neural connection may be unrealistic, but things like data sources and the AI’s general functionalities need to be visible.
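
One widely used XAI technique is permutation importance: shuffle a single input feature and measure how much the model's accuracy drops, revealing which inputs actually drive its decisions. The toy "model" below is a stand-in scoring rule invented for illustration, not any real legal-AI product.

```python
# Permutation importance sketch: a feature the model ignores will show
# zero accuracy drop when shuffled. All data here is illustrative.
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    shuffled_col = [row[feature_idx] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled_col)]
    return baseline - accuracy(model, X_perm, y)

# Toy model: predicts 1 whenever the first feature exceeds a threshold,
# and ignores the second feature entirely.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, 0))  # feature 0 drives predictions
print(permutation_importance(model, X, y, 1))  # feature 1 is ignored: 0.0
```

Transparency reports built on techniques like this cannot expose every internal weight, but they can tell a practitioner which inputs a tool relies on, which is often what accountability requires.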

Regulations and oversight: Clear legal guidelines are essential. This should include prohibiting AI tools trained on biased data, mandating transparency and traceability of data sources and algorithms, and establishing independent oversight bodies to audit and assess AI tools. Ethics committees could provide additional oversight for the legal profession. These could be entirely independent, though they would be better established and overseen by a body like the Solicitors Regulation Authority.

In short, the rise of AI in legal practice is inevitable. Ultimately, the goal is not to replace lawyers with robots but to empower legal professionals so that they can focus more on the human aspects of law: empathy, advocacy and pursuing justice. It is time to ensure this transformative technology serves as a force for good, upholding the pillars of justice and fairness in the digital age.


Anil Balan is a Senior Lecturer in Professional Legal Education at King’s College London. (This article was initially published by The Conversation.)
