Generative AI, the Bar, and overcoming hallucinations

Generative artificial intelligence (AI) has become a key talking point across the world. Initially slow to adopt the tech, the legal sector has now opened its arms and embraced it. According to our new report, Lawyers cross into the new era of generative AI, use of this rapidly evolving tech in law has doubled in the last six months. As of January 2024, more than a quarter (26%) of respondents revealed they're using generative AI tools at least once a month, a noticeable rise from just 11% of respondents in July 2023.

The rise of generative AI has amplified many of its benefits, allowing even the less tech-savvy to take advantage. But its rise has also showcased the risks, of which barristers are well aware. In our report, only 10% of legal experts said they had no concerns about generative AI: 90% cited at least some risk, with 57% of respondents noting concerns around hallucinations and 55% reporting issues with trusting generative AI platforms.

AI has arrived at the Bar, but these concerns still limit widespread adoption. They're justified, but simple measures can help barristers limit, or even completely remove, this risk.

Below we discuss the ethical implications of hallucinations, explore why they're a valid concern, and look at the best ways for barristers to avoid them.

Why hallucinations are the primary concern

Our report, Lawyers cross into the new era of generative AI, unveils why hallucinations are a core concern at the Bar. In the simplest terms, the risk is reliance upon erroneous information, and the degree of risk depends on the application. For a humanities student, a hallucination results in an incorrect sentence and maybe a bad grade. For a marketing assistant, it might lead to an incorrect social media post, or false stats in a presentation that cast doubt on their competency. But for barristers, the stakes are far higher: a hallucination could cause significant harm.

Barristers trade on their reputation, which is upheld by consistent, high-quality work: providing the best possible answers, balancing the strengths and weaknesses of any given case, and delivering accuracy in advocacy.

Hallucinations pose a risk to all of these, and the real-world impacts are serious. Hallucinations could lead to false legal advice, resulting in serious, incorrect legal consequences. They might cause the loss of a case due to a flawed strategy, compromise confidentiality, or damage testimony. In short, hallucinations could undermine a barrister's reputation, cause the loss of clients and revenue, and lead to wider repercussions in legal practice.

How to avoid hallucinations 

It's clear the Bar needs to overcome hallucinations. But hallucinations depend on the AI platform, not the tech itself. It's been said before: generative AI systems are only as good as the data upon which they're trained. Poor-quality AI platforms hallucinate more frequently because of data limitations, model complexity and overfitting, or algorithmic issues. Using opaque AI platforms that reveal little about their datasets, are trained on insufficient inputs, lack human oversight, and offer little awareness of real-world impacts will almost certainly result in inaccurate, unethical, and low-quality outputs.

Poor-quality platforms are often free-to-use, hence the absence of human oversight and the variable standard of their datasets. That's not to say people can't still take advantage of free-to-use platforms: with a few precautionary measures, ethical risks can be minimised.

In our report, Joe Cohen says that lawyers can avoid hallucinations on free-to-use AI tools by double-checking all outputs and verifying them against other sources. But Cohen warns: "There are certain use cases that are not recommended on free-to-access tools. For example, asking AI to cite cases for you."
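To make Cohen's advice concrete, here is a minimal sketch, in Python, of what verifying outputs against other sources could look like: each case citation returned by a free-to-use AI tool is checked against a trusted reference list before it is relied upon. The verify_citations helper and the TRUSTED_CITATIONS set are illustrative assumptions; in practice the lookup would run against a real legal research database.

```python
# Sketch: flag AI-supplied case citations that cannot be found in a trusted source.
# TRUSTED_CITATIONS stands in for a genuine legal citation database (assumption).

TRUSTED_CITATIONS = {
    "Donoghue v Stevenson [1932] AC 562",
    "Caparo Industries plc v Dickman [1990] 2 AC 605",
}

def verify_citations(ai_citations: list[str]) -> dict[str, bool]:
    """Return each AI-supplied citation with a flag: known to the trusted source or not."""
    return {citation: citation in TRUSTED_CITATIONS for citation in ai_citations}

if __name__ == "__main__":
    results = verify_citations([
        "Donoghue v Stevenson [1932] AC 562",    # a real, verifiable citation
        "Smith v Fictional Ltd [2021] UKSC 99",  # a hallucinated citation would fail here
    ])
    for citation, known in results.items():
        print(("VERIFIED: " if known else "CHECK MANUALLY: ") + citation)
```

Anything flagged "CHECK MANUALLY" is exactly the kind of output Cohen suggests should never be taken on trust from a free-to-access tool.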

Dr. Gerrit Beckhaus echoes that analysis, suggesting that, while hallucinations are intrinsic to generative large language models (LLMs), barristers can reduce the risk by providing AI systems with more information and verifying the request. "Prompting and [providing] context can reduce the risk, alongside human guidance that can help to validate, assess, and contextualise the results produced with the help of generative AI." So lawyers who use free-to-use AI systems will need to provide additional prompts, improve the context of inputs, double-check outputs, and still practise caution.
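As a rough illustration of Beckhaus's point, the sketch below (again in Python, with an entirely hypothetical helper) builds a "grounded" prompt: it supplies the model with vetted source material and explicitly instructs it to say when the sources fall short rather than guess. Nothing here reflects any particular platform's API; it simply shows how prompting and context constrain the model.

```python
# Sketch: wrap a question in vetted context so the model answers from supplied
# sources only. build_grounded_prompt is a hypothetical helper, not a real API.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Combine vetted sources and a question into one context-rich prompt."""
    context = "\n\n".join(
        f"Source {i + 1}:\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so rather than guessing.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt(
        "What is the limitation period for this claim?",
        [
            "Extract from the relevant statute (vetted by the instructing team)...",
            "Extract from counsel's earlier advice on the same matter...",
        ],
    )
    print(prompt)  # This string would then be sent to the chosen AI platform.
```

The human-guidance step Beckhaus describes still applies: whatever the model returns against this prompt should be validated and contextualised by a lawyer before it is used.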

The issue is that this might prove time-consuming, and the risk of hallucinations still remains. A more sensible route is to find trusted AI platforms. Barristers should use platforms that safeguard against unreliable inputs, boast human oversight, and practise real-world awareness. Using platforms that rely on trusted and known datasets is key, as Rachita Maker suggests in our report. Using trusted datasets, Maker explains, drastically improves the reliability of outputs.

These are the AI systems in which lawyers can place their trust, a point broadly echoed in our report: 65% of respondents felt confident using tools grounded in legal content sources, such as Lexis+ AI. One of them was Samuel Pitchford, Solicitor for Pembrokeshire County Council: "We avoid using generative AI for research purposes at present to avoid hallucinations...But the development of 'closed' generative AI tools, trained exclusively on legal source material and available only to subscribers, should be less prone to hallucinations and would allow our team to use generative AI for research."

Hallucinations are a huge risk at the Bar, but simply by practising awareness, or by opting for more trusted AI platforms like Lexis+ AI, barristers and other legal experts can drastically minimise that risk.

Download our report, Lawyers cross into the new era of generative AI, and explore our insights today!


About the author:

Laura is the Social Media and Content Marketing Manager at LexisNexis UK. She has a decade of experience creating engaging and informative content for a variety of industries, including higher education and technology.