Generative AI - Opportunity, threat or both?

June’s session for senior in-house counsel – hosted by LexisNexis in collaboration with Radius Law and Flex Legal – explored generative artificial intelligence for in-house teams.

Alison Rees-Blanchard, TMT Practice Support Lawyer at LexisNexis UK, took us on a whistlestop tour of generative AI, the key legal issues it raises and what in-house lawyers should be considering now.

This was followed by a panel discussion with James Harper, Head of Legal, Global Nexis Solutions at LexisNexis, and Harry Borovick, General Counsel of Luminance, chaired by Shanthini Satyendra, Managing Legal Counsel, Digital Transformation and Technology at Santander.


Generative AI

Alison began her tour of the prominent AI issues by explaining some key terms in this area.

AI – essentially, the ability of a machine to exhibit intelligent behaviour, e.g. if a machine engages in conversation without being detected as a machine, it has demonstrated human-like intelligence and would fall within the definition of AI.

Machine learning – a subset of AI in which a system learns and operates from its input data, rather than as a result of explicit programming.

Generative AI (GAI) – the umbrella term for any AI capable of generating or creating new and original content, such as text, music or code. Put simply, it is a category of AI algorithms capable of producing novel outputs based on their training data.

Large language model (LLM) generative AI – a type of generative AI that creates language or text output. Examples include chatbots such as ChatGPT and translation tools.

For more on AI and key terminology, see the Practice Notes on Lexis+ Practical Guidance.


Large language models and ChatGPT

Large language models, put simply, are AI models that have been trained on very large volumes of data using a large number of parameters. They predict the next token, or word, in a sentence based on the surrounding context within that sentence. They are trained by removing text from an input and requiring the model to predict the missing text based on what it has learned from its training data.
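
To make that prediction mechanism concrete, here is a minimal, illustrative Python sketch. Real LLMs use neural networks with billions of parameters; this toy version simply counts which word follows which in a small corpus, purely to show the idea of predicting the next token from its context. The sample corpus and all names are invented for illustration.

```python
# Toy illustration of next-token prediction (not how a real LLM is built:
# real models use neural networks trained on vast corpora).
from collections import Counter, defaultdict

# A tiny invented training corpus
corpus = (
    "the model predicts the next word "
    "the model learns from training data "
    "the data trains the model"
).split()

# Count how often each word follows each preceding word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word given the previous word."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'model' (the most frequent continuation)
```

An LLM does something analogous at vastly greater scale, scoring every possible next token over a much longer context rather than simply picking the single most frequent follower of one word.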

ChatGPT is a publicly accessible large language model: a natural language processing tool in the form of an AI chatbot that generates conversational text.

For more on ChatGPT, see the News Analyses, including ChatGPT—user beware.

Why all the fuss?

Alison then highlighted some key risks that are novel to generative AI:

  • Greater autonomy – generative AI, by its nature, is capable of acting with greater autonomy than other forms of AI. This raises questions about authorship and about who owns the content it generates
  • Greater complexity – generative AI is more complex, making it more difficult to explain why it has made a decision or arrived at a certain output
  • Generic application – in many cases, generative AI has been designed for generic application across numerous use cases. This makes it difficult for developers to design the technology with specific risks in mind, so the developer, implementer and user should each be expected to take their measure of responsibility for risks arising from its use


Key issues raised by generative artificial intelligence

Alison went on to outline key issues and questions to consider regarding generative AI, its development and its use.

Personal data – do developers have sufficient rights to include personal data in the data used to train a generative AI model, i.e. does the lawful basis on which the data was originally collected and processed extend to its use as training data? This is one of the reasons why ChatGPT was recently suspended in Italy.

For more information on the Italian ChatGPT ban, see the related News and News Analysis coverage.

For more on AI and data protection, see the Practice Note on Lexis+ Practical Guidance.

Appropriate use – is it appropriate to use AI in the manner in which it is being deployed? If it is making a decision about a person which will have a significant impact on them, is that an appropriate use?

Bias – are there biases in the training data which could result in discriminatory outputs?

Explanation of decisions – can a decision be explained, and can we understand why an AI model came to a certain decision?

For more on AI and explainability, see the relevant Practice Note.

Transparency – do we have sufficient transparency over how the model was trained?

Intellectual property rights – where proprietary rights exist in training data, do developers have the right to use that data for training purposes? Does the original owner have any rights in the output as a derivative work? In the UK, computer-generated works are capable of copyright protection, which is afforded to the person by whom the arrangements necessary for the creation of the work are undertaken. For works created by generative AI, is this person the model’s developer or the person who provided the instruction or input prompt?

For more on AI and intellectual property, see the Practice Notes on Lexis+ Practical Guidance and the related News Analyses.

Confidential information – companies are exercising caution with regard to inputting confidential information or trade secrets as training data or in an instructional prompt. Doing so would most likely destroy the information’s confidential nature and could place it in the public domain.

Regulation – regulators are considering how to implement guardrails against risks presented by generative AI.

For more on AI and regulation in the UK, see the Practice Note on Lexis+ Practical Guidance and the related News Analysis.

For the EU position, see the Practice Note on Lexis+ Practical Guidance and the related News Analyses.

For the US position, see the relevant News Analysis.

For more on key legal issues regarding AI, see the relevant Practice Note.


What should lawyers be thinking about now?

Alison highlighted that in-house lawyers can start putting in place responsible AI principles now, focusing on the following key areas and the issues that arise around each:

  • data management – good data management and governance is key
  • fairness – avoiding the creation or reinforcement of bias and discrimination
  • transparency and explainability – knowing how the model is trained and the ability to explain how it has reached a certain output
  • human oversight and accountability – ensuring appropriate and correct application of AI, with sufficient human oversight so that outputs and data are accurate and make sense
  • privacy and cybersecurity – ensuring the business is still able to maintain appropriate levels of security
  • putting in place an AI governance framework – preparedness is crucial, and any AI framework is likely to involve similar stakeholders to GDPR compliance, since generative AI is data driven. As such, following ICO guidance on AI will be important for businesses
  • understanding – Alison highlighted the need for lawyers to understand the technology, as this will help them understand the relevance of terms and clauses in a contract


Panel discussion

Moving on to the panel discussion, Shanthini Satyendra, Managing Legal Counsel, Digital Transformation and Technology at Santander, first highlighted three key functional features that differentiate AI from other technology in-house lawyers may be used to dealing with, before posing questions to James Harper, Head of Legal, Global Nexis Solutions at LexisNexis, and Harry Borovick, General Counsel of Luminance.

Autonomous nature – the autonomous nature of AI is consistently highlighted in relevant legislation. The fact that it can act without being explicitly programmed can lead to issues such as a lack of oversight, and distinguishes it from technology that is merely automated.

Black box nature – AI learns patterns from data, without the need for explicit programming, in order to create an output. As Alison highlighted, this can create issues around transparency and around understanding how a certain output was reached.

Scale effect – AI can produce large amounts of work which would otherwise take much longer to produce. This potential for scale can, however, raise issues.

Shanthini also advised against making discussion of this technology too technical: keeping the issues surrounding AI accessible is key, so that people from the top down understand what questions need to be asked.


What do in-house lawyers need to consider in relation to the business using AI?

James emphasised the importance of context here, suggesting the question should be considered from two angles: whether you are thinking about developing a product that includes generative AI, and whether you are thinking about using a generative AI tool in day-to-day work.

Developing a product that uses generative artificial intelligence – product teams should consider the following three-step test when developing a product that includes an element of generative AI:

  1. What is the model this is being based on? – issues to consider at this stage include what the original model is, what data was in it, whether that data was public or private, and your confidence in the original content of that model.
  2. How do you train it to carry out the desired function? – issues to consider include what content set will be applied to train the product to carry out this function, whether you own that content or whether it is licensed, whether you have the required rights if it is licensed, whether the content is public, and whether you could still risk infringing a third party’s intellectual property.
  3. How does the tool evolve? – issues to consider include whether there could be any challenges in potentially using a third party’s intellectual property to help evolve your tool.

Using generative AI tools in day-to-day work – if using generative AI in your day-to-day work, proceed with caution, particularly in the role of a lawyer. James informed attendees of a recent case in the US in which a lawyer used ChatGPT for research; the tool produced hallucinations, resulting in the lawyers citing false precedents and leading to sanctions by the US courts.

Harry broadly agreed, acknowledging that relying on unverified information is irresponsible legal practice, but highlighted the tendency of lawyers to overcomplicate the use of technology. He emphasised that lawyers must not be negligent when researching or carrying out tasks, whether or not AI is used to do so. Responsible use of AI is key.

For more on the use of generative AI, see the Precedent on Lexis+ and the related News Analyses.

Harry also emphasised the need to understand why product-specific terms are important when purchasing an AI tool. When purchasing an AI product with potential personal data, intellectual property and confidentiality implications, clauses should be specific to those risks, to what you expect from the tool and to your business’s needs. Harry advised that, as a minimum, a side letter or addendum to the purchasing agreement should be used to address these points. Data protection impact assessments are a good starting point when considering these issues.

For more on AI and contractual considerations, see the Checklist on Lexis+.


How do you feel about AI?

James’ feelings on AI depended on its use. With regard to lawyers, the role will evolve over time: those who harness AI and use it to their advantage will survive, while those who refuse to evolve with the technology will fall by the wayside. The one thing James thought AI would not deliver, at least for a considerable period of time, is an EQ-led decision and the ability to differentiate good from bad.

Harry agreed that EQ is not a current strength of AI tools, but believed this skill would develop more quickly than anticipated and that the pace of change is generally underestimated. He suggested that such EQ amounts to a qualitative review of the input data over time, and that if an AI tool appears to have enough EQ for the difference to be imperceptible, then that EQ effectively exists, regardless of whether it is genuinely human.


ChatGPT T&Cs

The guest speakers then discussed the terms and conditions of ChatGPT. Put simply, if you are using the publicly available tool, anything you input is essentially theirs. If you are using the business-facing API, OpenAI assures users that it will not use their data or inputs, save for any improvement of its models. It is important to be aware that the practical operation of this is unknown, as OpenAI does not disclose it. As such, many companies have implemented a blanket ban on ChatGPT use; however, Harry considered that this could change if an EU-based secure version were created.

James highlighted challenges surrounding liability and the difficulty of proving that your content has been input into the tool illegitimately: once such data has been input, it is extremely challenging to remove and disassociate it. He also highlighted a case in which OpenAI is being sued in the US in relation to ChatGPT hallucinating content that accused a man of fraud and embezzlement. Based on OpenAI’s terms and US libel law, however, James thought the claim faced severe challenges.

For more content on artificial intelligence, explore our related resources.




About the author:
Ellen is an assistant commissioning editor in the LexisPSL hub. She graduated in International Law from the University of Leeds in 2020 and has been at LexisNexis UK since January 2022. She commissions core content for the Tax, Planning, In-house advisor, Competition and Scottish practice areas and case analysis content for all 36 practice areas. She has a particular interest in competition and intellectual property law and the life sciences sector.