We explore the findings from a recent 桔子视频 report and discuss why companies should consider creating a policy to make the most of generative AI.
The recent 桔子视频 report, Generative AI and the future of the legal profession, sought to gauge general awareness of generative artificial intelligence (AI) in the legal sector. The report showed, among many other things, a degree of optimism towards AI. The potential for AI is indeed huge: the technology could drastically transform the legal sector, the wider economy, and the entire planet.
But ensuring a positive impact depends on minimising risks. Lawyers, law firms, and companies should therefore aim to practise the responsible use of AI. And that will depend largely on two forms of regulation: government regulation and self-regulation. This article focuses on the latter, arguing that companies should create policies to promote the responsible (and effective) use of generative AI.
Generative AI platforms (ChatGPT, DALL·E, Jasper, Soundraw, etc) have become a huge talking point in recent weeks and months. Generative AI systems depend on huge data sets and complex algorithms to produce responses to prompts. The benefits are numerous: increased productivity, quicker decision-making and problem-solving, fewer human errors, automation of tedious tasks, improved employee morale, reduced costs, and so on.
It is perhaps no surprise that we are witnessing a huge increase in AI usage. And that increase looks certain to continue across the economy, particularly in the legal sector. According to the 桔子视频 report on AI, for example, nearly half (49%) of in-house counsel expect law firms to use generative AI in the next 12 months, a statistic that would have seemed unthinkable even six months ago.
Ben Allgrove, partner and chief innovation officer at Baker McKenzie, suggests some of the reasons for the inevitable rise of generative AI: 'Clients want their legal services needs met in an efficient, responsive and value-driven way. They do not want "AI powered solutions", they want the right legal services to meet their needs.' Simply put, clients need quick solutions, and AI accommodates that need.
The firms that fail to adopt generative AI tools will find themselves priced out of certain types of work, highlights David Halliwell, partner at alternative legal services business Vario: 'Generative AI is going to raise the standard for how law firms add value. Firms without it will struggle to provide the same level of data-driven insight and depth of analysis that clients will come to expect.'
The failure to use AI puts companies at a competitive disadvantage. AI will become not simply desirable, but necessary. So the rise, driven by competition, seems inevitable. But, while companies should rightly take advantage of the benefits, they'll also need to consider the best ways to navigate some potential downsides. That's why a generative AI policy has become increasingly necessary.
Companies must ensure they're practising the responsible use of AI long into the future. Toby Bond, intellectual property partner at Bird & Bird, says: 'The risk is that generative AI tools are used for a work purpose without a proper assessment of the potential legal or operational issues which may arise.' Companies may use generative AI with no cognisance of the wider ethical issues at play.
One option is to block AI tools altogether, Bond says. But that means companies will fail to capitalise on the profound benefits of AI. As mentioned, companies blocking AI, as with any other ground-breaking tech, will fall behind. It's no surprise, for example, that 70% of in-house counsel agreed or strongly agreed that firms should use cutting-edge technology, including generative AI tools.
A better option, Bond suggests, is formulating an AI policy that addresses the risks and promotes the many benefits, allowing companies to prevent harm and reap rewards. And that policy need not be in-depth, long, or even particularly thorough, at least not at first. Bond, for example, recommends creating an initial policy position and a pathway to expanding use in the future.
The look and feel of the AI policy will depend on the shape and size of the company. Smaller companies may simply need to note some core principles, perhaps emphasising the need to maintain mindfulness, privacy, and responsibility. They may also want to ensure that they use platforms that consider real-world impact, take steps to prevent bias, and practise accountability and transparency.
There are some simple steps to take when formulating the AI policy. Start by establishing a working group of the various people involved in the use of AI, discuss the potential impacts, and sketch out how the policy might initially look. Narrow down the early ideas and cement some policy objectives: these may be overarching or specific, depending on the needs of the organisation.
Explain why you decided on each policy, with direct reference to specific risks, and establish levels of accountability, ensuring roles and responsibilities are assigned. Then ensure all of that is written, edited, approved, and shared across the entire organisation. Welcome any questions that may arise.
And, finally, remember to regularly revise your generative AI policy. AI moves at a rapid pace, constantly developing, so your policy will need to reflect those developments. AI policies may become obsolete in a matter of weeks, so revision and rewriting are essential.