Speech by the Master of the Rolls to The Professional Negligence Bar Association - The Master of the Rolls delivered the 25th memorial address in honour of Lord Peter Taylor of Gosforth [2024] UKSpeech O7RCQ (22 May 2024)
Speech by the Master of the Rolls to The Professional Negligence Bar Association
25th memorial address in honour of Lord Peter Taylor of Gosforth
The Honourable Society of Lincoln’s Inn
Wednesday 22 May 2024
Damned if you do and damned if you don’t: is using AI a brave new world for professional negligence?
The Right Honourable Sir Geoffrey Vos
Introduction
- Many thanks to Victoria Woodbridge for inviting me to deliver this 25th address in honour of Lord Taylor of Gosforth, former Lord Chief Justice of England and Wales, whom we all remember fondly. He died far too young in April 1997.
- I have started several recent lectures by reiterating the need for all lawyers and judges to get to grips with emerging technologies in general and AI in particular. AI is changing every profession and every industry fundamentally, not just the law. For professional negligence lawyers, this is indeed a watershed moment.
- Professional negligence lawyers may be at the epicentre of the brave new world I have referred to in my title.
- As we work out what artificial intelligence should, and what artificial intelligence should not, be used to achieve for individuals and businesses in societies across the world, people will quickly move on to consider liability. That liability will likely arise as much (a) in relation to AI having been used, as (b) in relation to AI not having been used, in any particular situation.
- To give a simple example: AI can already help diagnose whether a skin lesion is cancerous, and doctors may be as liable for using an available AI tool wrongly as they might be for not using it at all. I will return to this in a moment. I use a medical example because lawyers can be more dispassionate about other professionals. They tend, in my experience, to be more prescriptive, perhaps even didactic, when talking about the use of AI within their own discipline.
- There are, I think, two common schools of thought.
The Pros and the Antis
- The first school of thought complains that AI is dangerous, that it hallucinates, that it is prone to bias and that it can create inaccuracies that are particularly hazardous for the legal sector. In short, in the legal space, they say, the public needs legal advice and legal decisions from human lawyers and human judges in whom they can have complete confidence. This theme leads to the conclusion that, if lawyers and judges are ever to use AI, they must do so only in the most regulated of circumstances and with the greatest care and circumspection. Since AI can be used for fraudulent fakery, it must be shunned. This same approach can be applied equally to doctors, accountants, architects, engineers, actuaries and almost every other conceivable professional practice and business, but, as I say, lawyers tend to be more concerned to single their own profession out for special treatment.
- The second school of thought is quite different. It contends that clients will, sooner rather than later, become unwilling to pay for legal tasks to be performed by a human lawyer, when that same task can be done better, quicker and much more cheaply by an AI. LLMs are about to become far more reliable. They will be integrated with database technology so as to reduce hallucination, inaccuracy and bias. AI can process large datasets, summarise legal materials, undertake legal research and resolve complex problems far more effectively than human lawyers and judges. Whilst there are some inter-personal tasks that humans will still need to undertake, the grunt work will very soon be done by machines. Again, these arguments apply to every other professional practice and business in analogous ways.
- What I hope to do in this lecture is to compare these two perspectives and see whether there is a median position. Along the way, I shall try to explain the title I have chosen. For sure, I conclude, there will be problems if you adopt AI hook, line and sinker (damned if you do). And for sure there will be equally serious problems if you shun or ignore AI (damned if you don’t). But you may not have thought how professional negligence lawyers will be at the epicentre of the issues thrown up by: (a) the ever-increasing intelligence of machines, and (b) the ever-increasing number of purposes for which machines can be used, and for which consumers and businesses will want, even require, AI to be used.
- What professional negligence lawyers do that is special in this context is to look at the law surrounding how other people do their jobs. AI will have a profound effect on how other people do their jobs.
- May I start with two points about current developments in AI technology.
Current developments
- First, AI is improving and changing very rapidly indeed. GPT-4, even GPT-4o, will, it seems, within 18 months, be superseded by GPT-5, which will have surprising new capabilities that are expected to be game-changing. Artificial general intelligence is not far away. I have heard that multiple new kinds of AI are being developed that will take the machine’s capabilities way beyond the LLM. Machines will obviously never operate in the same way as the human brain, but we will need to adjust fast to their capabilities. There is plainly a danger that lawyers and judges will fail to understand and respond to the seismic changes that are occurring in what machines can do and will be doing.
- The second point is this. Real intelligence probably requires knowledge, reasoning and communication. AI has access to a large volume of data and knowledge and has good, perhaps great, communication skills. Its weakness at the moment is in the field of human reasoning. Its even greater weakness is in the application of that intelligence in the field of empathy. It has been said, and it is important to understand this, that machines cannot suffer or feel pain; they cannot cry or laugh. They have no conscience. Many aspects of the rule of law and the concept of justice itself depend upon empathy, conscience, even guilt, and our reactions to suffering. It is, therefore, incredibly important as AI is increasingly embraced and adopted that we, as lawyers, understand what it does well and what it does less well. That understanding will guide us towards the maintenance of human values in an ever more technologically enabled legal environment.
Four Questions
- With that introduction, I want to examine:
(1) whether it is realistic to say that AI is too dangerous to be used by lawyers and judges,
(2) where AI is likely to become part of the everyday advisory and dispute resolution armoury and where it is not,
(3) what we can do to take advantage of the best and to protect ourselves from the worst of AI, and
(4) how the work of liability lawyers and judges will look once the “machine age” is further advanced.
Is it realistic to say that AI is too dangerous to be used by lawyers and judges?
- The short answer to this question is “no”. And the same applies to most, if not all, professionals and businesses. But, that said, it will be necessary for all lawyers, very quickly I think, to learn a great deal more about the dangers that AI poses and the advantages it offers.
- It is obviously very worrying for the music industry and intellectual property owners that programmes like Suno can produce, in seconds, excellent songs, with excellent lyrics, on any subject, in any genre. It is equally concerning that damaging deep fake images can be produced using readily available programmes without great expertise. And fraudsters already use AI to deceive and defraud. There are some even more concerning uses of AI affecting privacy, military applications and the democratic process itself. In the law, we worry about whether judges may be replaced by automated decision-makers.
- It is most likely that we will never be able to eradicate the risks I have mentioned, and probably many more besides. But I do not think that we should use them as excuses either for refusing to learn all we can about AI and other emerging technologies or for refusing to embrace those technologies for what they can do to help people in general, and the citizens who need legal services and dispute resolution in particular.
- Instead, therefore, of suggesting that we can stop AI being used, when we cannot, lawyers need to use their powers of human reasoning and empathy to ensure that two things occur. First, we need to ensure that lawyers are educated about the risks that AI poses. Secondly, we need to ensure that lawyers are trained to know how to use and how not to use AI, and how to protect clients, businesses and citizens from those who will inevitably try to use AI for malign purposes. Yet again, this applies as much to other professionals and businesses as it does to the legal profession.
- There is, I think, a genuine risk, bearing in mind the speed at which these technologies are developing, that lawyers and judges will move too slowly to understand and respond to AI and its effects. The school of thought that pretends that it is too dangerous for lawyers and judges to get involved is a real problem. Only if we do get involved, and educate ourselves fully, can we be best placed to serve the public in the future AI-enabled world.
- There is a further reason why it is unrealistic to suggest that it is too dangerous to use AI. That is economic reality. The clients of lawyers and all other professionals and businesses will, as I have already intimated, not ultimately be prepared to pay for what can be done without significant charge by a machine. Though I have stated that principle in an unqualified way, it is in fact a principle which has limits.
- You will recall the Law Commission’s excellent reports, a few years ago now, about the liabilities that will arise from the use of automated vehicles. Reading those reports then, and seeing the Automated Vehicles Act 2024, which received Royal Assent on Monday 20 May 2024, might have led you to believe that, within months, our roads would be flooded with self-driving vehicles. Not so; not at the moment, anyway. That may be partly because there are some machines about which people are actually very hesitant. Only yesterday, The Times reported a YouGov survey finding that more than two-thirds of people would feel unsafe in an automated driverless vehicle. People will have confidence in some automated processes and not others. The trick for the future will be working out which is which. I will return to this point.
- The second limitation on the principle that people will not pay for services that a machine can provide for free is that legal advice and decision-making concerning the areas that most closely reflect our humanity and empathy will likely be the last to be overtaken by AI. Parents are likely to need human lawyers to advise about care proceedings, and criminal sentencing is likely to be a human activity, for many years to come. Indeed, lawyers will always, I think, be needed to explain the legal position to clients, even if the advice and decision-making is undertaken or assisted by machines.
- But subject to those caveats, I cannot see individuals and businesses accepting lawyers charging, for example, for armies of para-legals and assistant solicitors to check IPO documentation that a machine can check for nothing. I cannot see clients paying large sums for manual legal research to be undertaken when specialist AI-driven research tools exist (as some do already).
- So, we need to ask ourselves where machine lawyering will become commonplace and where it will not. We need to try to make sure that we understand what machines can do and, as I have said, which automated legal processes clients will trust and which automated processes they will not trust. Again, this is all about human confidence in technology.
- Only by understanding these things will we be able to adapt the training for our lawyers and other professionals, so as to prepare them for the work they will actually have to do in the future.
Where is AI likely to become part of the everyday advisory and dispute resolution armoury and where will it not?
- Under this heading, I will descend to some greater particularity about the near term. There are many things that AI can already do more quickly and cheaply than lawyers. It can draft contracts, summarise large datasets, find and summarise authorities from large databases, predict court outcomes, and draft legal advice, legal submissions and even judicial decisions. The fact that it can do these things does not mean it should be used to do them.
- The judiciary moved quickly after the advent of GPT-3.5 to issue simple guidance for judges as to the use of AI. The three principles are equally applicable to lawyers and other professionals.
- First, judges and lawyers need to understand what generative AI does and what it does not do. Secondly, judges and lawyers must not feed confidential information into public LLMs, because, when they do, that information becomes theoretically available to all the world. Thirdly, when judges and lawyers do use an LLM to summarise information or to draft something or for any other purpose, they must check the responses before using them for any purpose, because they, not ChatGPT, are responsible for their work product.
- Here we must lift our eyes from the relatively unsophisticated LLMs of today towards the machines of, say, two years hence. Those machines will, as I have said, integrate LLMs with databases and be far less likely to hallucinate or be obviously prone to bias. Indeed, GPT-4o already browses the web for you when you ask it something about post-2022 events. Lawyers will need to use these AIs for what they do best in the interests of their clients. If they don’t, many clients will likely go to those that do. But individuals and businesses will still need the work product of machines explained to them. They will still need advice about what should be done by a machine and what should not. That is why I have frequently said that predictions about the end of lawyers are much exaggerated.
- But I do think there is a further problem on the mid-term horizon. Machines are likely in future to have capabilities that make it hard and expensive, if not actually impossible, for humans to check what they have done. This is where professionals need to begin to develop systems to make sure that humans can be assured that what machines have done is reliable and usable, as opposed to dangerous and unreliable. In the law, we will need to explore how the product of a machine can be effectively challenged.
- I will give you one current example. I asked GPT-4o to summarise what the UKSC had held in a case I knew well. It did a far better job than its predecessors had done. But we will all need to know when the stage has been reached at which the product of AI in general – or of specific legally trained AIs – can actually be trusted, so that we no longer need to do the job it has done all over again from scratch. I emphasise that that stage has not yet been reached.
What can we do to take advantage of the best and to protect ourselves from the worst of AI?
- There is only one answer to this question. Lawyers and judges need to keep in mind the fundamentals of legal practice and the justice system. Those fundamentals are the interests of justice and the rule of law.
- We should, therefore, embrace AIs that improve access to justice and provide accessible legal services for those otherwise unable to obtain them. We should embrace AIs that speed up the legal process. We should shun the luddites who want to continue to use manual processes at greater cost to their clients when an AI can do the same job quicker and cheaper – there are many such examples.
- But we should, I think, and as I have explained, be cautious about the output of any AI that a human cannot evaluate. For this, education and training are again central. If we cannot evaluate the quality of the AI’s output, we will not be able to work out whether its work product is something we should adopt or not. This will require massive effort from lawyers, regulators, rule-making bodies and Government. The task has hardly started. One might note in passing that the Council of the EU actually passed the AI Act into law yesterday. The EU’s “flagship legislation” claims to follow a risk-based approach, which means that the higher the risk that the AI will cause harm to society, the stricter will be the rules.
- So, as I have been saying, the real challenge is going to be evaluating the reliability of an AI’s work product. If an AI can produce legal advice that is, say, 98% reliable, that might compete favourably with the best of lawyers. But how can we know? By what parameters will we determine that a professional is using all due professional skill, care and diligence when they use an AI that is, say, 99% accurate, but not when using one that is, say, 95% accurate? And, of course, accuracy cannot in any event be gauged on a linear scale. This may become a whole new science in itself.
- I believe that, if judges and lawyers continue to be driven by their commitment to the delivery of justice and the rule of law, they should not go far wrong. They will need to learn and to embrace change, but ultimately, we may still hope that the changes will be beneficial. I know that the Government has recently published Professor Yoshua Bengio’s “International Scientific Report on the Safety of Advanced AI”. That report focuses on the risks and the speed of transition. In this respect, I would hope that the legal profession can be part of the solution. It is comforting, but perhaps not enough, for the world’s 16 leading AI companies to pledge, as they did yesterday, that they will not develop or deploy any AI system that poses an extreme risk to humanity.
- Against that background, I want to turn to explain why I started by saying that professional negligence lawyers, of all professionals, may turn out to be at the centre of AI-related issues in what I have called the “machine age”. Perhaps it is already obvious.
How will the work of liability lawyers and judges look once the “machine age” is further advanced?
- To recapitulate a little, I am fairly clear that the legal community must adopt a median position between the first and second schools of thought that I described earlier. There are certainly dangers in AI, but we will not avoid them by ignoring technology, as some in the legal sector seem keen to do. Our clear obligation is to follow the path of justice and the rule of law: to educate lawyers young and old about the risks that AI poses, and to ensure that all lawyers are trained to know how to use and how not to use AI.
- But what about the special position of professional negligence lawyers? When I first mentioned that there was as big a problem, in legal terms, about there being liability for a failure to use AI as there was for the misuse of AI, I was greeted with some surprise. But this proposition still seems obvious to me. As AI tools become increasingly capable, increasingly specialist and increasingly available to every consumer, every business and every professional, the problem of being damned if you do and damned if you don’t will become a widespread issue.
- Professionals and others providing specialist services are generally liable, under the law of negligence, if they fail to exercise reasonable skill, care and diligence in performing their professional duties. They are expected to adopt widely recognised practices and procedures. The time will surely come, in every professional field, when those widely recognised practices and procedures will include the use of AI.
- One can think of many examples. The doctor who refuses to use an available AI tool to diagnose cancer is one. An accountant who fails to use an AI tool to check for fraud in a company’s books when undertaking an audit may provide another. Then what about an employer who shuns the use of AI to ensure that all reasonable safety precautions are taken to protect its workers? Similar examples can be imagined in every professional sector, and in every business, consumer and financial sector too.
- It seems to me that this prospect puts professional negligence lawyers, and perhaps tort lawyers in general, in a peculiarly interesting position. There will, of course, be many claims arising from the alleged negligent use of AI. I guess that lawyers will be working on difficult questions of the attribution of responsibility for losses caused by errors made by AI tools. At the same time, they will undoubtedly be faced with claims by those who suffer loss when a human, rather than a machine, advises them as to an investment or a financial decision, a medical diagnosis or taking a medication, building a bridge or wiring a power station, or so many other possible things.
- Now is the time, perhaps, to look even further ahead. The advancement of technology may force us to inquire as to the essential nature of a profession. If an AI can perform a surgical operation, design a building, or audit a set of accounts, can an AI ever actually become a professional or a member of one of our sacred professional associations? The law provides the social foundation for all our societies. When AIs are quicker and cleverer than humans, we will need to re-evaluate the infrastructure that the law provides for the delivery of advice and professional services. It will not be easy. But I would revert once again to my mantra. We must be guided by human values, justice and the preservation of a rules-based environment.
Conclusions
- You can immediately see that emerging technologies, AI and the even more remarkable tools that are in the pipeline, will change our lives as lawyers and judges. We will all need to understand clearly where the human adds value. I am sure that humans will always be needed to explain legal situations and solutions to human clients, and to try to make sure that automated legal work product accords with what a human would expect to be the outcome. I have hardly scratched the surface of automated decision-making. Perhaps I will leave that for another day.
- One thing is certain. The rule of law is a human concept. It is something that we will need to be vigilant to protect in our new automated legal environment.
- As I have said, training will be imperative. Ignorance will not be an option.
- There will, I expect, be whole new chapters in the books, if not whole new books (digital, of course), about (a) what is a generally available AI tool, (b) when using reasonable skill, care and diligence requires the use of such tools and when it does not, and (c) when an AI tool or process has been shown to be sufficiently reliable to make it equal to or better than the work of human experts or professionals.
- This may all happen far more quickly than many expect. Though, as I have already said, things never develop quite as quickly as imagined, because technological uptake depends on the confidence that humans have in those technologies. Humans seem to have huge faith in the AI incorporated within their computers and mobile devices, but maybe less faith and confidence in sitting in the back of a self-driving vehicle.
- Predicting how things will turn out may prove far more difficult than we think. But I guess that one thing is for sure. Even if the professionals will be damned if they do use AI and damned if they don’t use AI, professional negligence lawyers will be in great demand – whatever happens.
- I will happily answer any questions you may have.
URL: http://www.bailii.org/uk/other/speeches/2024/O7RCQ.html