The Impact and Value of AI for IP and the Courts – a speech by Lord Justice Birss
The Rt Hon. Lord Justice Birss, Deputy Head of Civil Justice, delivered a speech at the Life Sciences Patent Network European Conference in London on 3 December 2024.
The speech, entitled The impact and value of AI for IP and the courts, can be read below.
- Good evening and thank you for inviting me to the Life Sciences Patent Network European Conference. I'm here to consider with you the impact and value of AI for intellectual property and the courts.
- It is extraordinary to see how much has happened in relation to artificial intelligence since ChatGPT burst onto the world stage about two years ago. Of course, AI systems of one sort or another have been around for a long time, and in the scientific field which this conference is about many of you will, I imagine, be familiar at least with proposals to use AI in all kinds of ways. The development of the AlphaFold 2 system by DeepMind, released in 2020, which predicted protein folding better than any other technique at the time, will not have escaped your notice as people in the life sciences.
- But before I get into detail, I would like to take a step back first to put AI into a bit of context, in order then to look into the future.
Defining artificial intelligence itself is not an easy task.
- Many definitions I have seen recently are, I suggest, rather inadequate. I recently led a UK delegation to the G7 Justice meeting on Justice in the age of AI. At that event one of the definitions I heard was that the term AI refers to machines which can do tasks done by people. However, the problem with that definition is it depends on when you're asking the question.
- One of my favourite stories relates to Annie Jump Cannon. She worked at Harvard at the beginning of the 20th century. She and her colleagues did what we now call astrophysics. In fact, Annie Jump Cannon is responsible for the spectral classification of stars which we still use today. She was a member of a group known at the time as the Harvard computers. Their job was also described as working as calculators. Of course, the point is that at that time computation and calculation were tasks performed by people.
- I am old enough to remember a time at school when there were no computers and indeed no pocket calculators. In fact, pocket calculators came to my school a year or two before my O levels. The education system was in uproar in relation to maths and arithmetic about how to deal with these devices and whether they were acceptable to be used in exams. I did an exam in which calculators were not permitted, and another exam in which they were, in the same year – 1980. So, to describe artificial intelligence simply as a machine which can do a task done by people does not, I suggest, help very much.
- Really what that definition is referring to is that AI systems seem now to be capable of tasks which many people perhaps thought could never be done by machines. But while that may reflect where the definition comes from, it does not help either.
- Today, when we refer to AI in the context of a discussion like this, we mean machine learning systems which are capable of assimilating very large quantities of data. They build multi-dimensional models based on the characteristics of the data, in order to make probabilistic predictions. The AlphaFold system's ability to model protein folding seemed extraordinary in 2020, at least to those who had been labouring in molecular biology for many years. Much the same is true of the ability of large language models today to produce effects which create a facsimile of an understanding of human concepts, merely by predicting the next likely word in a sequence, in a given context – which is essentially what ChatGPT does.
AI has potentially transformative implications both for justice itself, and in intellectual property, and I will take them in that order.
- Large language models appear to be able to produce high quality summaries of significant volumes of text. If we could harness that in the justice system one could imagine, for example, a system in which most sets of papers which were to go before a judge to be pre-read before a hearing would arrive with a 1-page summary of what the case was about. This would be in order to assist the judge and speed up their ability to prepare the case.
- We already have non-AI systems which do this – we call them judicial assistants, in the Court of Appeal. These young lawyers prepare summaries of all our cases to help us. They benefit from the experience of working in the Court of Appeal for a year. The court system as a whole could not afford to provide that kind of assistance to the judges who make up the majority of the judiciary in England and Wales – the district judges, circuit judges, deputy DJs and recorders – but the fact that we are prepared to do this in the Court of Appeal indicates that we think a summary of that sort has a value in helping the preparation task of judges.
- It is therefore not difficult to imagine a possible use case for the text summarisation ability of AI of this kind in the justice system. The AI is not there making the decision. The judge would still read into the case in the normal way and then hear the parties, but their reading-in will be more efficient than it would otherwise have been.
- There are other tasks of that kind where AI may have utility. We know for example that technology assisted review (TAR) is already practised in civil justice and has been authorised by courts for some years. These TAR systems were based on machine learning techniques. Senior lawyers in the team would identify relevant documents so that the machine could learn the difference between relevant and irrelevant material. Then the TAR system would be applied to the enormous store of documents in a case. It would identify, for a given degree of probability, which documents were relevant and should be disclosed and which were not.
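The TAR process described above – lawyers labelling a seed set, a model learning the difference, and the model then scoring the wider document store by probability of relevance – can be sketched in miniature. This is purely illustrative: the naive Bayes classifier, the seed documents and the example texts are all invented for the example, and real TAR systems are far more sophisticated.

```python
# Toy sketch of the machine-learning idea behind technology assisted review
# (TAR): learn word statistics from lawyer-labelled documents, then score
# unseen documents by probability of relevance.
import math
from collections import Counter

def train(labelled):
    """labelled: list of (text, is_relevant) pairs reviewed by senior lawyers."""
    counts = {True: Counter(), False: Counter()}  # word counts per class
    docs = {True: 0, False: 0}                    # document counts per class
    for text, label in labelled:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def p_relevant(text, model):
    """Naive Bayes probability that a document is relevant."""
    counts, docs = model
    vocab = set(counts[True]) | set(counts[False])
    score = {}
    for label in (True, False):
        total = sum(counts[label].values())
        # log prior plus log likelihood with add-one smoothing
        s = math.log(docs[label] / (docs[True] + docs[False]))
        for w in text.lower().split():
            s += math.log((counts[label][w] + 1) / (total + len(vocab)))
        score[label] = s
    # convert the two log scores into a probability of relevance
    m = max(score.values())
    exp = {k: math.exp(v - m) for k, v in score.items()}
    return exp[True] / (exp[True] + exp[False])

# Invented seed set standing in for the lawyers' initial review
seed = [
    ("licence agreement royalty payment", True),
    ("patent licence dispute correspondence", True),
    ("canteen menu tuesday", False),
    ("office party invitation", False),
]
model = train(seed)
for doc in ["draft licence royalty schedule", "staff canteen rota"]:
    print(doc, round(p_relevant(doc, model), 2))
```

In a real system the scores would be compared against a threshold agreed for the disclosure exercise, so that documents above a given probability of relevance are reviewed and disclosed while the rest are set aside.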
- Looking further into the future, one could imagine that AI may very well be able to assimilate much larger quantities of data than a normal human judge. One could then be faced with the situation in which an AI system might be a better decision maker than a human being in those circumstances. I should make clear that I do not believe we're anywhere near that yet, but from what I read in the literature about the capabilities of AI, I would not like to bet against the idea that this capability will arrive in the not-too-distant future.
- The question will then become an important ethical and human rights based one – in which we need to decide whether there are decisions we are prepared to have made by AI, and which decisions should remain the preserve of human beings. One could imagine for example that a decision relating to children and whether someone had committed a crime might be one where we wish to maintain human decision making. On the other hand, one might imagine that a large number of small money claims or some other similar kinds of case, might be more efficiently done by AI, in the first instance. There could then be a right of appeal to human judges after the event.
- The question whether a decision which could be made by AI should be, will be determined by ethics and human rights considerations, as I have said. At the moment, the capability is not there, but I rather think that might change in future.
- Of course, the question of bias in the training of large language models is important, particularly when one is thinking about using AI for decision making. However, that issue might be addressed by appropriate training data and by monitoring. Another issue is whether the decisions of an artificial intelligence are explainable – however I have seen literature in this area which suggests that it may be possible in future to explain why an AI system has reached the decision it has.
- Even further – what about two AIs? One is designed to analyse material and make a decision. Another, different AI is designed to produce written judgments setting out cogent reasons for a given conclusion from a given set of data. In such a case, if the reasoning was cogent, would we tolerate that?
- At this point I would like to mention the guidance we have issued in the judiciary of England and Wales relating to AI. The two important things to note are, first, to beware of entering private data into public AI systems. Indeed no one should do that. And second, to be clear that the person who is producing a document takes full responsibility for its contents. A lawyer producing a document to court which contains hallucinated case citations only has themselves to blame.
- And finally, I'd like to mention Garfield. Garfield is very new. It appeared on the Internet a couple of weeks ago. It is in effect an AI law firm, in which a single solicitor is in overall charge of what this Garfield system does. It interacts with a litigant using natural language, spoken or written, and guides them through the process of bringing a debt claim in the county court. It prepares the Claim Form and drafts the Particulars of Claim for them. It will help them comply with the pre-action protocol and, assuming the litigant wishes to bring proceedings, it will file the Claim Form and Particulars automatically at court, using the API system already available to bring claims electronically. If the defendant responds by e-mail, Garfield will manage that process. It can advise on the implications of what the defendant says. If the claim is not defended, it will obtain default judgment for you. Interestingly Garfield has insurance, and the startup behind it is in close contact with the Solicitors Regulation Authority about regulation.
- It is an example of what I have referred to as the democratising effect of AI in relation to access to justice.
- It is notable for example that ChatGPT, while of course inaccurate in many important ways, is still a better legal advisor for an individual who knows nothing about the law and is looking for help on the internet, than a Google search.
AI in the context of intellectual property, particularly patents.
- There are two questions:
(i) What are the issues raised by attempts to patent methods which involve AI?
(ii) What are the issues raised by the use of AI in the process of inventing things?
- Each topic involves a case in which I have been involved as a judge, so I will not make any comment on them which would be inappropriate.
- Taking (i) first – attempts to patent methods which involve AI.
- A recent case in the Court of Appeal is the Emotional Perception case, in which the court decided that AI machines are subject to the same law on the patentability of computer software as other, what one might call, conventional computers. I don't know if that decision is going to the Supreme Court, so I do not propose to say anything else about it.
- There are other issues. For example, say you have built an AI system which does something new and useful – perhaps it controls drug dosing in patients. If you apply for a patent on it, will there be a need to disclose training data in that application? Or if not is there a risk of insufficiency?
- My answer to this is – remember "the bugs". In the 20th century we solved a related problem in the context of products made in culture by organisms like bacteria and cell lines, which were difficult to reproduce. A deposit system for strains and clones was set up under the Budapest Treaty. Maybe we need a similar AI deposit system for data or even algorithms?
- Turning to topic (ii) – using AI to invent.
- The relevant case is the Thaler case, which has now been decided by the Supreme Court. It stands as authority for the proposition that in the UK a machine cannot be an inventor of a patentable invention. This is a result which most courts and patent offices around the world appear to have arrived at from consideration of their laws.
- One of the potential implications of Thaler, which I think is relevant in the life sciences area, is to work out where the boundary is between work done by a human being who would count as an inventor, such that the patent could be applied for and granted, and work done by a machine, which would not. Although some have suggested that Thaler will not produce a difficulty of this kind, because there will always be a human involved somewhere, I must say I remain sceptical that it is that simple. For example, last year I visited Berkeley to discuss current issues in IP law under the auspices of WIPO. At the event we were shown a demonstration of an entire automatic materials development lab, set up to discover new materials, using a combination of artificial intelligence and robotics. It is quite difficult to see which human being could credibly be called the inventor of any of the new materials that such an automated laboratory might produce. It happened to be making and testing new materials as I have said, but it could just as easily have been a life sciences laboratory of some kind, and one would be faced with the same difficulty.
- One thing I think we can say is that the law is now clear that, unless a human can be credibly called an inventor, the resulting invention is not patentable. In our careers we have all seen cases in which the manager of the laboratory was named as one of the inventors on a patent. I must say I've always doubted that they really were an inventor; it was done out of politeness, and it just didn't matter. It might now matter a great deal.
- There has been speculation that AIs will generate an effectively infinite list of prior art, to prevent anyone in future from patenting anything. However, at least without seeing an example, I find it difficult to get very excited about that. Pharmaceutical patent law has been able to grapple with the situation in which a genuinely useful compound – with genuinely unexpected properties – turns out to be buried somewhere inside a giant Markush formula in the prior art.
- Another issue is AI and obviousness. There are two issues. The first is its use as a tool. I do not see any conceptual problem with deciding that the person skilled in the art is someone using an AI machine. There might be difficult problems of proof about what would or would not happen at any given time but – while not belittling them – I do not regard those as problems with the law itself.
- The second issue is whether AI will make everything obvious. My difficulty with that is that if it is really true – then how marvellous! Can we expect a plethora of new drugs within six months? There is no sign of that.
- However, I suppose there could be a difficulty if the existence of AI systems means that, in effect, all new medicines have become technically obvious today – even though it is just not commercially obvious to take a given product forward until 2030. If that were true, it would not allow the patent applied for in 2030 to satisfy the current law. If this really did happen then one might very well have to re-examine the patent system as an incentive to invest in innovation and thereby disseminate human knowledge.
- So, I will end there – there is a lot going on and a lot to think about – but I think we can agree that AI is going to lead to many interesting developments, challenges and changes. We need to get it right, but if we do then there is every prospect it will be good for society as a whole.
Thank you.
URL: http://www.bailii.org/uk/other/speeches/2024/WNYFO.html