Write on. The KA blog. 

Bookmark us and please pay us a visit now and again to keep up on all the goings-on at Kerwin Associates.
Elspeth White, Deputy General Counsel at Google DeepMind

June 20, 2023

Please share your background and experience in practicing law with a focus on AI. How has your career evolved as this technology has advanced? I’m a former computer scientist who has always been passionate about machine learning. I’ve been lucky to have worked with AI technologies throughout my legal career.

In my current role (Deputy GC at Google DeepMind) I help our researchers work toward their goal of improving AI and using it to advance science and benefit humanity. I lead a team of commercial and IP lawyers who partner closely with our researchers and advise on the various questions and needs that come up in their work. I also lead our Legal Operations team, which keeps our legal team working at maximum efficiency.

I’ve spent over 15 years working with AI in various legal roles: drafting and prosecuting patents in the AI space, both in private practice and in-house at Google; creating and executing comprehensive IP strategies at Google and then at X, Alphabet’s Moonshot Factory; product and commercial counseling for various projects at X; and my work at Google DeepMind today. This combination of experiences helps me counsel clients holistically.

Some of my favorite work has been at the intersection of AI and real-world problems – working with various AI robotics projects at X, like Everyday Robots and Intrinsic, working with X’s Tidal project (sustainable fish farming), and helping DeepMind share its AlphaFold work, which revolutionized protein folding predictions, with the world.

What I love about AI is that, at its heart, it finds and relies on patterns and connections that are always present but not always understandable. When this is working well, it feels almost like magic. Over the years, I’ve seen the AI technology I work with advance from single-purpose classifier systems that could identify the particular things they’d been trained on, to more general classifier systems, to the generative AI and foundation models being developed now.

As the models powering AI have become more capable, the possibilities for real-world impact have multiplied across many spaces – overall human productivity, science, medicine, climate, and more. These technological improvements and their growing potential for positive impact bring increasingly interesting legal and societal questions, and the need to be even more thoughtful and responsible in AI’s development. Legal teams have an immensely important role to play in this: from helping the teams they work with understand the laws, to identifying ethics and responsibility considerations, to helping shape the future laws that will govern work in this space.

What are some potential benefits and challenges that lawyers and law firms should be aware of when using these tools? We’re starting to see tools with functionality that has the potential to make our practices significantly better and more efficient. From a second pair of eyes doing a first pass at document or contract review, to tools that can help with common drafting tasks, there are many ways AI can improve legal practice.

However, it’s important to remember that we’re in the early stages of AI and the tools still have a long way to go. Users of AI systems typically don’t get insights into why the AI is producing the output it is. Because of this, it’s important to be thoughtful about where AI is used, how it was trained, what its limitations are, and where you need to have human oversight before acting on AI outputs.

Just a couple of examples:

  1. AIs are only as good as the data they are trained on. If the training data is biased, inaccurate, or out of date, then the outputs of the AI will likely be too. Previous uses of AI in the legal system – for example, in predicting the likelihood of recidivism – have shown how prevalent and insidious bias can be.
  2. AIs need to be trained on data relevant to what you are asking them to do. For example, an AI trained to identify or draft clauses in a particular US state’s contracts is unlikely to catch the nuances necessary under laws of other jurisdictions.
  3. Generative AIs can produce inaccurate outputs. Generative AIs that output text generally work by predicting the most likely next word in their responses, based on the statistical distribution of words in the data they were trained on – not necessarily the truth of what they are saying. This is true even when they produce significant amounts of detail. We all got a great reminder of this recently, when an AI-generated legal brief made up the cases it cited.
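The next-word idea in point 3 can be illustrated with a toy bigram model – a deliberately simplified sketch, not how production generative systems actually work (those use large neural networks over tokens), but it shows how output mirrors the statistics of the training text rather than any notion of truth. The training text and function names here are hypothetical examples:

```python
from collections import defaultdict, Counter

# Toy illustration (hypothetical data, not a real legal model): a bigram
# model that "predicts" the next word purely from word-pair frequencies
# observed in its training text.
training_text = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the court granted the appeal ."
)

# Count, for each word, how often each possible next word follows it.
bigrams = defaultdict(Counter)
tokens = training_text.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    bigrams[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most common follower of `word`, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

def generate(start, max_words=6):
    """Greedily chain the most likely next word, starting from `start`."""
    words = [start]
    while len(words) < max_words:
        nxt = most_likely_next(words[-1])
        if nxt is None or nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the court granted the court granted"
```

Note that the generated text sounds fluent but loops back on itself and asserts nothing true – the model has no concept of facts, only of which word tends to follow which.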

More generally, as more and more tools are developed, we have a chance to rethink our legal work: where do we spend our time, where can we be more efficient, and what could AI give us space to spend more time on? I hope we also use this as a moment to reimagine what advising looks like and how AI can help us achieve that (with the proper oversight, of course), rather than just rotely swapping manual processes for AI ones.

How can AI tools be used by in-house legal teams ethically and effectively? What are some ways that AI has changed your legal practice? AI tools aren’t new – they’ve been in the backend of various legal tools for a while. Historically, they’ve served more of a classification function – identifying outlier entries in bills, flagging deviations from expected terms in contracts, identifying documents for closer review, and so on – rather than generating new content themselves.

While these classification tools can provide a significant productivity boost and help legal teams focus on the most important things, it’s important to not overly rely on them. Having a human in the loop to take the output from the AI classifier and bring in any other needed context before making a decision is an important part of using these effectively and ethically. These tools are only as good as the data they were trained on and the information they have; they will sometimes make mistakes. Having a human evaluate the outputs is crucial for helping catch those mistakes.

As AI tools become more powerful and move from these classifier systems to generative tools that can create content on behalf of lawyers, the potential productivity boost is even higher – whether it be writing e-mails, producing first drafts of documents, or other tasks.

However, the need for human review is even stronger with these types of tools. This review needs to be independent, thoughtful human review. The AI-generated legal brief I mentioned above is a great example of this. Merely asking the AI whether the cases it cited were real was not enough, as the AI’s response was that they were real (they were not). Anyone who has spent some time arguing with an AI chatbot – I’m sure I’m not the only lawyer who finds this an amusing way to spend some time – knows that their answers aren’t always grounded in logic or truth.

What excites you the most about the future of AI? Any general thoughts you would like to share? I’m incredibly excited to see the breakthroughs that AI is going to bring to society in the coming years. We’ve already seen such amazing things – as one example near and dear to my heart, our AlphaFold system solved a 50-year-old grand challenge in biology. The potential impacts of that solution in biology, medicine, sustainability, and beyond are just beginning to be explored.

One of the things I love the most about AI is the wide breadth of problems it can be applied to – anything with complex underlying patterns and connections is fair game, and as the underlying models become more advanced and applied to other longstanding problems, I cannot wait to see the beneficial impact on the world.

Of course, for this technology to have the positive impact we want it to, it’s critical to develop it thoughtfully, and so this also means that it is an important time for lawyers to advise on not just the legal issues surrounding the technology but also ethical and responsibility questions.
