Write on. The KA blog. 

Bookmark us and please pay us a visit now and again to keep up on all the goings-on at Kerwin Associates.
Janel Thamkul, Deputy General Counsel at Anthropic

June 20, 2023

Please share your background and experience in practicing law with a focus on AI. How has your career evolved as this technology has advanced?

I've been counseling on AI and machine learning research, development, and product deployment for over half a decade. I led the product counsel team at Google supporting Google AI Research and am currently Deputy General Counsel at Anthropic, one of the frontier labs developing large language models, but one with a core focus on AI safety. The metaphor I'd use to describe the evolution of my career in this space is that of a ripple. At the center, and what drew me to AI before it was popular, is my passion for navigating uncharted legal territory. Over time, my scope and skill set have expanded beyond pure product counseling, but my love of emerging technologies remains at the core.

What are some potential benefits and challenges that lawyers and law firms should be aware of when using these tools?

I think the limitations of generative AI are pretty well known by now, but certainly, these models are subject to hallucinations and can output incorrect information, so people need to exercise caution and judgment in how and when they use the technology. It can be an incredible assistive tool that enhances productivity, but it isn't a free pass to stop exercising critical judgment and adhering to professional responsibilities.

How can AI tools be used by in-house legal teams ethically and effectively? What are some ways that AI has changed your legal practice?

First, assuming your team is using a third-party service, you need to understand how the provider is using the data you submit to its services. We've seen enough headlines about employees submitting sensitive or proprietary company information to third-party services that then reproduce that information to other users because the service trains its models on user inputs. Legal teams need to vet the service provider and product terms carefully to ensure the protection of confidential company and client information. Second, as I mentioned above, teams should have some baseline understanding of how the technology works and what its limitations are. At least for now, these tools should enhance your workflows, not replace the need for legal expertise and judgment. In other words, maintain a "human-in-the-loop" for quality assurance. This technology is exciting but still relatively new in terms of the broad consumer use we're seeing today, and there may be gaps we're not aware of yet. Finally, you should think about whether to disclose when or how you've used AI in your practice. I believe that some ethics boards have issued guidance requiring counsel to disclose to clients when they've used AI. Of course, this doesn't make sense in all cases, but it's something to keep in mind depending on how you use these tools.

I've used Claude, Anthropic's large language model, to help me organize my thoughts more precisely or revise the tone of something I'm drafting, which allows me to spend my time focusing on more strategic, higher-leverage issues. Some of my colleagues use Claude to help with summarizing news articles and research publications as well. There are so many productivity use cases I haven't tried!

What excites you the most about the future of AI? Any general thoughts you would like to share?

I think AI will transform the way we work, communicate, create, and spend our time, and I'm excited by the potential for AI to democratize access to language and communication, technology, and more. My parents were immigrants who became entrepreneurs in the United States. I know they struggled to navigate an unfamiliar culture and language. Imagine if AI could have helped them write a business proposal in grammatically correct English or understand American cultural concepts: the opportunities that could have been unlocked by this technology! Similarly, my professional interests have always been in the humanities (art, fashion, law). To think that I could translate an idea, expressed in natural language, into code and develop my own applications is something I never imagined I'd be able to do without going back to school. To be fair, I do think there will be (and already is) disruption in certain industries, and developers, users, governments, and society need to collectively develop solutions to mitigate negative impacts, but I don't think it's a zero-sum game. Just as there are benefits and limitations to the technology, there will be positive and negative impacts that result from it, and I'm thrilled to be part of the team at Anthropic, where our core mission is safe and socially beneficial AI.
