Write on. The KA blog. 

Bookmark us and please pay us a visit now and again to keep up on all the goings-on at Kerwin Associates.
Sarah Tierney Niyogi, former General Counsel at Scale AI

June 20, 2023

Please share your background and experience in practicing law with a focus on AI. How has your career evolved as this technology has advanced?

I'm a legal leader and General Counsel, having worked at Scale AI, Plaid, and Dropbox. I graduated with a BS in Computer Science from the University of Texas at Austin and a JD from Stanford Law School. Recently, I've switched to the investing side as a Partner at Recall Capital, focused on pre-seed investments in AI applications, enterprise SaaS, and fintech.

I've been interested in AI since I was an undergraduate computer science major working on AI and natural language processing research projects in a professor's AI lab. I kept an eye on the technology over the years and jumped at the opportunity to join Scale AI as outside General Counsel in 2018, and then full-time as General Counsel in 2019. From a legal perspective, I think counseling data science teams over the years has been a great preview of some of the key legal issues in AI, such as the provenance, acquisition, ownership, and licensing of data; intellectual property issues; privacy regulation and limitations on data use; and confidentiality and security issues. Now, we all have a front-row seat to the regulatory debates that will further shape the constraints on AI development. Attorneys are becoming more strategic than ever, working as critical thought partners throughout the AI development life cycle as the legal landscape evolves and becomes more complex. Attorneys are uniquely positioned to help their companies and clients get involved in regulatory discussions and shape the regulatory landscape. I don't think there has been a more exciting time to practice law in the technology space.

What are some potential benefits and challenges that lawyers and law firms should be aware of when using these tools?

We are just at the cusp of seeing AI technologies built into some of our legal workflow tools, whether CLM systems, research systems, or productivity tools. Stand-alone tools like ChatGPT are great for generating outlines for memos and presentations, drafting first iterations of emails and memos, brainstorming for research, and more. Of course, it's important to think through what information you provide to ChatGPT or other AI tools based on your agreement with the provider, and whether you have the right confidentiality, privacy, and security protections in place with that provider. In addition, generative AI and LLM-based tools present other challenges, such as hallucination, lack of attribution, and other reliability issues. When it comes to legal advice, it's important to do research with trusted sources and verify a model's output before including it in deliverables.

How can AI tools be used by in-house legal teams ethically and effectively? What are some ways that AI has changed your legal practice?

As in-house legal teams, it's important for us to lead by example in vetting our AI providers to ensure confidentiality, privacy, and security for corporate data. We also have an ethical duty to ensure the accuracy and reliability of the legal advice we provide to clients, whether or not we've used AI to accelerate our work. For me, one of the most useful aspects of ChatGPT is the assistance in brainstorming. If I want to improve my company legal training, I can ask ChatGPT what should be included, and then iterate through prompts to get more ideas and generate a first draft of slide content. I can even ask ChatGPT for a privacy or patent law related joke!

What excites you the most about the future of AI? Any general thoughts you would like to share?

It feels like in-house legal teams have been underserved by software for a long time. I'm excited that, given how readily LLMs apply to document use cases, we'll see real advances in AI contract drafting, AI-enabled legal research, and more. As a base case, I expect we'll be able to drive more efficiency in our legal departments and automate more of the repetitive work. But I'd also love to see research tools and co-pilots that help in-house lawyers do more of the specialized counseling and transactional work that we might have previously outsourced. You can imagine more specialized tools that could help transactional teams in-source M&A or real estate work. In the long run, I think we'll see a continuation of the trend of bringing more legal work in-house and reducing spend with law firms (particularly for counseling and transactional work). AI should enable us to work more strategically, more efficiently, and more expansively.
