How In-House Legal Departments Should Engage With Generative AI

Sept. 6, 2024, 8:30 AM UTC

The promise of generative AI has brought a mix of anticipation and uncertainty to legal departments. And for good reason—it has the potential to fully transform legal practice.

This technology holds promise for greater efficiency and cost savings, the ability to bring more legal work in-house, and reduced time spent on lower-value tasks. Yet the familiar litany of risks associated with its use includes inaccuracies and hallucinations, data privacy and security, confidentiality, and IP concerns.

Still, converts appear to be growing rapidly. Pressure to adopt generative AI as a capability will no doubt continue to grow as the technology improves, as it gains greater acceptance in the marketplace, and as other departments (read: finance) are drawn by the allure of cost savings.

A 2023 Accenture study reported that nearly 100% of the work done by attorneys is exposed to transformation by generative AI, the only job category in the study to confront such significant change.

So, how is the general counsel’s office to respond? Deliberately. The very same care that distinguishes the legal department’s success in so many areas should also distinguish the department’s own engagement with generative AI.

However, moving deliberately doesn’t have to mean moving glacially. After all, AI may not replace lawyers, but lawyers who use AI may replace those who don’t.

We offer three factors legal departments ought to consider in adopting generative AI for their own practice.

What are you trying to accomplish by using generative AI?

What pain points are you trying to solve, and what opportunities do you hope to capture? Answering these questions should be a team sport. Engage your group to identify the most meaningful use cases. As the tech world says, love the problem, not the solution. In doing so, you are likely also to surface your team’s concerns and uncertainty over using the technology.

Gathering input will not only lead to better thinking, including on how best to guard against risks, but also to greater buy-in for whatever use cases are pursued. In fact, some companies are going even further and surveying their outside counsel to leverage their experience with use cases as well.

In identifying use cases, it is important to balance the gains to be achieved against the effort to onboard, integrate, and validate the technology for your purposes.

What is the most appropriate technology to meet your needs?

Begin with an understanding of the technologies available. Determine what others are using within your industry, your outside law firms, and through consultancies.

Next, consider whether you want to work with generic models designed for all users or those intended for legal use cases. The latter includes existing legal tech providers incorporating generative AI into their platforms, legal information providers developing generative AI capabilities on top of their knowledge bases, and large law firms creating their own capabilities.

While the technology evolves, it may be worth engaging in a mix of approaches to hedge your risk. Ultimately, however, and as consulting firm Deloitte emphasized in its guide for engaging with generative AI in corporate legal departments, “the real potential lies in creating legal specific capabilities, which could be achieved by enhancing the data input into generic models, or by creating dedicated legal models.” In our experience, we have seen this potential best realized by focusing multi-purpose models on highly targeted uses to maximize quality, consistency, and control.

And don’t be afraid to experiment with whatever models you pursue. Engage in proofs of concept with different providers. In so doing, don’t just focus on the accuracy and reliability of the tool tested, but understand the tool’s data privacy and security implications.

Also consider how the technology treats the inputs you provide to it: be sure to understand whether it is safe to input your IP into the tool or whether you may unintentionally expose that IP for someone else’s use.

Finally, assess whether the technology will be able to transition seamlessly into the workflows of your team. Innovative technology that doesn’t integrate with the way your lawyers work is a cool demo that just never gets applied.

How should you onboard the technology you obtain?

Just as you may be doing for other departments in the company that use AI technology, you will want to set up a policy for who can use the tools as well as operating rules with any limitations on data usage. The operating rules should reflect considerations of the data privacy, security, and IP issues raised above, and the purposes of the tools. Ensure the policy is pragmatic so that it allows for innovation with appropriate guardrails.

You also will want to test the tool to validate its continuing accuracy, check for any biases that surface, and monitor for any other concerns that might arise.

Training and educating your colleagues about the technology and the policy is important so they understand the best way to implement the tool and any limitations. Keep in mind the American Bar Association’s recent pronouncement that lawyers should maintain “relevant technological competence, which requires an understanding of the evolving nature” of generative AI.

Finally, no one is better at anticipating that things will go sideways than the legal department. There may be cost overruns, delays in onboarding, snafus with the technology. The key to resilience is transparency with your legal department colleagues, with procurement, with finance, and with any other stakeholder in your process.

Keep them apprised of the rationale for your use cases (factor 1, above), the technology you are seeking (factor 2), and your intended onboarding process (factor 3). In this way, you will have less explaining to do when necessary and more investment in your intended outcomes.

The journey ahead is an exciting one. With each deliberative step, you have the potential to fundamentally change how you will support the company’s mission while safeguarding its value.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Allen Waxman, of counsel at DLA Piper, has served as a trial lawyer, national counsel in mass tort cases, general counsel and head of litigation at Pfizer, and CEO of a dispute resolution organization.

Barclay T. Blair is senior managing director of DLA Piper’s AI innovation team, advising the firm and its clients on AI applications while driving innovative use of emerging technologies.

Danny Tobey chairs DLA Piper’s AI and data analytics practice, where he helps companies adopt AI in a safe and compliant manner, and works with prominent global companies on AI governance.


To contact the editors responsible for this story: Jessie Kokrda Kamens at jkamens@bloomberglaw.com; Alison Lake at alake@bloombergindustry.com
