- FTI Technology’s Tom Barce discusses “interactions” governance
- New discovery rules will be needed for AI-generated data
It’s happening. Leading organizations are forming committees, budgeting resources, performing research, conducting pilots, and beginning early implementations for generative artificial intelligence.
Many legal teams are under pressure from leadership to be among the early adopters and to capture value from these advancements. As new applications enter the market, there’s an expectation that teams will quickly onboard tools and, in turn, realize measurable gains.
But there are also concerns about the risks, and legal teams are torn between pursuing innovation and maintaining a strong risk management posture.
Generative AI’s entrance into enterprise environments has created a new dimension of company information and potential liability that many organizations aren’t quite sure how to handle. Information governance controls are now required for an uncharted category of records: “interactions,” the logs of prompts used to query AI tools. New discovery rules and processes must be established for data categories that haven’t previously been discoverable, including interactions and company documents created entirely by a machine.
Legal teams must now address governance and compliance, as well as e-discovery readiness, when implementing generative AI. They also must proactively map out policies for how employees may interact with these tools and where the underlying interactions are stored, so that those interactions can be properly retained, monitored for compliance purposes, defensibly disposed of, and preserved as needed for future legal discovery requirements.
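To make “interactions” concrete as a record type, here is a minimal sketch, in Python, of what a retained interaction and its governance metadata might look like. The field names and retention logic are illustrative assumptions, not any particular vendor’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class AIInteraction:
    """One retained generative AI 'interaction': a prompt, its response,
    and the governance metadata needed for retention, compliance
    monitoring, defensible disposal, and legal preservation.
    All field names are hypothetical, for illustration only."""
    user_id: str
    tool_name: str
    prompt: str
    response: str
    interaction_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retention_class: str = "business-record"  # drives the disposal schedule
    legal_hold: bool = False                  # blocks disposal when preserved
    reviewed_for_compliance: bool = False     # set by a monitoring workflow

def eligible_for_disposal(record: AIInteraction, retention_days: int,
                          today: datetime) -> bool:
    """A record may be defensibly disposed of only if its retention
    period has lapsed and it is not subject to a legal hold.
    `today` should be a timezone-aware datetime."""
    age_days = (today - record.created_at).days
    return age_days > retention_days and not record.legal_hold
```

The `legal_hold` flag is the key design choice in this sketch: every disposal decision checks it before a record is destroyed, which is what makes disposal defensible when litigation later arises.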
While organizations work toward generative AI use cases, additional risk-oriented controls need to be considered in parallel:
- What happens to generative AI interactions within the organization?
- Where are interactions stored?
- Are they being retained or disposed of?
- Are they being monitored?
Organizations should monitor and manage the use of AI tools in the same way that email, chat, and other established communications channels require oversight to catch misuse or violations of regulatory obligations. Legal teams can:
- Conduct information security and third-party risk management audits to confirm the extent to which AI tools comply with the company policies and regulatory requirements that govern other technologies, systems, and providers.
- Establish contractual controls, such as indemnity clauses or stipulations that no client, confidential, or sensitive data may be used in large language model training, to provide general protections while teams sort through the technology’s unknowns.
- Implement abuse monitoring capabilities that notify compliance teams if employees are making suspicious or inappropriate queries of an AI tool, while balancing retention, protection, and access for the data being monitored, especially proprietary or personal information (a minimal sketch of such a check appears after this list).
- Review access controls for company records. Many off-the-shelf AI tools provide users with widespread and unfettered access to query company documents and information. Every application in the system must be checked for access controls to be sure that queries of the AI system won’t result in users viewing information they shouldn’t be able to access.
- Establish labeling policies for documents and information categories that must be treated with varying levels of confidentiality or protection.
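As a rough illustration of the abuse-monitoring control above, the sketch below screens prompts against flagged patterns and alerts compliance. Real programs would rely on vendor tooling and far more sophisticated detection than keyword matching; the patterns, function names, and alerting hook here are assumptions for illustration only.

```python
import re

# Hypothetical patterns a compliance team might flag; a production
# program would use vendor detection, not simple keyword matching.
FLAGGED_PATTERNS = [
    re.compile(r"\b(?:ssn|social security number)\b", re.IGNORECASE),
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bclient list\b", re.IGNORECASE),
]

def notify_compliance(user_id: str, prompt: str, hits: list[str]) -> None:
    # Placeholder alerting hook: a real workflow would route the alert
    # to a case management queue and apply the retention and access
    # protections discussed above to the monitored data itself.
    print(f"ALERT: user {user_id} triggered {hits}")

def screen_prompt(user_id: str, prompt: str) -> list[str]:
    """Return the patterns a prompt trips, notifying compliance if any.
    This illustrates the control, not a production detection method."""
    hits = [p.pattern for p in FLAGGED_PATTERNS if p.search(prompt)]
    if hits:
        notify_compliance(user_id, prompt, hits)
    return hits
```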
Employee interactions with, and output from, generative AI tools are creating a data set that could come into scope in e-discovery. Generative AI tools store artifacts that may carry e-discovery implications whenever data related to or produced by the tools intersects with a dispute or investigation.
When prompts and interactions are stored, they become potential company documents, communications, or records that could be offered as evidence in litigation or a regulatory investigation. The number of tools creating these new or unknown data artifacts, with limited visibility or accessibility, is a potential Pandora’s box.
As with other forms of emerging data, it will be challenging to preserve these artifacts, defensibly collect them, process them into an e-discovery tool, and render them useful for analysis and review, and doing so will require technical expertise. Legal teams may also need to irrefutably distinguish between human-generated and AI-generated content when entering these items into evidence in a legal matter.
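One way to support that distinction is to stamp provenance metadata on artifacts at collection time. The sketch below is a hypothetical example under that assumption; the field names are illustrative and not any e-discovery standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def collection_record(artifact_bytes: bytes, source_system: str,
                      generated_by: str) -> dict:
    """Sketch of provenance metadata stamped at collection time so that
    AI-generated content can later be distinguished from human-authored
    content. Field names are illustrative assumptions."""
    return {
        # Content hash supports integrity and defensible collection.
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source_system": source_system,   # e.g., the AI tool's log store
        "generated_by": generated_by,     # "human", "ai", or "mixed"
    }

# Example: stamping an exported AI response before loading it into review.
record = collection_record(b"draft contract clause ...", "genai-audit-log", "ai")
print(json.dumps(record, indent=2))
```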
Legal arguments about interactions with generative AI, whether they qualify as communications, and whether they may be subject to discovery will be complicated, if not contentious. Regulators and litigators quickly identified modern forms of communication, including Slack, Zoom, Teams, mobile messaging apps, and ephemeral messaging platforms, as relevant sources of evidence. As generative AI becomes more mainstream, it is plausible that data from these systems will likewise be sought to support fact finding in discovery.
Organizations are at varying phases of generative AI implementation, with most at either the proof-of-concept or pilot stage. Functionality and controls change constantly, so AI governance and e-discovery readiness programs need to be built for adaptability and continual testing.
Until this technology matures, generative AI tools likely are creating, or will create, data that isn’t being properly controlled. Organizations must run the appropriate analyses and put the right controls in place. Meanwhile, it’s just as probable that some of those controls have yet to be conceived.
Considering the typical 12- to 24-month lag time between the adoption of emerging technology and its appearance in relevant litigation or investigations, the clock is ticking. What seems brand new today may be the linchpin in tomorrow’s court case.
Whether or not an organization currently sanctions the use of generative AI for business, potentially relevant evidence from generative AI has probably already been created.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Thomas Barce is managing director at FTI Technology, with more than 25 years of advisory experience directing and managing information governance, electronic discovery, and litigation support initiatives.