Mergers and acquisitions where the target's value derives from its software or intellectual property commonly have a separate diligence workstream focused on open-source compliance, security, or IP.
But given the legal and regulatory challenges that come with using generative AI, acquirers should consider a new diligence workstream focused on generative AI issues. And given the breadth of potential use cases, acquirers in all industries and sectors should review their approaches to diligence.
Use of generative AI tools has exploded because they can help companies improve existing products and services, speed the development of new ones, and drive other efficiencies. Alongside those benefits, however, implementing generative AI carries certain risks.
Confidential information or trade secrets used as inputs or prompts for generative AI tools may become public information, or otherwise lose their status as protected information or trade secrets.
Generative AI tools may train on third-party content, raising questions of fair use versus proprietary rights, and their output may include information, materials, source code, and other content owned by a third party.
Where developers use generative AI to write software, the output may include source code licensed under one or more open-source software licenses that impose obligations on users of that code, or may contain viruses, malware, malicious code, or security vulnerabilities.
In addition, some forms of IP protection, such as patents and copyrights, may be unavailable for machine-created output.
Because of these risks, acquirers should conduct due diligence on a potential target’s use of generative AI. Diligence should include a review of the target’s policies and practices for generative AI use.
Acquirers should also ask whether the target has a written policy governing its generative AI use, whether the policy addresses mitigation of the risks described above, how the target monitors and ensures compliance with the policy, and whether any gaps are potentially significant or material given the target's use of generative AI.
While policies can be formal or informal, the diligence exercise should focus on determining actual compliance and how diligently the target has worked, pre-acquisition, to mitigate risk. The acquirer also should obtain a comprehensive list of the generative AI tools the target uses.
As with open-source due diligence, a review of the list should assess:
- The risk to the target's proprietary information, whether because the target's inputs are shared with third parties or because prompts or output generated by the tool are used to fine-tune the model for others' benefit.
- Whether the terms provide any remedy if the output contains any harmful, illegal, or infringing content.
- Any party’s potential rights in the generated output.
- Whether the target's settings and configuration for each tool provide appropriate procedural or technical safeguards to protect its confidential and proprietary information.
- Whether contractors, consultants, and vendors are required to comply with the target’s generative AI policies and procedures.
If the target has a product or service that leverages AI algorithms or large language models, the acquirer should consider:
- The sources and types of data used.
- The target's ownership of, or rights to use, that data for testing and training AI applications.
- Whether the content used in training implicates the IP rights of third parties.
- Whether web scraping was used to aggregate data or content.
- The target's rights to use personal data to run, test, or train the AI algorithm.
Addressing potential bias includes asking whether the target has taken appropriate steps to mitigate the risk of claims by customers who use its AI solution and encounter bias in the AI technology.
For example, if the target makes available an AI solution used in making employment decisions, has it taken steps to mitigate or eliminate biased outputs? Does the target regularly audit the outputs of its solutions for potential bias?
Particularly in the employment context, AI solutions can be a source of litigation and liability if there is a risk of biased output, a risk acquirers can assess with focused and thorough diligence.
Acquirers should ask how the target validates its AI algorithm and whether recommendations from the AI algorithms are reviewed by a human prior to their execution. For example, does a human review AI recommendations that affect individuals’ health, medical, financial, or employment outcomes?
Generative AI tools and the means available to mitigate their potential risks continue to evolve at a breathtaking pace. All users of such technology (and those that may acquire them) should continuously monitor the ever-changing legal and technical generative AI landscape so they understand the risks involved and the best ways to avoid them.
Just as AI is driving change in business operations and product development, it’s also an impetus for evolving standard approaches to due diligence. Acquirers failing to adopt changes in their approaches may find themselves assuming more risk than expected in an acquisition or being unable to maximize the deal value.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Mark Lehberg is a partner at DLA Piper, focused on business and legal counseling in connection with complex business and technology transactions.
Christopher Stevenson is of counsel at DLA Piper, with a focus on e-commerce, information technology and intellectual property protection, licensing, and commercialization.
Kamla Topsey is of counsel with DLA Piper, focused on technology-based transactions, licensing, and acquisition of intellectual property assets.
DLA Piper’s Victoria Lee and Gina Durham contributed to this article.