FTI Consulting’s Sophie Ross says GCs report being unprepared to address their concerns about AI, but regulations like the EU AI Act can serve as roadmaps for safe, effective AI use.
Legal department leaders are increasingly exploring how generative artificial intelligence can increase efficiency and innovation. However, gaps between the practical, technical aspects of applying generative AI, the best practices for doing so, and abstract regulatory guidance make it difficult to balance rapid innovation with responsible risk mitigation.
As in previous cycles of technological disruption, this tension surrounding AI raises the question of whether regulation helps or hinders progress.
With any new technology, and AI is no exception, achieving regulatory compliance presents challenges.
The first is simply understanding the rules that apply across different jurisdictions and how they may overlap with other laws. For example, does a company’s compliance framework for a data privacy law in one state undermine its antitrust compliance framework or government reporting requirements?
Add AI to the equation and the lines are blurred even more, as AI implementation may introduce issues across any number of corporate activities, in addition to requirements set by AI-specific laws emerging throughout the US and the world.
Compliance Guardrails
Once organizations map this complex web of overlapping requirements, there is the task of implementing policies and procedures to serve as compliance guardrails. This can be challenging, particularly as regulations attempt to keep up with the rapid pace of technological change.
Some may see these challenges as barriers to innovation, but organizations can leverage compliance requirements as a sense check on potential risk—and a roadmap for responsible innovation.
For example, The General Counsel Report 2025 revealed that a majority of chief legal officers are cautious about their organizations’ use of generative AI, acknowledging the potential for pitfalls and the need for strong governance.
One general counsel interviewed in the study said, “New technology creates risk and then the regulatory response initially complicates that risk because the regulators are unfamiliar with the technological developments.”
In the study, respondents identified more than 15 unique areas of concern regarding the use of generative AI within their organizations. Security was at the top of the list, but GCs also named explainability, defensibility, potential for new types of litigation, creation of harmful content, regulatory issues, bias, ethics, data privacy, and more. Additionally, 85% said they were minimally or not at all prepared for the risks surrounding generative AI.
Innovation Guideposts
Legal leaders feeling unprepared to face this wide array of risks can consider the EU AI Act and other regulations as guideposts for innovating while avoiding undue risk. Initial, foundational guidance from the European Commission aims to address areas such as the definition of AI and prohibited AI practices.
The guidance complements the EU AI Act provisions prohibiting certain AI systems and mandating AI literacy requirements within organizations introducing new forms of AI. Additional generative AI, governance, and confidentiality provisions will apply starting in August 2025 alongside the penalty provisions of the act.
When organizations understand how regulators define problematic practices, how they expect companies to prevent them, and what constitutes harm in AI applications, they can prepare for the top-ranked concern areas such as bias and ethics. This allows organizations to advance technology with a set of checks and balances that reduce the potential for missteps.
Product-Level Regulation
The EU AI Act and other nascent AI laws also introduce product-level regulation for systems that are considered to pose unacceptable or high risk to individuals and society.
High-risk uses are covered by stringent requirements, such as activity logging to ensure results can be traced, robust risk assessment and mitigation processes, detailed documentation of all activities, a high level of cybersecurity and accuracy, and more. These are important foundational elements that help organizations incorporate best practices within their product innovation activities.
The European Commission has indicated most AI systems in use fall into limited, minimal, or no-risk categories. The latter two have no restrictions, while limited-risk systems must meet certain transparency requirements. These include disclosure obligations to ensure humans are informed when interacting with AI and to ensure clear labeling of AI-generated content.
Following these guidelines can support organizations’ concerns about transparency and explainability, which were among the top issues raised in The General Counsel Report.
The EU AI Act is closely aligned with the General Data Protection Regulation, providing direction for upholding data privacy standards in AI systems. While this may appear to complicate an organization’s data privacy program, it actually helps organizations understand how to prevent misuse of personal information or data privacy violations within their AI initiatives.
Generative AI has spurred more speculation and experimentation in the legal field than any other technology in decades, offering opportunities to create efficiencies and transform processes, but it must align with risk management. With a balanced approach, organizations can use compliance to enable innovation and facilitate proactive, sustainable responses to concerns that work in tandem with technology design and experimentation.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Sophie Ross, global CEO of FTI Technology, has more than 20 years of experience in global company management, strategic leadership, and operations for high growth organizations.