In meetings nationwide, companies are using artificial intelligence notetaking—technology that records, transcribes, and organizes business meeting conversations. AI notetaking can add efficiency, sharpen participant focus, and improve accessibility for attendees with different abilities or who speak different languages.
But the law governing these “notes” is lagging. Companies face a patchwork of state and federal laws, evolving court decisions, and real uncertainty. This backdrop raises questions about how a beneficial technology could be viewed as wiretapping. Uncertainty about how AI notetaking will influence discovery also adds a layer of risk.
Notetaking as Wiretapping?
To see this uncertain legal landscape in action, consider In re Otter.ai Privacy Litigation, a case pending in the US District Court for the Northern District of California.
Class action plaintiffs say that Otter.ai, an AI-powered transcription services provider, violated California’s wiretapping statute. The plaintiffs, who attended meetings but aren’t Otter.ai accountholders, allege that the company’s software slips “surreptitiously” into and records conversations without the participants’ knowledge or consent.
Otter.ai’s answer is expected to hit the docket early this year, but the wiretapping question could take years to resolve if the district court’s ruling is appealed. Until then, the safest course for continued notetaker use is to disclose its deployment and secure affirmative consent from all participants.
In re Otter.ai fits within the broader framework for understanding how AI notetaking might be wiretapping.
Consent rules shape the wiretapping landscape. Wiretapping’s legality hinges on a jurisdiction’s consent regime. All-party jurisdictions require every participant’s consent before recording, while one-party states allow recording with consent from just one participant.
The all-party camp includes California, Florida, Illinois, Maryland, Massachusetts, Montana, New Hampshire, Pennsylvania, and Washington. Most other jurisdictions, including Texas, New York, and Washington, DC, accept one participant’s consent. Federal law does the same, but it won’t displace stricter state standards.
Vendor classification is complex. California’s Invasion of Privacy Act is among the country’s strictest wiretapping statutes. Within that scheme, vendor classification is a key unsettled issue.
Some courts treat software-as-a-service or AI vendors as tools used by parties to conversations, like tape recorders. Other courts equate vendors that retain rights or abilities to access and use meeting data (even if not exercised) with third‑party eavesdroppers. The Otter.ai plaintiffs take the eavesdropper view, alleging that Otter.ai not only spies but also benefits from eavesdropping, unbeknownst to non-accountholders.
The complexity doesn’t stop there. A vendor with data-mining rights can expose a customer company to vicarious liability for the vendor’s CIPA violations.
Which law governs interstate questions is unclear. Whether for cross-office or remote work, many employees join online meetings from across the nation. This reality makes the unsettled and inconsistent directives on which jurisdiction’s law applies more concerning.
Outcomes vary because states use different analytical methods to address the question. Some courts apply traditional choice-of-law analyses, while others sidestep conflicts altogether by treating the issue as evidentiary. Factual variations add more uncertainty.
For many states, though, the place of interception controls. But states such as California have applied all-party consent requirements to out-of-state recordings when those recordings disadvantage Californians. The resulting hodgepodge makes all-party consent attractive.
Notetaking and Discovery
AI notetaking has created an entirely new category of documents in the discovery process, and their existence raises discovery and evidentiary questions.
From a discovery perspective, courts will need to determine who has possession, custody, or control of the notes to decide what must be produced (particularly when vendors have access to or retain data).
Work‑product protection may apply to meeting recordings or summaries prepared in anticipation of litigation, but that protection is limited and fact‑specific.
Given the prevalence of AI notetaking tools, it’s only a matter of time before evidentiary issues like authenticity and accuracy come before the courts. In 2024’s Dixon v. Royal Live Oaks Academy of the Arts, the US District Court for the District of South Carolina declined to consider auto-generated transcripts at summary judgment because the transcripts contained “errors.”
Attorney ethics are unclear. Undisclosed recordings aren’t unethical per se if they’re legal and not deceitful. But because lawyers can’t ethically do what’s illegal, recording without every participant’s consent in an all-party state is both unlawful and unethical.
Yet even when allowed, using AI notetaking may still draw court ire and skepticism. Some courts, emphasizing candor, have compelled disclosure or imposed sanctions, even when ethics opinions didn’t bar recording.
Practical Safeguards
Until courts or legislatures answer key questions, two stopgaps stand out.
The first is requiring all‑party consent and providing explicit disclosures whenever AI takes notes. That could include verbal scripts, pre‑meeting calendar language, on‑screen banners, and roll‑call consent, with procedures for late joiners, objections, and revocations. The Otter.ai plaintiffs applaud similar steps from other AI notetakers.
The second is planning for use and access rights in vendor contracts. Leaving those rights with an AI vendor increases the risk of liability for the vendor’s conduct and expands future discovery obligations.
Those steps help capture the benefits of AI notetakers while avoiding the sharp edges of wiretap and ethics risks.
As AI innovation advances, so does the possibility that even the most helpful technology and the law may collide. While lawyers, judges, and legislators do the hard work of reconciling these issues, disclosure and caution are the best interim fixes.
The case is In re Otter.AI Privacy Litigation, N.D. Cal., No. 25-cv-06911, stipulation granted 12/22/25.
An immaterial amount of this content was drafted by generative artificial intelligence.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Shayna Goldblatt Proler is counsel with Haynes & Boone.
Tyler P. Young is a partner with Yetter Coleman in Houston focusing on commercial litigation and antitrust.