Barton partner Kenneth Rashbaum says lawyers should minimize analogies and jargon when drafting prompts for generative AI models—similar to how litigators should question live witnesses.
Albert Einstein is often credited with saying, “If you can’t explain it simply, you don’t understand it well enough.” The adage applies with particular force to generative artificial intelligence models.
If you don’t understand a topic sufficiently to phrase a simple question for it, how can you expect generative AI—which can be as literal as a 6-year-old—to answer your prompt effectively? The key is simplicity and a direct approach to language, which is how lawyers are trained to communicate.
Lawyers are taught early that the critical aspect of success in litigation is the question, not the answer. It’s not the dramatic summation or the eloquent oral argument but, instead, the well-phrased question (with an answer in mind) that steers the ship of the case.
Questions must be clear, direct, and crafted with a purpose. This gives rise to the advice often given to witnesses about to testify: “If you don’t understand the question the other lawyer asks, it’s not your fault. It means the question is bad and should be rephrased.”
The same concept applies to creating a prompt for a generative AI model. Generative AI has come a long way, but it can’t reliably intuit meaning from an obscure tangle of words. It can’t detect nuance and is often at sea if a question is posed without sufficient context—or with unnecessary words and jargon.
And like the lawyer whose question at deposition or trial isn’t initially understood by the witness, it may take a few attempts to craft a good AI inquiry.
But this is what lawyers, especially litigators, are trained to do. They can and should apply these well-honed skills to make their AI models more responsive, effective, and less prone to “hallucinations”—making up facts.
Generative AI is likelier to respond to a prompt with false information than to admit it doesn’t know the answer. And a poorly drafted prompt does more damage than merely failing to elicit an answer; it can steer the rest of the exchange in a deleterious direction.
Case in point: Wired reporter Brian Barrett noted that querying Google’s AI Overviews with gibberish or nonsensical phrases or questions can result in output from the model that “sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority.” In one of his experiments, Barrett queried the model with the silly phrase, “never throw a poodle at a pig,” and was misinformed that the phrase derives from the Bible.
It’s important, then, to understand how to create queries that reduce the chances of incorrect or misleading output. Lawyers know how to do that, but many who are unfamiliar with generative AI or its potential benefits are unaware they can leverage those skills.
Like poor questions posed to live witnesses, ineffective prompts for AI models fall into at least two categories: those the recipient fails to understand because the jargon is confusing or the nuance is lost, and those so elliptical that the recipient doesn’t know what to make of the inquiry.
One example: “Do you have the time?” The answer can be “yes” or the literal time of day.
Another example: “Weave together this year’s executive orders into a theme of how they will impact society.” Because these early generative AI models are so literal, the response may include information about fabrics or tapestries made on a loom. A clearer version of this question, and one that might elicit more useful information, could be, “What do the 2025 executive orders have in common?”
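For lawyers (or their technical staff) who reach these models through code rather than a chat window, the same comparison can be run side by side. The sketch below is illustrative only: it assumes the OpenAI Python SDK is installed, an API key is set in the environment, and the model name is a placeholder for whatever model is available.

```python
# Minimal sketch: send the vague prompt and the clarified prompt to the
# same model and compare the answers. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompts = {
    "vague": "Weave together this year's executive orders into a theme "
             "of how they will impact society.",
    "clear": "What do the 2025 executive orders have in common?",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any available model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Running both versions makes the point of the example concrete: the clearer question gives the model less room to wander into fabrics and looms.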
The art of drafting effective prompts should come naturally to lawyers who recall their basic communications training: be judicious with analogy, complexity, and jargon; use plain, short words; and supply the factual context of the request.
But they may still have to try a few different phrasings before the prompt takes a form that yields the answer they seek, much as with the art of questioning human witnesses.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Kenneth N. Rashbaum is a partner at Barton focusing on privacy, cybersecurity, and e-discovery.