“That is not a real person,” said Jerome Dewald, in the understatement of the season, about the artificially generated attorney he presented in court as his counsel. If you thought representing yourself in court was foolish, wait until you try having ChatGPT represent you.
Desperate plaintiffs aren’t the only ones using AI in legal proceedings. There have already been plenty of publicized instances of attorneys discovering that using AI in their work might not be worth the time it seemingly saves. Recently, an Indiana attorney used generative AI, ChatGPT specifically, to draft briefs. The problem is that those briefs contained fictitious cases, or “hallucinations,” as they’re known, and he was fined $15,000 for his actions. In another high-profile example, attorneys for Trump ally Mike Lindell were caught using AI in their filings. These and many other incidents may reek of incompetence (and/or outright laziness), but are they illegal?
The short answer is that it is not technically illegal, but that doesn’t give attorneys a green light to use AI for every type of work product. Federal Rule of Civil Procedure 11 states that when an attorney submits a document to the court, they are certifying that the legal claims are valid or based on a good-faith argument for changing the law, and that the factual assertions have evidentiary support. A violation of Rule 11 is not a criminal offense, but it can result in civil sanctions and jeopardize the client’s case.
The situation is nuanced and evolving (for the worse). Let’s get into it.
Before we do, it is worth mentioning that while there are certainly ethical considerations regarding using AI for legal work product, there are plenty of business development and marketing initiatives where attorneys can use AI to gain an edge. Get in touch for a consultation regarding law firm SEO services, intake system design, Google Ads campaigns, or website development. We are an industry leader in the tech-enabled legal marketing space, and we would be happy to speak with you.
Knowing Your Product
Over the past five years or so, ‘AI’ has become something of a catch-all term for any technology that can ‘reason’ within a language or research model and use that reasoning to generate templates, code, and documents based on the input you feed it. Different kinds of AI are currently being used for specialized purposes. Some are completely benign – think of the irritating paper clip helper in the early days of Microsoft Word. Others are impressive in their ability to save time and automate tasks – for example, there is AI that can automate your tickler system and manage your client-communication schedules.
There is also real money being pumped into AI-assisted SaaS systems that attorneys can use in their everyday workflow. Scan Signal.nfx’s list of the most active investment sectors and you’ll find AI, Generative AI, and LegalTech among the sectors attracting the most angel, seed, and Series A/B investment.
According to Crunchbase, since 2024 approximately 79% of all funding for legal-focused technology companies (nearly $2.2 billion) has gone to companies working in or building AI-related categories. That includes major recipients like Clio and Harvey, which have led the way in publicly reported fundraising. If you use Clio, you have probably either received emails about this or are already using one of the AI-enabled add-ons to their software.
For workflows related to communication, marketing, or internal case management, AI-assisted tech can be a game changer. When it comes to actual work product or research that is filed with the court, it’s a completely different story – and some firms are becoming too comfortable crossing that line into territory where outcomes for their clients are affected. Nor is it just small, overworked firms getting caught – Am Law 100 firms have been caught using AI in court filings as well.
When It’s Not Okay to Use AI
While no law has been passed outright banning the use of the controversial tech in legal work, the American Bar Association has issued a formal opinion explaining that great care must be taken to ensure ethics rules are being followed, due diligence is being done, and clients are being billed fairly and appropriately.
“Lawyers using GAI [generative artificial intelligence] tools have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature of GAI.
“In using GAI tools, lawyers also have other relevant ethical duties, such as those relating to confidentiality, communication with a client, meritorious claims and contentions, candor toward the tribunal, supervisory responsibilities regarding others in the law office using the technology and those outside the law office providing GAI services, and charging reasonable fees.
“With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected.”
This language seems to treat AI much the way an attorney might treat a standard Google search – an approach that does not hold up when it comes to case law research.
Reputational Impact
An attorney’s first time being caught using AI in a court-filed brief is bad enough. The most egregious example from a single firm (the self-proclaimed ‘largest Plaintiff firm in the country,’ at that) currently stands at three instances of AI-hallucinated case law – all in a single case. The fines have so far been small, but it’s probably safe to say that it’s a bad idea to use generative AI in briefs that are filed with a court.
Ethical Considerations Surrounding Billing and Transparency
The ethical considerations around using AI for work product mirror the standard ethical duties of accurate billing and transparency. Does the client know that you are using AI in their case? Is the client being billed appropriately for the time the attorney saves by using AI? Both of these questions can be handled appropriately, but it seems that some firms are not handling them properly.
States are quickly adding their own rules when it comes to the use of AI in work product and/or filings. In California, for instance, “Lawyers can use AI technology and charge for time spent creating documentation and reviewing outputs.” While in New Jersey, “Lawyers must uphold diligence, honesty, client advocacy, and confidentiality when using AI.” New Jersey lawyers are also not required to disclose whether or not they used AI. That lack of requirement, however, does not give attorneys permission to use fake or misleading content.
Judges Are Looking to Make Examples
A $15,000 fine for using ChatGPT is not notable for the dollar amount involved. A judge who hands down that type of fine is likely a judge who wants to make it clear that they are not happy. Judges have also made their thoughts on the matter public – for example, the judge in Central Operating Engineers Health and Welfare Fund v. Hoosiervac LLC issued this statement upon finding that the plaintiff’s attorney, Mr. Ramirez, had used AI to cite made-up cases:
“Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error. Citing a case that simply does not exist is something else altogether.
“Mr. Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr. Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.”
The judge went on to say that “failure to comply with that most basic of requirements” made Ramirez’s actions “particularly sanctionable.” Yikes.
Other judges in previous ChatGPT incidents with attorneys haven’t exactly been lenient either. Just two years ago, a New York court fined two lawyers and their firm $5,000 for submitting fake cases hallucinated by ChatGPT. In a personal injury case against Colombian airline Avianca, the lead attorney admitted to using ChatGPT to do legal research and claimed it was his first time using the popular platform and that he was unaware it could completely fabricate court cases.
ChatGPT isn’t the only AI that has been implicated. Google’s Gemini has also ensnared some less-than-diligent attorneys. Gemini, like ChatGPT, is capable of hallucinating court cases, as seen in a recent filing from the court battle between Mike Missanelli and JAKIB Media’s Joe Krause. In that case, Krause used Gemini to do his research and took its summaries at face value. That lack of due diligence earned Krause an admonishment from the court, and he narrowly avoided sanctions.
Takeaways and the Evolution of AI in Legal Research
All that to say, attorneys should keep a close eye on how the rules develop in their own state. In the meantime, be very careful. Regardless of state rules specific to AI, attorneys generally owe duties of candor to the court and of competence to their clients. Checking AI-assisted research for accuracy seems like a common-sense precaution, and yet we’ve now seen dozens of notable examples of attorneys not taking that second step to exercise due diligence.
The bottom line is that ignorance doesn’t get anyone very far in court. It doesn’t work for a defendant and it won’t work for counsel.
When it comes to technology that is in its “wild west” era, it’s probably best to steer clear.