Since Mata v. Avianca in 2023, over 1,000 sanctions and disciplinary cases have been logged involving attorneys who submitted AI-generated filings with fake citations. The question in 2026 is no longer "should I use AI for legal writing?" — it's "how do I use it without losing my license?" This guide walks through the responsible workflow, the ABA Formal Opinion 512 requirements, and the single architectural principle that makes AI legal writing safe: every citation must verify against a real database before it reaches a judge.
Paste your document — Ark verifies every case citation against 7.4 million court opinions in seconds. Find hallucinated cites before a judge does.
Run a Free Citation Check

In June 2023, a federal judge in the Southern District of New York sanctioned two attorneys and their law firm for submitting a brief in Mata v. Avianca that cited six judicial opinions that did not exist. The attorneys had used ChatGPT to research the brief. The cases were hallucinated. When opposing counsel couldn't find them, ChatGPT confidently provided more fake quotations from the fake cases. The attorneys were fined $5,000, required to notify the judges they had falsely cited, and became the first case study in every legal ethics CLE for the next three years.
Since then, Damien Charlotin's public AI hallucination database has catalogued over 1,000 additional cases. Not all resulted in sanctions, but many did. The pattern is consistent: an attorney uses a general-purpose language model (ChatGPT, Claude, Gemini, Copilot), the model invents plausible-sounding citations, and the attorney — trusting the tool or too rushed to check — files the document.
The frustrating part is that the attorneys involved are often competent lawyers. They understand precedent. They wrote good briefs for twenty years before AI existed. What changed is that they started using a tool that is designed to produce plausible-sounding text, not true text. Large language models predict tokens. They do not retrieve facts. When they return a citation, there is no guarantee the citation exists — the tool simply generated text that looks like a citation.
Every AI legal-writing sanction case falls into one of two categories:
The AI invents a case, statute, or regulation that does not exist. Varghese v. China Southern Airlines, Shaboon v. Egyptair, Martinez v. Delta Airlines — all fabricated in the Avianca filings. The AI generated plausible names, plausible citations, plausible pin cites, even plausible quotations. None of them existed.
The AI cites a real case — but the case does not stand for the proposition the AI claims. This is harder to catch because the case is real, the citation is formatted correctly, and a cursory Shepard's check will show the case is still good law. The failure is substantive: the cited case says something different from what the AI says it says, or the AI attributes language to the case that doesn't appear in the opinion. Misgrounded authority is becoming the more common failure mode as tools get better at producing real citations.
The Stanford Institute for Human-Centered AI benchmarked the leading legal AI tools on hallucination rates. General-purpose tools (ChatGPT and its peers) hallucinated on 17 to 33 percent of legal queries. Even specialized legal AI tools hallucinated 10 to 17 percent of the time. Any AI-assisted brief that leaves your office unchecked is a sanctions risk.
In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 — the first comprehensive guidance on generative AI in legal practice. State bars quickly followed with their own opinions (Florida, California, New York, North Carolina, and others). The substantive requirements cluster around four Model Rules:
Competence (Model Rule 1.1). You must understand the benefits and risks of the technology you use. "I didn't know it could make up cases" is not a defense. Competence now includes knowing that language models hallucinate, understanding which tools verify against real databases, and maintaining proficiency as tools change.
Confidentiality (Model Rule 1.6). Client information fed into an AI tool remains protected. Most free AI tools (ChatGPT free tier, Gemini, Copilot) train on user input by default. Feeding a client's confidential facts into a tool that trains on input can violate the duty of confidentiality. Use tools with explicit contractual commitments that prohibit training on user data.
Candor toward the tribunal (Model Rule 3.3). You are responsible for every assertion in a filing, including citations. "The tool did it" is not a defense. If a citation is false, you filed a false statement to the court, regardless of whether you or the AI generated the text. The duty to verify rests on the signing attorney.
Supervision (Model Rules 5.1 and 5.3). AI is treated as a nonlawyer assistant. You have a duty to supervise its output the same way you would supervise a paralegal's draft — which means review, verification, and approval before submission.
After dozens of ethics opinions, sanctions cases, and firm-level policies, a standard workflow has emerged. The lawyers who use AI heavily and avoid sanctions follow some version of this six-step pattern:
AI is best at discrete, bounded writing tasks: first drafts, summaries, reorganization, rephrasing, outline generation. It is worst at open-ended research where it has to both find authority and synthesize it. Use AI for the first; do not trust it with the second.
Rather than asking the AI to "find cases about X," feed it cases you already know are good authority. Retrieved-first, drafted-second. This is the single biggest reduction in hallucination risk. If the AI is rewriting or summarizing text you provided, it cannot invent a case.
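A retrieved-first prompt can be as simple as a template that hands the model only the authorities you supply and forbids anything else. A minimal sketch in Python — the function name and prompt wording are illustrative, not any particular tool's API:

```python
def build_drafting_prompt(issue: str, authorities: dict[str, str]) -> str:
    """Retrieved-first prompting: the model may only draw on the
    authorities the attorney supplies, never find its own."""
    # One block per authority: the citation in brackets, then the excerpt.
    sources = "\n\n".join(
        f"[{cite}]\n{excerpt}" for cite, excerpt in authorities.items()
    )
    return (
        "Draft an argument section on the following issue, citing ONLY the "
        "authorities provided below. If the provided authorities do not "
        "support a point, say so rather than citing anything else.\n\n"
        f"Issue: {issue}\n\nAuthorities:\n{sources}"
    )

# Hypothetical case name and citation, for illustration only.
prompt = build_drafting_prompt(
    "tolling of the statute of limitations",
    {"Example v. Sample, 123 F.3d 456 (9th Cir. 1997)": "Held that ..."},
)
```

Because the model is rewriting and synthesizing text you handed it, its output can be wrong in emphasis but cannot smuggle in an authority you never saw.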
Let the AI produce a draft. Then, before you do anything else, run every citation through a verification tool. Ark's free citation checker validates against 7.4 million opinions. Dedicated tools like BriefCatch, Clearbrief, CiteGuard, and CiteCheck AI perform similar functions. No exceptions, no shortcuts. Citations verify or they come out.
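The existence check itself is mechanical: pull everything that looks like a reporter citation out of the draft and test each one against a database of real opinions. A toy sketch, with a deliberately simplified regex and an in-memory set standing in for a real citation database:

```python
import re

# Toy pattern for a few common federal reporters (illustrative, not exhaustive).
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\. ?Supp\. ?(?:2d|3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

def check_citations(draft: str, verified_db: set[str]) -> dict[str, bool]:
    """Map every reporter-style citation in the draft to whether it
    appears in the verified database. False means: prove it or pull it."""
    return {cite: cite in verified_db for cite in CITATION_RE.findall(draft)}

# Hypothetical case names; the set stands in for a real opinions database.
db = {"925 F.3d 1291"}
draft = "See Smith v. Jones, 925 F.3d 1291; but see Doe v. Roe, 111 F.4th 222."
result = check_citations(draft, db)
```

A real checker also has to resolve party names, parallel citations, and pin cites, which is why dedicated tools backed by full opinion databases exist — but the pass/fail logic is exactly this.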
For every sentence that states a rule of law, read the cited authority to confirm it actually stands for that proposition. Misgrounded authority is not caught by existence checks. Only substantive reading catches it. This step is tedious, but even counting the verification overhead, AI-assisted drafting still comes out well ahead on time.
AI drafts are competent but generic. The distinguishing facts of your case, the specific judge's preferences, local rules, the opposing counsel's likely reply — these come from you. The AI can give you 70 percent; the remaining 30 percent is where you earn your fee.
Keep a record of what tool you used, what prompts you ran, and what verification steps you performed. If opposing counsel or the court later questions a citation, your verification log is your defense.
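A verification log does not need to be elaborate; one structured record per filing is enough. A sketch using only the standard library (field names and values are illustrative):

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationRecord:
    """One audit-log entry per filing: tool used, prompts run,
    and the verification steps performed."""
    matter: str
    tool: str
    prompts: list[str]
    checks: list[str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical matter and tool names, for illustration only.
record = VerificationRecord(
    matter="Smith v. Jones (S.D.N.Y.)",
    tool="hypothetical-drafting-model-v1",
    prompts=["Summarize the attached authorities on tolling."],
    checks=[
        "All 14 citations verified against the opinions database",
        "Each quoted passage read against the cited opinion",
    ],
)
log_line = json.dumps(asdict(record))  # append to the matter's audit log
```

One JSON line per filing, appended to a file in the matter folder, is a defensible record that costs seconds to create.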
The AI legal-writing market in 2026 includes a dozen serious tools. They fall into three categories, and the category matters far more than the specific brand.
General-purpose language models. Useful for drafting, paraphrasing, and summarizing text you provide. Not safe for research or citation generation. Hallucination rates are the highest of any category. Free tiers often train on input, raising confidentiality concerns. Use with extreme care and only with content you verify independently.
Legal writing and editing assistants. Specialized for editing, polishing, cite-checking, and formatting. Good at suggesting clearer sentences, stronger openings, and cleaner citations. They do not generate substantive legal content from scratch, which keeps hallucination risk low: a tool that does not invent text cannot invent authority.
Retrieval-grounded legal AI. Built on top of a proprietary legal database. Every citation the tool produces is resolved against real primary sources — if the case does not exist in the database, the tool cannot cite it. This architectural constraint is the single most effective defense against hallucination. Ark goes a step further: every proposition of law the AI asserts is cross-checked against the salient-points extraction of the cited authority, flagging misgrounded as well as fabricated citations.
Ark's legal AI reads your matter, drafts motions and memos from real authority, and verifies every citation automatically. Sign up for a free trial and run it against your next brief.
Try Ark Free

Can I use ChatGPT (or another general-purpose LLM) for legal writing? You can use it for drafting, paraphrasing, and organizing text you provide. You cannot safely use it to find authority or generate citations — the hallucination rate is too high and the sanctions risk real. If you use a general-purpose LLM, treat every citation and every legal proposition as unverified until you confirm it against a primary source.
What is a hallucinated citation? A citation that looks correct (plausible case name, realistic reporter, proper format) but does not correspond to a real case. Language models generate plausible-sounding text by design; when they lack real retrieval, they produce citations that pattern-match to legal formatting without referencing any actual opinion. The term also covers real cases cited for propositions they do not actually stand for (misgrounded authority).
Do I have to tell clients I'm using AI? ABA Formal Opinion 512 stops short of a blanket disclosure requirement but notes that disclosure may be required when AI use is significant, affects fees, or involves client information. State bar opinions vary — Florida requires disclosure in some contexts; California suggests it as best practice. The safer practice is to disclose AI use in engagement letters and to discuss specifics with clients when AI meaningfully shapes work product.
Is a contract drafted by AI enforceable? Yes. The contract's enforceability depends on its content, not on how it was drafted. But the duty of competence still applies: if you signed off on an AI-drafted contract without reviewing it, and the contract contains unenforceable terms or malpractice-level errors, you are responsible. Review AI drafts as you would a new associate's work — carefully, and with particular attention to the patterns AI gets wrong.
Which AI tools are safest for legal writing? The safest tools share three architectural properties: (1) retrieval-grounded — every citation is resolved against a real legal database, not generated from patterns; (2) verified — propositions are cross-checked against the cited authority; (3) confidential — a contractual prohibition on training with user data. Tools like Ark, Casetext CoCounsel, Harvey, and LegesGPT are built around this model. General-purpose LLMs are not.
How much time does verification add? For a standard 20-page brief with 15 to 25 citations: about 60 to 90 minutes with a verification tool, versus 10 to 15 hours writing the same brief from scratch. The net time savings still favors AI even accounting for verification — which is why the workflow has caught on despite the ethics overhead. Tools that verify automatically during drafting compress this further, often to under 30 minutes.
Run your next brief through Ark's free citation checker before filing. Every citation verified against 7.4 million real opinions. No signup required for the first check.
Check My Brief Free

Disclaimer: This is general information about legal-writing practice and the current professional-responsibility landscape. Not legal advice. Consult your jurisdiction's specific rules of professional conduct and any controlling state bar opinions before relying on AI in filed documents.
Related resources for lawyers: Citation verification — how Ark prevents hallucinations • Ark vs ChatGPT for legal research • AI legal research for solo attorneys • Free citation checker