Lawyer's Practical Guide

AI Legal Writing: Draft Briefs & Motions Without Getting Sanctioned

By Ark Legal AI · ~11 min read

Since Mata v. Avianca in 2023, over 1,000 sanctions and disciplinary cases have been logged involving attorneys who submitted AI-generated filings with fake citations. The question in 2026 is no longer "should I use AI for legal writing?" — it's "how do I use it without losing my license?" This guide walks through the responsible workflow, the ABA Formal Opinion 512 requirements, and the single architectural principle that makes AI legal writing safe: every citation must verify against a real database before it reaches a judge.

Check Any Brief's Citations for Free

Paste your document — Ark verifies every case citation against 7.4 million court opinions in seconds. Find hallucinated cites before a judge does.

Run a Free Citation Check

Why Attorneys Keep Getting Sanctioned

In June 2023, a federal judge in the Southern District of New York sanctioned two attorneys and their law firm for submitting a brief in Mata v. Avianca that cited six judicial opinions that did not exist. The attorneys had used ChatGPT to research the brief. The cases were hallucinated. When opposing counsel couldn't find them, ChatGPT confidently provided more fake quotations from the fake cases. The attorneys were fined $5,000, required to notify the judges they had falsely cited, and became the first case study in every legal ethics CLE for the next three years.

Since then, Damien Charlotin's public AI hallucination database has catalogued over 1,000 additional cases. Not all resulted in sanctions, but many did. The pattern is consistent: an attorney uses a general-purpose language model (ChatGPT, Claude, Gemini, Copilot), the model invents plausible-sounding citations, and the attorney — trusting the tool or too rushed to check — files the document.

The frustrating part is that the attorneys involved are often competent lawyers. They understand precedent. They wrote good briefs for twenty years before AI existed. What changed is that they started using a tool that is designed to produce plausible-sounding text, not true text. Large language models predict tokens. They do not retrieve facts. When they return a citation, there is no guarantee the citation exists — the tool simply generated text that looks like a citation.

"Plausibility is not truth. The core risk of general-purpose AI in legal writing is that the output is designed to read well, not to be accurate."

The Two Failure Modes of AI Legal Writing

Every AI legal-writing sanction case falls into one of two categories:

1. Hallucinated Authority

The AI invents a case, statute, or regulation that does not exist. Varghese v. China Southern Airlines, Shaboon v. Egyptair — both fabricated in the Avianca filings. The AI generated plausible names, plausible citations, plausible pin cites, even plausible quotations. None of them existed.

2. Misgrounded Authority

The AI cites a real case — but the case does not stand for the proposition the AI claims. This is harder to catch because the case is real, the citation is formatted correctly, and a cursory Shepard's check will show the case is still good law. The failure is substantive: the cited case stands for something different from what the AI claims, or the AI attributes language to the case that doesn't appear in the opinion. Misgrounded authority is becoming the more common failure mode as tools get better at producing real citations.

The Stanford Institute for Human-Centered AI benchmarked leading AI tools on legal hallucination rates. General-purpose chatbots (ChatGPT and its peers) hallucinated on more than half of legal queries. Even specialized legal research tools hallucinated 17 to 33 percent of the time. Any AI-assisted brief that leaves your office unchecked is a sanctions risk.

What ABA Formal Opinion 512 Actually Requires

In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 — the first comprehensive guidance on generative AI in legal practice. State bars quickly followed with their own opinions (Florida, California, New York, North Carolina, and others). The substantive requirements cluster around four Model Rules:

Rule 1.1 — Competence

You must understand the benefits and risks of the technology you use. "I didn't know it could make up cases" is not a defense. Competence now includes knowing that language models hallucinate, understanding which tools verify against real databases, and maintaining proficiency as tools change.

Rule 1.6 — Confidentiality

Client information fed into an AI tool remains protected. Most free AI tools (ChatGPT free tier, Gemini, Copilot) train on user input by default. Feeding a client's confidential facts into a tool that trains on input can violate the duty of confidentiality. Use tools with explicit contractual commitments that prohibit training on user data.

Rule 3.3 — Candor to the Tribunal

You are responsible for every assertion in a filing, including citations. The tool did it is not a defense. If a citation is false, you filed a false statement to the court, regardless of whether you or the AI generated the text. The duty to verify rests on the signing attorney.

Rule 5.3 — Responsibility for Nonlawyer Assistance

AI is treated as a nonlawyer assistant. You have a duty to supervise its output the same way you would supervise a paralegal's draft — which means review, verification, and approval before submission.

Bottom line: the rules don't prohibit AI use. They require you to verify every output before it leaves your office, and to use tools with confidentiality commitments. Workflow, not tool choice, is the ethical lever.

The Responsible AI Legal Writing Workflow

After dozens of ethics opinions, sanctions cases, and firm-level policies, a standard workflow has emerged. The lawyers who use AI heavily and avoid sanctions follow some version of this six-step pattern:

Step 1 — Isolate the Task

AI is best at discrete, bounded writing tasks: first drafts, summaries, reorganization, rephrasing, outline generation. It is worst at open-ended research where it has to both find authority and synthesize it. Use AI for the first; do not trust it with the second.

Step 2 — Provide Real Authority as Input

Rather than asking the AI to "find cases about X," feed it cases you already know are good authority. Retrieved-first, drafted-second. This is the single biggest reduction in hallucination risk. If the AI is rewriting or summarizing text you provided, it cannot invent a case.

Step 3 — Draft First, Verify Every Citation

Let the AI produce a draft. Then, before you do anything else, run every citation through a verification tool. Ark's free citation checker validates against 7.4 million opinions. Dedicated tools like BriefCatch, Clearbrief, CiteGuard, and CiteCheck AI perform similar functions. No exceptions, no shortcuts. Citations verify or they come out.
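The existence half of this step can be sketched as a simple lookup. Below is a minimal, illustrative Python version: `KNOWN_OPINIONS` is a toy stand-in for a real opinion database, and the regex is a deliberately simplified "<volume> <reporter> <page>" pattern, not a production Bluebook parser (a real checker would use a citation-aware parser such as the open-source eyecite library and query an actual database):

```python
import re

# Toy stand-in for a real opinion database (Westlaw, Lexis, CourtListener, Ark).
KNOWN_OPINIONS = {
    "678 F. Supp. 3d 443",  # Mata v. Avianca, Inc. (S.D.N.Y. 2023)
}

# Simplified "<volume> <reporter> <page>" pattern. A production checker needs a
# real Bluebook-aware parser (e.g. the open-source eyecite library).
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+((?:[A-Z][a-z]*\.?\s?)+(?:\dd|\dth)?)\s+(\d{1,5})\b"
)

def check_citations(text: str) -> list[tuple[str, bool]]:
    """Return each citation-like string and whether it resolves to a known opinion."""
    results = []
    for m in CITATION_RE.finditer(text):
        cite = " ".join(part.strip() for part in m.groups())
        results.append((cite, cite in KNOWN_OPINIONS))
    return results

brief = (
    "See Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023); "
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
)
for cite, ok in check_citations(brief):
    print(f"{cite}: {'verified' if ok else 'NOT FOUND - remove or pull the opinion'}")
```

The architectural point is the same one the tools above implement: a citation either resolves against real primary sources or it comes out of the draft.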

Step 4 — Verify Substantive Propositions

For every sentence that states a rule of law, read the cited authority to confirm it actually stands for that proposition. Misgrounded authority is not caught by existence checks. Only substantive reading catches it. This is tedious — and it is the point at which AI still saves massive time despite the verification overhead.

Step 5 — Apply Your Own Judgment

AI drafts are competent but generic. The distinguishing facts of your case, the specific judge's preferences, local rules, the opposing counsel's likely reply — these come from you. The AI can give you 70 percent; the remaining 30 percent is where you earn your fee.

Step 6 — Document Your Process

Keep a record of what tool you used, what prompts you ran, and what verification steps you performed. If opposing counsel or the court later questions a citation, your verification log is your defense.
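A verification log can be as simple as an append-only file. Here is one minimal sketch, assuming a JSON-lines format; the field names and file path are illustrative, not any bar-mandated structure:

```python
import json
from datetime import datetime, timezone

def log_verification(path: str, *, tool: str, matter: str,
                     citations_checked: int, citations_removed: int,
                     steps: list[str]) -> dict:
    """Append one verification record per filing to a JSON-lines log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter": matter,
        "tool": tool,  # which AI tool produced the draft
        "citations_checked": citations_checked,
        "citations_removed": citations_removed,  # cites that failed verification
        "verification_steps": steps,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical matter name and counts, for illustration only.
entry = log_verification(
    "verification_log.jsonl",
    tool="general-purpose LLM (draft only)",
    matter="Smith v. Jones - motion to compel",
    citations_checked=18,
    citations_removed=2,
    steps=["existence check", "holding read", "quote check", "KeyCite"],
)
```

One record per filing, written at signing time, is enough to show a court or disciplinary board exactly what you verified and when.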

How to Choose an AI Writing Tool That Won't Sanction You

The AI legal-writing market in 2026 includes a dozen serious tools. They fall into three categories, and the category matters far more than the specific brand.

General-Purpose LLMs (ChatGPT, Claude, Gemini, Copilot)

Useful for drafting, paraphrasing, and summarizing text you provide. Not safe for research or citation generation. Hallucination rates are the highest. Free tiers often train on input, raising confidentiality concerns. Use with extreme care and only with content you verify independently.

Editorial Tools (BriefCatch, Clearbrief, Typeshift)

Specialized for editing, polishing, cite-checking, and formatting. Good at suggesting clearer sentences, stronger openings, cleaner citations. Do not generate substantive legal content from scratch. Low hallucination risk because they do not invent authority.

Research-Grounded AI (Ark, Casetext CoCounsel, Harvey, LegesGPT)

Built on top of a proprietary legal database. Every citation the tool produces is resolved against real primary sources — if the case does not exist in the database, the tool cannot cite it. This architectural constraint is the single most effective defense against hallucination. Ark goes a step further: every proposition of law the AI asserts is cross-checked against the salient-points extraction of the cited authority, flagging misgrounded as well as fabricated citations.

See Verification-First AI in Action

Ark's legal AI reads your matter, drafts motions and memos from real authority, and verifies every citation automatically. Sign up for a free trial and run it against your next brief.

Try Ark Free

What AI Drafts Well (and Where to Stay Manual)

Safe AI Territory

  • First-draft legal memos. Structured format, predictable sections, heavy paraphrasing of authority.
  • Discovery responses. Pattern work, standard objections, formulaic phrasing.
  • Routine motions. Motion to continue, motion to compel, stipulations — high-volume, procedural.
  • Deposition summaries and timelines. Organizing and summarizing facts you provide.
  • Demand letters and settlement letters. Persuasive writing from facts you supply.
  • Client-update emails and status reports. Plain-English explanation of complex proceedings.

High-Caution Territory

  • Appellate briefs. Where hallucinations are most likely to be caught and most severely sanctioned.
  • Novel legal arguments. AI defaults to conventional arguments; novel arguments are where authority is rarest and most likely to be invented.
  • Cross-jurisdictional research. Models often mix up jurisdictions or cite authority from the wrong state.
  • Ethics-adjacent filings. Any filing where your own competence may be scrutinized later.
  • Final, high-stakes documents. Signed settlement agreements, stipulated orders, consent decrees.

Pre-Filing Verification Checklist

Before Signing an AI-Assisted Filing

  • Every case citation verified to exist. Pull the opinion from a real database — Ark, Westlaw, Lexis, Google Scholar, or CourtListener. If you can't find it, it isn't real.
  • Every holding confirmed by reading the case. Not a headnote, not a summary — the opinion itself. Confirm the cited proposition actually appears.
  • Every quotation verified word-for-word. Search the opinion's full text for the quoted language. If the text isn't there, the quote is wrong.
  • Every statute and regulation pulled from primary source. Do not trust AI-reported statute text. Pull the current version from a government or primary source.
  • Every citation Shepardized or KeyCited. Confirm the authority is still good law. Superseded or overruled cases that AI hasn't updated are a subtler failure mode.
  • Local rules cross-checked. AI often defaults to federal procedure when your case is in state court. Verify every procedural rule you invoke.
  • Confidentiality preserved. Confirm the tool you used does not train on your input, and that client-identifying facts were handled appropriately.
  • Verification log saved. Note the tool, the prompts, and the verification steps you performed. Keep the log for as long as the matter is live, plus the applicable limitations period.
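The word-for-word quote check in the list above can be automated as a normalized substring test. A minimal sketch, assuming you already have the opinion text locally; whitespace and curly quotes are normalized so formatting differences don't mask a genuine match:

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes before comparing."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(quote: str, opinion_text: str) -> bool:
    """True only if the quoted language appears verbatim in the opinion."""
    return normalize(quote) in normalize(opinion_text)

# Hypothetical opinion snippet, for illustration only.
opinion = "The Court finds that counsel  acted in\nsubjective bad faith."
print(quote_appears("acted in subjective bad faith", opinion))  # prints True
print(quote_appears("acted in objective bad faith", opinion))   # prints False
```

A failed check doesn't always mean the quote is fabricated — it may be paraphrased or altered — but either way the language cannot go inside quotation marks as written.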

People Also Ask

Can I use ChatGPT to write a legal brief?

You can use it for drafting, paraphrasing, and organizing text you provide. You cannot safely use it to find authority or generate citations — the hallucination rate is too high and the sanctions risk real. If you use a general-purpose LLM, treat every citation and every legal proposition as unverified until you confirm it against a primary source.

What is a hallucinated legal citation?

A citation that looks correct (plausible case name, realistic reporter, proper format) but does not correspond to a real case. Language models generate plausible-sounding text by design; when they lack real retrieval, they produce citations that pattern-match to legal formatting without referencing any actual opinion. The term also covers real cases cited for propositions they do not actually stand for (misgrounded authority).

Does the ABA require lawyers to disclose AI use to clients?

ABA Formal Opinion 512 stops short of a blanket disclosure requirement but notes that disclosure may be required when AI use is significant, affects fees, or involves client information. State bar opinions vary — Florida requires disclosure in some contexts; California suggests it as best practice. Safer practice is to disclose AI use in engagement letters and discuss specifics with clients when AI meaningfully shapes work product.

Are AI-drafted contracts enforceable?

Yes. The contract's enforceability depends on its content, not on how it was drafted. But the duty of competence still applies: if you sign off on an AI-drafted contract without reviewing it, and the contract contains unenforceable terms or malpractice-level errors, you are responsible. Review AI drafts as you would a new associate's work — carefully, and with particular attention to the patterns AI gets wrong.

What's the safest AI tool for legal writing?

The safest tools share three architectural properties: (1) retrieval-grounded — every citation resolved against a real legal database, not generated from patterns; (2) verified — propositions cross-checked against the cited authority; (3) confidential — contractual prohibition on training with user data. Tools like Ark, Casetext CoCounsel, Harvey, and LegesGPT are built around this model. General-purpose LLMs are not.

How long does it take to verify an AI-drafted brief?

For a standard 20-page brief with 15-25 citations: about 60 to 90 minutes with a verification tool, versus 10-15 hours writing the same brief from scratch. The net time savings still favors AI even accounting for verification — which is why the workflow has caught on despite the ethics overhead. Tools that verify automatically during drafting compress this further, often to under 30 minutes.

The Single Safest Thing You Can Do Today

Run your next brief through Ark's free citation checker before filing. Every citation verified against 7.4 million real opinions. No signup required for the first check.

Check My Brief Free

Disclaimer: This is general information about legal-writing practice and the current professional-responsibility landscape. Not legal advice. Consult your jurisdiction's specific rules of professional conduct and any controlling state bar opinions before relying on AI in filed documents.

Related resources for lawyers: Citation verification — how Ark prevents hallucinations · Ark vs ChatGPT for legal research · AI legal research for solo attorneys · Free citation checker