A bad week to be a paralegal in a hurry. Sullivan & Cromwell — the white-shoe firm that advises OpenAI on its "safe and ethical" deployment — sent a federal bankruptcy judge an emergency letter apologising for an AI-hallucinated brief in the Prince Global Holdings case. Across the country, the Sixth Circuit fined two appellate lawyers $15,000 each for fake citations, and a San Diego attorney absorbed $110,000 — the largest AI-hallucination penalty on record.
Yet in the same week, Thomson Reuters' CoCounsel crossed a million paying users and teased an "agentic" successor built on the Claude Agent SDK; JusticeText rolled its body-camera review tool into seven of the country's 22 statewide public defender offices; and the Innocence Discovery Lab published peer-reviewed work on extracting officer and prosecutor identities from 300,000 pages of exoneration files. The legal-AI stack is splitting into two systems: a curated, audited tier for the well-resourced, and a Wild West for the rest — most painfully for pro se litigants, who account for roughly 59% of all hallucinated filings in the global database.
The biggest legal-AI developments of the week.
🍼 Famous lawyers used a robot, robot lied, lawyers begged judge not to be mad.
The most embarrassing AI fumble of the year landed on a federal docket on 18 April. Andrew Dietderich, a partner at Sullivan & Cromwell, wrote Chief Bankruptcy Judge Martin Glenn of the Southern District of New York to confess that an emergency motion in the Chapter 15 case of Prince Global Holdings — the British Virgin Islands shell of a Cambodian forced-labour scam network — contained fabricated citations and legal errors generated by AI. Opposing counsel Boies Schiller Flexner caught the mistakes first.
The firm's response read like a disaster-recovery playbook: comprehensive AI policies, mandatory training, supervisory review — none of which were followed in this filing. The dark joke, of course, is that Sullivan & Cromwell advises OpenAI on the "safe and ethical deployment" of the very technology that just sandbagged it. CNN covered the story under the headline "another hallucinated court filing highlights the difference between Silicon Valley and the rest of the world."
What this story is actually about: the failure mode of AI in law is not the rookie solo practitioner. It is the well-trained associate at the highest-billing firm in America, working under a deadline, trusting the tool past the point at which it deserves trust. Sanctions follow next week.
Sources: Bloomberg Law · Above the Law · David Lat
🍼 Judges done being polite. Bring lawyer money.
Two cases in two months turned the warnings into law. On 13 March, the Sixth Circuit handed down Whiting v. City of Athens, sanctioning Tennessee attorneys Van Irion and Russ Egli $15,000 each — plus the appellees' fees and double court costs — for "over two dozen fake citations and misrepresentations" in consolidated appellate briefs. The opinion, importantly, declined to single out AI; it ruled that any unverified citation, "however generated," is a Rule 11 violation. The framing matters: the court refused to make AI a special category of misconduct, and instead made the lawyer's duty to verify the only category that exists.
Six weeks later, a magistrate judge in California imposed $110,000 on San Diego attorney Stephen Brigandi — $96,000 in direct sanctions plus fees and costs — the largest AI-hallucination penalty in U.S. history. Q1 2026 totals reached $145,000 across U.S. courts. NPR covered the trend on 3 April.
Meanwhile, the underlying database keeps swelling. Damien Charlotin's tracker — the canonical academic reference, hosted at his personal site — counts 1,227 documented filings worldwide and adds five to six daily. The most uncomfortable statistic: 59% of those filings come from pro se litigants who turned to AI because they could not afford counsel.
Sources: LawSites · PlatinumIDS · NPR · Noah News
🍼 Big legal robot now plans its own homework instead of waiting to be told.
Thomson Reuters announced on 25 February that CoCounsel — the assistant it acquired with Casetext in 2023 — had passed a million paid users in 107 countries. On 23 April it went further, opening the beta of a "fiduciary-grade" agentic successor built on Anthropic's Claude Agent SDK.
The pitch: a senior associate, not a first-year. Lawyers describe an objective; CoCounsel plans the work, retrieves authority from Westlaw and Practical Law, drafts, verifies citations remain in good law, and adapts mid-task. The framework, the company says, scores not just answers but the reasoning chain that produced them — an explicit response to the hallucination crisis chronicled above.
Why this matters for justice work: Miami-Dade Public Defender's Office became the first PD office in America to issue 100 attorneys CoCounsel licences. The platform that started as transactional research is now sitting beside indigent defenders generating cross-examination questions and motion drafts. Whether the agentic version — which retrieves, plans, and verifies in a closed loop — actually solves the hallucination problem or merely hides it deeper in the workflow is the question of the next twelve months.
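Thomson Reuters has not published the loop's internals. As a rough sketch of the plan-retrieve-draft-verify shape described above — every helper here (plan_steps, retrieve_authority, verify_citations) is a hypothetical stand-in, not a CoCounsel or Claude Agent SDK API:

```python
from dataclasses import dataclass

@dataclass
class Step:
    task: str
    output: str = ""
    citations_verified: bool = False

def plan_steps(objective: str) -> list[Step]:
    """Break an objective into ordered research/drafting tasks (stub)."""
    return [Step("find controlling authority"), Step("draft motion section")]

def retrieve_authority(task: str) -> list[str]:
    """Stand-in for retrieval against a curated corpus (e.g. Westlaw)."""
    return ["Whiting v. City of Athens (6th Cir.)"]

def draft(task: str, authorities: list[str]) -> str:
    return f"[draft for {task!r} citing {len(authorities)} authorities]"

def verify_citations(text: str) -> bool:
    """Check every cited case resolves and remains good law (stub).
    In a real system this is the step that catches hallucinations."""
    return True

def run_agent(objective: str) -> list[Step]:
    steps = plan_steps(objective)
    for step in steps:
        authorities = retrieve_authority(step.task)
        step.output = draft(step.task, authorities)
        # Verification is in-band, not post-hoc review: a draft that
        # fails the citation check is redone before the loop proceeds.
        if not verify_citations(step.output):
            step.output = draft(step.task, retrieve_authority(step.task))
        step.citations_verified = verify_citations(step.output)
    return steps

if __name__ == "__main__":
    for s in run_agent("oppose motion to dismiss"):
        print(s.task, "->", s.citations_verified)
```

The structural claim of the "agentic" pitch lives in that middle block: verification sits inside the loop rather than after it, so a failing draft never becomes output.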
Sources: LawSites · Artificial Lawyer · PR Newswire
Quick hits: launches, money, court orders, and consolidation moves.
🍼 Big legal-AI fish gets $40M to eat smaller legal-AI fish.
Canadian contract-AI vendor Spellbook took $40M in debt financing from RBCx — the Royal Bank of Canada's innovation arm — to acquire smaller competitors. The company tripled revenue in 2025 and is on track for $100M annual recurring revenue (the steady-state subscription revenue venture investors actually care about). The market is consolidating; the long tail of legal-AI startups is being pulled in.
🍼 Suing-people robot is now worth a billion dollars.
Eve raised $103M led by Spark Capital, eight months after its Series A. The platform serves plaintiff law firms — employment, personal injury, mass tort — and has signed 350+ new firms since January 2025, total 450+. Notable because most legal-AI capital has flowed to defence-side and BigLaw; Eve is the first nine-figure plaintiff-side bet.
🍼 The federal courts are writing rules for robot evidence.
The Judicial Conference's draft Rule 707 — which governs evidence the proponent acknowledges was created by AI — closed public comment on 16 February. It does not address evidence whose authenticity is disputed (the deepfake problem). That gap is the next fight: how the rules of evidence handle AI-fabricated content offered as authentic.
🍼 Public defenders got the same fancy robot the rich firms use.
The Miami-Dade Public Defender's Office — 100 attorneys — issued CoCounsel licences across the office. Reported uses: cross-examination prep, motion drafting, multi-jurisdictional surveys. UC Berkeley Law is maintaining a public catalogue of every AI tool deployed in indigent defence, which is now the most useful single document on this topic.
🍼 Body-cam-reading robot quietly took over a third of the country's defenders.
JusticeText's annual recurring revenue reached $4M as of late 2025. The platform transcribes body-worn camera footage, jail calls, 911 audio, and interrogation video, and auto-flags moments — Miranda warnings, arrest, requests for counsel — that defenders need to find. New 2026 partnerships: Montana, Kansas, Iowa. Renewals: Tennessee, Massachusetts. Kentucky DPA reports "hours per case" saved.
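JusticeText's actual pipeline (speech recognition plus trained models) is not public; a toy sketch of just the flagging step — hypothetical keyword patterns over timestamped transcript segments — looks something like this:

```python
import re

# Hypothetical pattern set: the moments a defender needs to locate fast.
FLAGS = {
    "miranda": re.compile(r"right to remain silent|anything you say", re.I),
    "counsel": re.compile(r"\b(i want|get me|talk to) (a |my )?lawyer\b", re.I),
    "arrest":  re.compile(r"\byou('re| are) under arrest\b", re.I),
}

def flag_transcript(segments):
    """segments: list of (start_seconds, text). Returns flagged moments."""
    hits = []
    for start, text in segments:
        for label, pattern in FLAGS.items():
            if pattern.search(text):
                hits.append((start, label, text))
    return hits

segments = [
    (12.4, "You are under arrest."),
    (31.0, "You have the right to remain silent."),
    (95.2, "I want a lawyer before I say anything."),
]
for start, label, text in flag_transcript(segments):
    print(f"{start:7.1f}s  [{label}]  {text}")
```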
🍼 Pro se filer asked ChatGPT for legal help; lawsuit says ChatGPT is now an unlicensed lawyer.
A complaint filed in 2026 alleges that ChatGPT engaged in unauthorised practice of law by drafting filings for a pro se litigant. It is reportedly the first time an AI vendor has been directly named on UPL grounds. The doctrinal question — does generative output to a non-lawyer constitute "practising law"? — has been theoretical for two years. It is now in front of a court.
🍼 Pasting client secrets into a robot might count as telling everyone.
A federal court ruled that documents produced via AI tools — especially third-party hosted models — are not protected by attorney-client privilege when privileged information is fed into them. The implication: enterprise legal-AI vendors that don't sign business associate agreements or run on-premises models are now creating privilege-waiver risk on every query.
🍼 Big legal robot now speaks Australian and Indian law.
Harvey's April update consolidated Australian and Indian legal data sources, opened Microsoft Word in-document editing, and launched Harvey Academy. Stinson — a 500-lawyer U.S. firm — went firmwide. HSBC deployed Harvey across its in-house legal platform. The international and corporate-legal markets are where the next $500M of revenue lives.
Original reporting from academic, practitioner, and innocence-org sources.
🍼 "If you cite it, link to it" — the simplest fix anyone has proposed.
The same idea is appearing independently in three places: National Law Review editorial, Eve.legal commentary, and Spellbook's compliance brief. The argument is mechanical: require every citation in a court filing to include a working hyperlink to a verifiable source (Westlaw, free-law repository, court docket). Hallucinated cases cannot be hyperlinked, because they do not exist. The rule is forcing-function rather than punishment.
Why convergence matters here: this is not a vendor pushing a feature. It is three unconnected commentators arriving at the same procedural reform within four weeks. That pattern usually precedes a circuit-level standing order.
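A minimal sketch of how the forcing function could be enforced mechanically — with a deliberately toy citation regex and a plain HTTP existence check standing in for a real resolver such as a Westlaw or CourtListener lookup:

```python
import re
from urllib.request import Request, urlopen

# Toy reporter pattern, e.g. "598 U.S. 651" or "45 F.4th 210".
CITE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.[23]d|F\.4th)\s+\d{1,4}\b")
LINK = re.compile(r"https?://\S+")

def link_resolves(url: str, timeout: float = 5.0) -> bool:
    """A hallucinated case has no page to land on; a HEAD request is
    the cheapest possible existence check."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def audit_brief(text: str) -> list[str]:
    """Return citations that lack a resolvable hyperlink on the same line."""
    problems = []
    for line in text.splitlines():
        cites = CITE.findall(line)
        links = LINK.findall(line)
        if cites and not any(link_resolves(u) for u in links):
            problems.extend(cites)
    return problems
```

The design choice is the point of the proposal: the check requires no judgment about AI use at all, only that every cited case exist somewhere a reader can follow.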
🍼 People too poor for lawyers used the cheap robot. Robot lied. They got sanctioned. The end.
Five separate sources — PlatinumIDS, Georgetown Legal Ethics, Fisher Phillips, NPR, and NBC News — converged on the same observation: 59% of all sanctioned hallucination filings come from people representing themselves because they could not afford a lawyer. The technology marketed as "democratising legal access" is, on the worst-case side, generating fake cases that get poor litigants thrown out of court.
Pro se employment filings rose from 4,100 to 6,400 year over year — a 56% jump — driven in significant part by AI-assisted self-drafting. Some of those litigants are winning. Many are being sanctioned. The system has not figured out how to tell the difference at intake.
🍼 Researchers turned every wrongful-conviction file into a searchable database to find the dirty cops.
The most ambitious deployment in this space. The Innocence Discovery Lab — a collaboration between the Innocence Project New Orleans and the National Registry of Exonerations — used large language models to extract the names, actions, and case-roles of every police officer, prosecutor, and crime-lab analyst across 300,000 pages of documents from cases of the wrongfully convicted. The result is a queryable database of misconduct patterns: which detective, which prosecutor, which lab tech appeared across how many bad convictions.
This is the structurally interesting move in justice-side AI. Most legal-AI work deals with one document or one case. This deals with the graph across cases — pattern detection no human team could have done by hand. Published in the Wrongful Conviction Law Review, May 2024; tooling now in active production use.
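The Lab's schema and tooling are not public; a minimal sketch of the shape of the idea — hypothetical extracted records loaded into SQLite, then the across-case recurrence query — might look like:

```python
import sqlite3

# Hypothetical record shape an LLM extraction pass might emit per document:
# (case_id, actor, role, action). Names and cases below are invented.
rows = [
    ("ex-1994-017", "Det. J. Doe", "police",     "withheld witness statement"),
    ("ex-2001-042", "Det. J. Doe", "police",     "coached identification"),
    ("ex-2001-042", "ADA R. Roe",  "prosecutor", "suppressed Brady material"),
]

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE misconduct
               (case_id TEXT, actor TEXT, role TEXT, action TEXT)""")
con.executemany("INSERT INTO misconduct VALUES (?, ?, ?, ?)", rows)

# The across-case question no human team could answer by hand:
# which actors recur across multiple wrongful convictions?
for actor, role, n in con.execute(
        """SELECT actor, role, COUNT(DISTINCT case_id) AS n
           FROM misconduct GROUP BY actor, role
           HAVING n > 1 ORDER BY n DESC"""):
    print(f"{actor} ({role}): appears in {n} exoneration files")
```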
🍼 A tiny innocence office got back the time it used to spend transcribing paperwork.
The Montana Innocence Project reported that Google's Pinpoint tool reduced post-conviction relief petition transcription from up to three days down to "almost instantly." Small-shop economics: every hour saved on paperwork is an hour spent on substantive investigation. The MTIP's internal claim is that AI-assisted workflow lets them take on more clients per year — the actual measure of impact in this work is exoneration count, and exonerations are constrained by lawyer-hours.
🍼 AI reads every police interview from a case at once and flags where the story changes.
The California Innocence Project uses generative AI to identify inconsistencies across multiple witness statements within a single case file — the classic across-time contradiction that separates reliable testimony from unreliable. Reported in the ABA Journal. Specific tooling unspecified; the workflow appears to be a custom build over a hosted model.
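Assuming the workflow first extracts structured facts from each statement (that extraction step is stubbed with invented data here), the contradiction pass itself is mechanically simple:

```python
from itertools import combinations

# Hypothetical structured facts an extraction pass might pull from each
# statement; the CIP's actual tooling is unspecified in the ABA report.
statements = {
    "2024-03-01 initial interview": {"time_seen": "9pm",  "car_color": "blue"},
    "2024-03-15 follow-up":         {"time_seen": "11pm", "car_color": "blue"},
    "2025-01-10 trial prep":        {"time_seen": "11pm", "car_color": "black"},
}

def contradictions(stmts):
    """Flag any fact whose value changes between two statements."""
    flags = []
    for (a, facts_a), (b, facts_b) in combinations(stmts.items(), 2):
        for key in facts_a.keys() & facts_b.keys():
            if facts_a[key] != facts_b[key]:
                flags.append((key, a, facts_a[key], b, facts_b[key]))
    return flags

for key, a, va, b, vb in contradictions(statements):
    print(f"{key!r} shifted: {va!r} ({a}) -> {vb!r} ({b})")
```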
🍼 After the privilege ruling, "private robot in your office" became a real product category.
The federal-court privilege ruling cited above creates a new category of demand: legal-AI that runs on a firm's own infrastructure or under a contractually privileged relationship. Almost no current product satisfies this for small and mid-sized firms. CoCounsel and Harvey have enterprise data agreements; everyone below them is exposed. The market gap is obvious; the building has not started in earnest.
Adjacent signal: an arXiv paper titled "How Can AI Augment Access to Justice? Public Defenders' Perspectives on AI Adoption" (March 2026) reports that public defender offices identified data sovereignty — keeping client data off third-party clouds — as their second-highest concern after cost. The need is real and unaddressed.
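What "private robot in your office" means in practice is unglamorous: route queries to a model running on the firm's own hardware so no third-party host ever sees privileged material. A minimal sketch, assuming an Ollama-style local endpoint (the URL and response shape below follow Ollama's documented /api/generate convention; any local inference server would do):

```python
import json
from urllib.request import Request, urlopen

# Stays inside the LAN: privileged content never crosses the firm boundary.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = Request(LOCAL_ENDPOINT, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

print(ask_local_model("Summarise the attached client affidavit: ..."))
```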
🍼 Some of the "AI mistakes" might just be humans not reading what the robot wrote.
An Above the Law piece this week made the contrarian case: many "AI hallucination" sanctions look more like classic copy-paste failures by under-supervised humans, then blamed on the model after the fact. The point is well-aimed. The Sixth Circuit's "however generated" framing in Whiting implicitly accepts it: the duty is the lawyer's, regardless of which technology created the error.
🍼 Every fancy law firm is rewriting its AI rulebook this week.
The Sullivan & Cromwell story has triggered policy reviews across the AmLaw 100. The shared diagnosis: every major firm has a written AI policy; the failure mode is not policy absence, it is policy enforcement under deadline pressure. The next round of firm-level controls will likely include mandatory hyperlink-verification at the document-management-system level — i.e. a brief cannot be filed unless every citation resolves to a known database.
🍼 The new way for legal robots to brag: "trust me, I'm a fiduciary."
Thomson Reuters' use of "fiduciary-grade" in its CoCounsel Legal launch is the first major instance of a legal-AI vendor borrowing terminology from regulated finance. There is no fiduciary regulator for AI; the phrase is purely positional. Watch for it to spread (or to draw a regulator's attention) over the next two quarters.
🍼 The same people using AI to free the innocent are warning it could put more innocent people in prison.
The Innocence Project published a remarkable piece — "Wrongful Convictions Exposed Unvalidated Science — Are We Repeating With AI?" — drawing the parallel between bite-mark analysis, hair comparison, bullet-lead matching (all debunked forensic methods that put innocent people in prison) and AI-based facial recognition, predictive policing, and gunshot-detection systems being adopted across U.S. law enforcement. The position is precise: the organisation's exoneration work depends on AI document review, but the organisation's policy posture is hostile to AI used as evidence in convictions. Few institutions hold both positions coherently.
Philosophical and structural realisations from this week's signal.
🍼 The hard part of being a justice lawyer used to be reading; now it's knowing what to look for.
For most of legal history, appeals were rate-limited by human reading speed. The decisive passage in a 50,000-page record might take a smart judge 30 seconds to grasp — but finding it required 80 hours of reading. AI compresses reading time toward zero. What remains is the asymmetric knowledge of what to search for: the names that should not appear together, the timeline that should not contradict, the witness whose story should not have shifted. This is why innocence projects and defence-side users adopt AI faster than prosecutors. Defence digs through somebody else's record. Prosecution sets the record. Tools that compress reading time disproportionately benefit whoever is drowning.
🍼 The judge said: "I don't care if a robot wrote it. You signed it. You read it."
The Sixth Circuit's decision in Whiting declined to single out AI as a unique evil and ruled instead that any unverified citation, "however generated," violates Rule 11. This is the structurally correct move. AI is a tool; the duty of verification is the lawyer's. Carving out AI as a separate category would have created two regimes: one for hand-drafted briefs and one for machine-drafted briefs. Whiting says there is one regime and the lawyer is responsible inside it.
🍼 Bad fingerprint science put people in prison. Bad AI might do the same.
The Innocence Project's framing is the most clarifying single insight in this week's research: half of the organisation's wins involved wrongful convictions caused by forensic methods that turned out not to work — bite marks, hair comparison, bullet-lead matching, "shaken baby" syndrome. Each was admitted as expert evidence for decades before being debunked. The argument: AI face-recognition, predictive policing, gunshot-detection systems lack independent verification, error-rate disclosure, and the testing infrastructure that the criminal-justice system has spent thirty years insisting forensic methods provide. The pattern is recognisable. The critique is being made now, before convictions, rather than later.
🍼 The same week, OpenAI's law firm got burned by AI lies, and OpenAI got sued for being a fake lawyer.
Within seven days: Sullivan & Cromwell — counsel of record to OpenAI on safe-and-ethical-AI matters — filed an emergency apology for an AI-hallucinated brief. In a separate proceeding, a complaint accused OpenAI itself of unauthorised practice of law for assisting a pro se litigant. The legal system is now adjudicating a recursive situation: lawyers using AI to advise an AI company, whose AI is allegedly practising law without a licence, while sanctioning lawyers for letting that AI write briefs. There is no clean exit from this loop without some combination of: (a) AI-vendor licensing as legal-services providers, (b) hard limits on consumer-facing legal generation, or (c) a novel doctrine that treats AI tools the way the profession already treats research assistants. None of those exist yet.
Developing stories with enough context to act on.
🍼 The federal rulebook for AI evidence is being written right now.
Status: Public comment closed 16 February 2026. Judicial Conference review and Supreme Court transmission to Congress next. Earliest effective date: 1 December 2027 if approved without change. Scope: Acknowledged AI-generated evidence only — does not yet address disputed authenticity (deepfakes). Implication: The first federal evidentiary regime for AI-generated content is a year-and-a-half away and is already incomplete. Practitioners should treat the rule's gaps — particularly authenticity disputes — as the next legislative front.
🍼 Different federal courts are making different AI rules — chaos for anyone who litigates in more than one.
What's happening: 300+ federal judges have issued individual standing orders. Some require disclosure of AI use; some require certification of citation verification; some require the specific tool to be named (ChatGPT-4, Claude, Spellbook). Bloomberg Law's tracker is the canonical reference. Implication: Multi-jurisdiction litigation now requires per-judge AI-disclosure compliance. Either the Judicial Conference will issue a uniform rule or the malpractice insurance industry will start demanding firms maintain a per-judge compliance checklist as a condition of coverage.
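What per-judge compliance looks like in practice is a lookup table consulted before every filing. A sketch with entirely illustrative entries — these are not actual standing-order contents:

```python
# Hypothetical per-judge disclosure matrix; entries are invented.
STANDING_ORDERS = {
    ("S.D.N.Y.", "Judge A"): {"disclose_ai_use": True, "name_tool": True,
                              "certify_verification": True},
    ("N.D. Cal.", "Judge B"): {"disclose_ai_use": True, "name_tool": False,
                               "certify_verification": True},
}

def filing_requirements(court: str, judge: str) -> dict:
    """Look up what an AI-assisted filing must disclose before this judge;
    default to the strictest profile when the judge is unknown."""
    strictest = {"disclose_ai_use": True, "name_tool": True,
                 "certify_verification": True}
    return STANDING_ORDERS.get((court, judge), strictest)

print(filing_requirements("S.D.N.Y.", "Judge A"))
```

The default-to-strictest fallback is the detail malpractice insurers would care about: unknown judges get the maximal disclosure, not the minimal one.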
🍼 If a court rules ChatGPT is "practising law," half the legal-AI industry has to redesign its product.
Why it matters: Every state bar has a statutory definition of "practising law." If a court holds that a generative model providing case-specific advice to a non-lawyer crosses that line, every consumer-facing legal-AI product in the country has to add lawyer review, disclaim individual advice, or pull out of the consumer market. What we don't know yet: whether the case proceeds to ruling on the merits or settles. Watch the docket.
The technology compressing reading time has not yet learned to hold itself to account; the courts are doing it instead, one $15,000 fine at a time.