Fuck yeah. Some damned AI sanctions, coming down.
Whiting v. City of Athens, Tenn.
We wholeheartedly agree. Irion and Egli breached the trust that we must have
in the lawyers appearing before us. They have brought the profession into disrepute.
Irion's and Egli's failure to comply with the basic rules of our profession has forced us
and the City to unnecessarily expend time and resources on a case that should have been
litigated and resolved straightforwardly but was not. More importantly, by breaching our
trust, we can no longer rely on the representations in Irion's and Egli's briefs, harming
both their clients (whose cases are now viewed with skepticism) and this court (who must
now independently verify everything Irion and Egli write). Finally, Irion and Egli have
sullied the reputation of our bar, which now must litigate under the cloud of their
conduct.
Elsewhere the opinion notes that:
We could have gone much further. Other courts have dismissed cases,
disqualified lawyers, or revoked their pro hac vice status for similar conduct.
But hell, I'll take something more than a slap on the wrist for using AI slop in law. Via this
Bluesky thread, which has some additional commentary and pull quotes.
So you might have heard about this thing where in 2021, a company called Krafton
acquired/entered into a deal with Unknown Worlds, the developer of the game Subnautica.
The contract included a $250M bonus if they hit revenue targets by 2025, $225M of that
going to Unknown Worlds' upper management team.
The CEO of Krafton then apparently decided that they were gonna have to pay too much to
Unknown Worlds, and started to hobble the release and get in the way of said revenue
targets.
So far just garden variety C Suite douchebaggery.
As the smackdown from the lawsuits starts to unfold, it turns out that Krafton CEO
Changhan Kim says, well, yes, he did consult with ChatGPT on the Subnautica 2 mess, and
also deleted some of those queries, but he had a good reason: He didn't want OpenAI
finding out about it.
Okay, so he's not just trying to weasel out of a deal, he's not just... whatever... enough
to turn to an LLM for legal advice; he also thinks that he can use a cloud-hosted service,
delete something, and that means the cloud service provider hasn't ingested that data.
The opinion is
here; Rami Ismail
(رامي) @ramiismail.com summarizes it as:
Subnautica devs v. Krafton ruling is ABSOLUTELY stunning. Start at the top of
page 32 and read until the end of that section on page 37.
Krafton CEO was warned by their legal personnel to not follow ChatGPT into
what is likely Some Of The Dumbest Legal Shit Ever, CEO believed the plagiarism bot.
Via.
Meanwhile, the other double-face-palm that's floating around the Internets these days is
Kettering Adventist Healthcare v. Collier. The Volokh
Conspiracy at Reason: "The Undersigned Cannot Recall a Comparable Instance of Such Brazen
and Repeated Dishonesty" in 55 Years as a Judge.
Over on Bluesky,
Mrs. Detective Pikajew, Esq. @clapifyoulikeme.favrd.social has a bunch of highlights.
The complaint, in which...
After Kettering received multiple complaints from IRG
staff about Collier's unprofessional behavior and leadership style, Kettering suspended
Collier on June 20, 2025.
So after getting canned, she tried to extort "8 figures" from Kettering; the complaint
lays out ways in which she was likely planning this from within two weeks of getting hired
in the first place.
Anyway, she gets smacked down, and turns to ChatGPT, which tells her that she should
continue legal shenanigans. And not only does she turn to the sycophancy machine, her
lawyer does too, and that's where shit gets real.
PDF of the decision.
Mrs.
Detective Pikajew, Esq. thread switches to the transcript, and ... yash
@yashwinacanter.bsky.social
i know they're talking about disbarring but it's really funny to read/imagine
this as like they have fucked up so bad that we have no choice but to Excommunicate Them
From Ohio
Anyway, don't turn to LLMs for legal advice. If your lawyers turn to LLMs for legal
advice, fire them.
Which brings us around to Designed to Cross: Why Nippon Life
v. OpenAI Is a Product Liability Case.
Graciela Dela Torre settled a long-term disability claim with prejudice in
January 2024. Feeling she had been misled by her attorney, she uploaded his correspondence
to ChatGPT. The chatbot validated her distrust. She fired her lawyer, attempted to reopen
the settled case, and filed dozens of motions that courts found served no legitimate legal
purpose. In March 2026, Nippon Life Insurance Company of America sued OpenAI for $10.3
million.
The problem, of course, is that just as the few-thousand-dollar slaps on the wrist that
we've seen for lawyers trying to justify making up bullshit with the aid of an LLM aren't
effective, $10.3M is not gonna slow down OpenAI marketing ChatGPT as a tool to clog up the
courts with bullshit.
Kirk.is: The AI Vampire is some commentary around
Steve Yegge's The AI
Vampire. Yegge lost me... well, before Gas Town, but Kirk's questions lead me to the
thought that my work value is, yes, understanding code, and having a bunch of deep thinking
about software systems, but it's also about being able to think critically about systems.
And one of the big challenges about both the modern world, and about LLM hype, is that I'm
trying to figure out what that means in a world where the "thought leaders" are spewing
bizarre-ass bullshit, where "momentum" is everything, and "influencer" appears to be way
more remunerative than understanding.
Bonus: Aram J.
French's Mandatory Roller Coaster comic: Vibe Construction.
There are a few production decisions I disagree with (I think an arrangement should leave a little space), and some interesting vocal choices, but... Rick Astley's new single is totally listenable.
https://www.youtube.com/watch?v=zRVjZ2DJ9Cg
Ageless Linux
Software for humans of indeterminate age. We don't know how old you are. We
don't want to know. We are legally required to ask. We won't.
Including The Ageless Device
A physical computing device designed to satisfy every element of the California
Digital Age Assurance Act's regulatory scope while deliberately refusing to comply with its
requirements. The device costs less than lunch and will be handed to children.
This feels kinda fascinating: a Facebook ad for "Granola.ai" has a testimonial from Deedy, partner at Menlo Ventures: "Granola is one of the best made 'AI' apps that I've used this year."
Is AI as a phrase becoming poisoned enough that it's getting quoted?
https://www.facebook.com/perma...E7rHBHqBVb5mQl&id=61579723227585
Stanford Law School: Designed to Cross: Why Nippon Life v. OpenAI
Is a Product Liability Case
Mark Dominus linked to the actual complaint.
Unfortunately, I don't think $10.3M is nearly enough, unless it opens up the floodgates
against OpenAI's malfeasance.
fenchelmit
@fen@zoner.work
heard "be the elephant you want to see in the room" earlier and gosh
if that hasn't stuck with me
maxine 🇵🇸
@maxine@hachyderm.io
LLM users respect a chatbot more than potential contributors is the worst part
of all this. Everyone was capable of writing basic docs all along. They just didn't want to
for a fellow human.
I don't know what exactly is it when you treat people as things and things as
people, but it sure is fucking gross.
Oh, hey, it turns out that removing all skill and turning your pipeline over to commodity
generation that anyone who wants that kind of slop can do themselves might have
consequences: Futurism": BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI.
Now, three years after its AI pivot, the writing is on the wall. The company
reported a net loss of $57.3 million in 2025 in an earnings report
released on Thursday. In an official
statement, the company glumly hinted at the possibility of going under sooner rather than later, writing that there is
"substantial doubt about the Company's ability to continue as a going concern."
Via and via.
Add this to your morning's comics: The Joy Of Tech: Support
Group for AI Chatbots. Fediverse link.
Ars Technica: Supply-chain attack using invisible code
hits GitHub and other repositories. As David Gerard points out
it's kind of a rehash of the old (March 2024) trick of using Unicode tags for prompt injection.
This one almost needs its own post. AI changes how you think: Cornell Chronicle: AI assistants can sway writers'
attitudes, even when they're watching for bias
Previous misinformation research has shown that warning people before they're
exposed to misinformation, or debriefing them afterward, can provide immunity against
believing it, said Sterling Williams-Ceci '21, a doctoral candidate in information science.
So we were surprised because neither of those interventions actually reduced the extent to
which people's attitudes shifted toward the AI's bias in this context.
Science Advances: Biased AI
writing assistants shift users' attitudes on societal issues
In two large-scale preregistered experiments (N = 2582), we exposed participants
writing about important societal issues to an AI writing assistant that provided biased
autocomplete suggestions. When using the AI assistant, the attitudes participants expressed
in a posttask survey converged toward the AI's position. However, a majority of participants
were unaware of the AI suggestions' bias and their influence. Further, the influence of the
AI writing assistant was stronger than the influence of similar suggestions presented as
static text, showing that the influence is not fully explained by these suggestions
increasing the accessibility of the biased information. Last, warning participants about the
assistant's bias before or after exposure does not mitigate the attitude-shift effect.
Via.
It is fascinating watching singers try to transpose, and shift mode instead.