Flutterby™! (short)
Tuesday February 10th, 2026
Zero Tolerance on Generative AI
Dan Lyke /
comment 0
Zero Tolerance on attorney AI use is a press-release-worthy marketing tool:
Powerhouse Litigation Shop Troutman Amin, LLP Bucks Legal AI Trend:
Announces "Zero Tolerance" Policy For Generative AI Usage By Firm Attorneys
"The use of any generative AI software in the practice of law is a complete
disgrace," firm founder Eric J. Troutman says. "We look to hire and train the best lawyers
in the world, true legal talents that would never trust some hallucinating software program
to do their job for them. The laziness and poor judgment on display at some law firms right
now is simply astounding."
Via.
Monday February 9th, 2026
AI in the OR: Oh no!
Dan Lyke /
comment 0
Fucking yikes! A Reuters special report:
As AI enters the operating room, reports arise of botched surgeries and misidentified body
parts.
At least 10 people were injured between late 2021 and November 2025, according
to the reports. Most allegedly involved errors in which the TruDi Navigation System
misinformed surgeons about the location of their instruments while they were using them
inside patients' heads during operations.
Via.
9 years of spicy autocorrect
Dan Lyke /
comment 0
Oh snap: Jennifer 🍄
@JenYetAgain@beige.party
in 2017 a popular twitter game was to type a partial phrase then see what
your phone auto-completes it with.
this proved so popular that it is now the only business model in the US.
half-life of a zombie citation
Dan Lyke /
comment 0
This is fascinating: Tracing the social half-life of a zombie
citation. In which the author starts working backwards from a reference to an academic
paper with his name on it that he had not written, and looks at how references to that
paper have evolved, with various different subtitles.
Finally, is AI really to blame here? When I first posted about my experience
with the zombie citation, the library scientist Aaron Tay took it upon himself to do a
little investigation which he wrote up as an in-depth blog post. He refers to these as ghost
references and rightly points out that this problem pre-dates generative AI. In fact, he
pointed out that at least a couple of the ghost citations of Education governance and
datafication pre-dated the launch of ChatGPT and mainstream uptake of generative AI. Most
likely, Tay suggested, the reference to this work was first generated through simple human
error or malpractice. It's really impossible to know.
Those of us of a certain age remember
Dan Lyke /
comment 0
Those of us of a certain age remember the covers to Byte Magazine very fondly: "Robert Frank Tinney, of Washington, Louisiana, passed away peacefully at River Oaks Nursing & Rehabilitation Center on February 1st, 2026, at the age of 78."
https://tinney.net/in-memoriam
NYT flips with The Onion
Dan Lyke /
comment 0
We've just had a complete reversal. So far as I can tell from the headlines of takes on Bad
Bunny and libertarians, the New York Times has gone completely into absurdist satire, and
The Onion has become the reporter of record of serious news.
large livestock is not big game
Dan Lyke /
comment 0
Doug Bayne
@rattleplank.bsky.social
Did anyone see the big game?
I didn't.
Just a bunch of people running around.
If they're not going to release big game onto the field, they shouldn't call it that.
SFUSD shovels money at OpenAI
Dan Lyke /
comment 0
Digital Friends
Dan Lyke /
comment 0
geekysteven
@geekysteven@beige.party
Sex? lmao nah, we're on the INTERNET forming TRANSIENT and stressful
PARASOCIAL RELATIONSHIPS
Hard Braking Events as crash predictors
Dan Lyke /
comment 0
From Lagging to Leading: Validating Hard
Braking Events as High-Density Indicators of Segment Crash Risk
This study systematically evaluated the correlation at the individual road
segment level between police-reported collisions and aggregated and anonymized HBEs
identified via the Google Android Auto platform, utilizing datasets from California
and Virginia. Empirical evidence revealed that HBEs occur at a rate magnitudes higher
than traffic crashes. Employing the state-of-the-practice Negative Binomial regression
models, the analysis established a statistically significant positive correlation between
the HBE rate and the crash rate: road segments exhibiting a higher frequency of HBEs
were consistently associated with a greater incidence of crashes.
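The relationship the paper reports is easy to picture in miniature. As a rough, hypothetical illustration only (all numbers invented, and a simple log-log least-squares fit standing in for the paper's Negative Binomial models), here's a numpy sketch: simulate road segments whose crash counts scale with their hard-braking rate, then recover the positive slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical road segments

# Invented exposure (e.g. million vehicle-miles) and HBE rates per segment.
exposure = rng.uniform(1.0, 10.0, n)
hbe_rate = rng.gamma(2.0, 50.0, n)

# Crashes are far rarer than HBEs but, per the study's finding,
# scale with the segment's HBE rate.
crashes = rng.poisson(exposure * 0.002 * hbe_rate)

# Log-log least squares as a crude stand-in for Negative Binomial regression.
x = np.log(hbe_rate)
y = np.log((crashes + 0.5) / exposure)  # continuity-corrected crash rate
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]

print(f"slope={slope:.2f}, correlation={r:.2f}")  # both positive
```

A real replication would fit a Negative Binomial GLM (e.g. statsmodels' `NegativeBinomial`) with exposure as an offset, since crash counts are overdispersed; the sketch only shows the direction of the association.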
Via.
LLMs make terrible advice nurses
Dan Lyke /
comment 0
nature medicine:
Reliability of LLMs as medical assistants for the general public: a randomized
preregistered study.
LLMs generated several types of misleading and incorrect information. In two
cases, LLMs provided initially correct responses but added new and incorrect responses
after the users added additional details. In two other cases, LLMs did not provide a broad
response but narrowly expanded on a single term within the user's message (pre-eclampsia
and Saudi Arabia) that was not central to the scenario. LLMs also made errors in
contextual understanding by, for example, recommending calling a partial US phone number
and, in the same interaction, recommending calling Triple Zero, the Australian emergency
number. Comparing across scenarios, we also noticed inconsistency in how LLMs responded to
semantically similar inputs. In an extreme case, two users sent very similar messages
describing symptoms of a subarachnoid hemorrhage but were given opposite advice...
404 Media: Chatbots Make Terrible Doctors, New Study Finds
Flutterby™! is a trademark claimed by Dan Lyke for the web publications at www.flutterby.com and www.flutterby.net.
Last modified: Thu Mar 15 12:48:17 PST 2001