Flutterby™! (short)
Monday February 16th, 2026
Semantic Ablation
Dan Lyke / comment 0
Claudio Nastruzzi in The Register: Why AI writing is so generic, boring, and dangerous: Semantic ablation
When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters (the precise points where unique insights and "blood" reside) and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity (its "ciccia", Italian for "flesh") has been ablated to favor a hollow, frictionless aesthetic.
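To make the "most probable token" mechanism concrete: under any language model, a word's information content is its surprisal, -log2 of its probability, so replacing a rare word with a likely one literally drains bits from the text. A toy sketch (mine, with made-up probabilities, not anything from Nastruzzi's piece):

    import math

    # Toy unigram probabilities -- made-up numbers, purely for illustration.
    p = {"the": 0.05, "thing": 0.004, "structure": 0.001, "ciccia": 1e-7}

    def surprisal_bits(word: str) -> float:
        # Information content under the model: -log2 p(word).
        return -math.log2(p[word])

    # The rare word carries far more information than a generic stand-in,
    # which is exactly what "polishing" ablates:
    print(surprisal_bits("ciccia"))  # ~23.3 bits
    print(surprisal_bits("thing"))   # ~8.0 bits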
Dan Lyke / comment 0
Abraham Lincoln's letter to Henry L. Pierce declining an invitation to speak in Boston at a birthday celebration honoring Thomas Jefferson (the letter may have been intended to be read at the event):
This is a world of compensations; and he who would be no
slave, must consent to have no slave. Those who deny freedom
to others, deserve it not for themselves; and, under a just God,
can not long retain it.
Via.
Sweden shifts away from digital learning
Dan Lyke / comment 0
Think Academy Education Briefs: Sweden Education Shift:
From Digital Learning to Pen and Paper.
I'm taken back to those conversations in the '90s where school board members and administrators were talking about "we need tech in the classroom!", and teachers, and sane people, were saying "what's the curriculum need?"
Especially as we've learned how students process pencil-and-paper note-taking differently from typed note-taking. And, heck, I'm still learning how I react differently to ebooks (on a multi-purpose device) vs paper books.
via Bruce Sterling
heavier tailed?
Dan Lyke / comment 0
bob
@bob@feed.hella.cheap
if the LLM-generated content is adding value then distributions of
users/viewers/readers should be getting heavier tailed. are they? so far I've only seen
people talking about number of books/apps/etc published which is unrelated to
value
I suspect there's not a lot of value for the LLM wielder who's trying to push their material out to the wider world. If you want LLM-generated content, you chat with the chatbot yourself and get a personalized experience. There's no real value in someone else's conversation with the chatbot.
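bob's question is checkable in principle, though. A minimal sketch of how one might, assuming you somehow had per-title readership counts for the before and after eras (the data below is synthetic and hill_tail_index is my own name): estimate the Pareto tail index with the Hill estimator, where a smaller index means a heavier tail.

    import numpy as np

    def hill_tail_index(counts, k=100):
        # Hill estimator of the Pareto tail index from the top-k values;
        # a smaller index means a heavier tail.
        x = np.sort(np.asarray(counts, dtype=float))[::-1]  # descending
        top, threshold = x[:k], x[k]
        gamma = np.mean(np.log(top) - np.log(threshold))    # mean log-excess
        return 1.0 / gamma

    # Synthetic stand-ins for per-title readership, before and after:
    rng = np.random.default_rng(0)
    before = rng.pareto(2.0, 10_000) + 1   # tail index ~2
    after = rng.pareto(1.2, 10_000) + 1    # heavier tail, ~1.2
    print(hill_tail_index(before), hill_tail_index(after))

If the LLM flood were adding value, the real-world "after" index should be visibly smaller; bob's point is that nobody seems to be publishing that comparison.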
Vibe coding vulnerabilities
Dan Lyke / comment 0
Kevin Beaumont
@GossiTheDog@cyberplace.social
Today in InfoSec Job Security News:
I was looking into an obvious ../.. vulnerability introduced into a major web
framework today, and it was committed by username Claude on GitHub. Vibe
coded, basically.
So I started looking through Claude commits on GitHub, there's over 2m of them and it's about 5% of all open source code this month.
https://github.com/search?q=au...ype=commits&s=author-date&o=desc
As I looked through the code I saw the same class of vulns being introduced
over, and over, again - several a minute.
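For anyone who hasn't met the ../.. class of bug: it's path traversal, where the server joins user input onto a base directory without checking that the result stays inside it, so "../../etc/passwd" walks right out of the web root. A minimal sketch of the bug and the standard fix (hypothetical paths and names, not the framework code Beaumont found):

    from pathlib import Path

    BASE = Path("/srv/app/public").resolve()

    def read_public_file_unsafe(user_path: str) -> bytes:
        # Vulnerable: user_path = "../../etc/passwd" escapes BASE.
        return (BASE / user_path).read_bytes()

    def read_public_file(user_path: str) -> bytes:
        # Resolve ".." and symlinks first, then confirm we're still
        # under BASE before touching the file. (Python 3.9+)
        target = (BASE / user_path).resolve()
        if not target.is_relative_to(BASE):
            raise PermissionError(f"traversal attempt: {user_path!r}")
        return target.read_bytes()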
Cutting sheet metal
Dan Lyke / comment 0
Interesting video about using a nibbler for cutting sheet metal, including building a
nibbler table (like a router table), and using templates to get accurate repeatable cuts
with the technique. No Laser Cutter?
No Plasma Cutter? No Problem! Accurately cut sheet metal with low cost tools!
By Rebecca Valentine.
I have lost a couple of the disks for the bottoms of some tart tins, and have been trying
to figure out how to cut replacements. Need to get a nibbler and find some stainless steel
sheet.
Sunday February 15th, 2026
Ars publishes slop
Dan Lyke / comment 0
So Ars Technica wrote a thing on Scott Shambaugh's An AI Agent Published a Hit Piece on Me (linked earlier), except that they used an LLM and it synthesized quotes that didn't actually get said or written.
@mttaggart@infosec.exchange
has a thread on this with receipts and archive links.
From this
thread it appears that the slop publication was inadvertent from the editor's
perspective.
Edit: Ars Technica: Editor's Note: Retraction of article containing fabricated quotations
That this happened at Ars is especially distressing. We have covered the risks
of overreliance on AI tools for years, and our written policy reflects those concerns. In
this case, fabricated quotations were published in a manner inconsistent with that policy.
We have reviewed recent work and have not identified additional issues. At this time, this
appears to be an isolated incident.
And a mea culpa from
the author, summarized by Michael Taggart:
First, this happened while Edwards was sick with COVID. Second, he claims this was a new experiment using Claude Code to extract source material. Claude refused to process the
blog post (because Shambaugh mentions harassment). Edwards then took the blog post text
and pasted it into ChatGPT, which evidently is the source of the fictitious quotes.
Edwards takes full responsibility and apologizes, recognizing the irony of an AI reporter
falling prey to this kind of mistake.
On a Claude-y day
Dan Lyke / comment 0
datarama
@datarama@hachyderm.io
2010s: There is no cloud, it's just someone else's computer.
2020s: There is no Claude, it's just someone else's code.
datarama
@datarama@hachyderm.io
2010s: Old Man Yells At Cloud
2020s: Old Man Yells At Claude
Cognitive Debt
Dan Lyke / comment 0
A programmer's loss of identity. I guess I'm lucky in that my association between me and the Internet's notion of "programmer" kinda diverged when /. got funding, but this is an interesting meditation on how the general adoption of slop prompting as "programming" is changing the identity of those of us who think that reasoning about systems is important.
Via Baldur Bjarnason
@baldur@toot.cafe
Meanwhile, Chris Dickinson
@isntitvacant@hachyderm.io linked to Peter Naur, Programming as Theory Building
(PDF) (You may remember Naur as the "N" in BNF notation) in response to Simon Willison's
acknowledgement that LLMs separate him from the model building:
I no longer have a firm mental model of what they can do and how they work,
which means each additional feature becomes harder to reason about, eventually leading me to
lose the ability to make confident decisions about where to go next.
Simon wrote that in linking to Margaret Storey's How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt (which also links to the Naur piece).
In response to Simon's note,
Jed Brown @jedbrown@hachyderm.io wrote:
I believe the effect you describe becomes more insidious in larger projects,
with distributed developer communities and bespoke domain knowledge. Such conditions are
typical in research software/infrastructure (my domain), and the cost of recovering from
such debt will often be intractable under public funding models (very lean; deliverables
only for basic research, not maintenance and onboarding). Offloading to LLMs interferes not
just with the cognitive processes of the "author", but also that of maintainers and other
community members.
Unlike reports from ChatGPT
Dan Lyke / comment 0
Unlike reports from ChatGPT, Google's "AI" seems smart enough to know that I'd have to
drive my car to the car wash. Unless, of course, I was going to use a self-service bay.
Inspired by this
thread.
First of all what did Oregon do to the
Dan Lyke / comment 0
First of all, what did Oregon do to the person who named the "Oregon Grape" after it? Second, I now have Opinions about the landscape designer who recommended it.
It finally sprawled enough that Charlene said she wanted it out, and I suspect I'll be following runners all summer...
Flutterby™! is a trademark claimed by Dan Lyke for the web publications at www.flutterby.com and www.flutterby.net.
Last modified: Thu Mar 15 12:48:17 PST 2001