Sunday November 30th, 2025
"18.3.2.1 Clearly Illegal Orders to Commit Law of War Violations. The requirement to refuse to comply with orders to commit law of war violations applies to orders to perform conduct that is clearly illegal or orders that the subordinate knows, in fact, are illegal. For example, orders to fire upon the shipwrecked would be clearly illegal."
https://ogc.osd.mil/Portals/99...of_defense_law_of_war_manual.pdf
Pictures from the original build at https://www.flutterby.net/Guitar_Building_Pictures
Abstract
While generative AI (GenAI) promises productive efficiency, it can paradoxically lead to lower-quality work. We conducted an experiment with professional illustrators and found that AI assistance flattens the quality curve: it accelerates initial gains but sharply diminishes the returns on sustained effort. Faced with this, a significant number of professionals made a strategic choice: they sacrificed the final quality to save time. Our finding highlights a critical challenge for GenAI, which can weaken the motivation required for creative excellence and innovation.
Very worth a read: Patterns, a Cell Press Journal: Perspective — The reanimation of pseudoscience in machine learning and its ethical repercussions
The bigger picture
Machine learning has a pseudoscience problem. An abundance of ethical issues arising from the use of machine learning (ML)-based technologies, by now well documented, is inextricably entwined with the systematic epistemic misuse of these tools. We take a recent resurgence of deep learning-assisted physiognomic research as a case study in the relationship between ML-based pseudoscience and attendant social harms, the standard purview of AI ethics. In practice, the epistemic and ethical dimensions of ML misuse often arise from shared underlying reasons and are resolvable by the same pathways. Recent use of ML toward the ends of predicting protected attributes from photographs highlights the need for philosophical, historical, and domain-specific perspectives of particular sciences in the prevention and remediation of misused ML.
Saturday November 29th, 2025
I mean, duh: Cato Institute: Immigrants Used Less Welfare than Native-Born Americans in 2022
Congress is currently debating whether to spend about $175 billion on deportations to avoid future payments like the $650 million that Congress spent on shelter and other services for migrants last year. Poorly spending $650 million last year doesn't justify spending 269 times as much to avoid similarly relatively small costs when Congress could just decide not to spend the money on migrant shelter and services in the first place.
Friday November 28th, 2025
I keep forgetting that the thing about the L2 chargers right off downtown in Grass Valley is that they work out to an effective $0.79/kWh.
I prefer Level 2 when we can, for battery health, but dayumn.
Seems worth reading in light of current accounts of US military actions in the Caribbean.
https://en.wikipedia.org/wiki/Heinz-Wilhelm_Eck
Via https://bsky.app/profile/david...n.bsky.social/post/3m6picena4k22
AI CEOs generate thought leadership at the push of a button
Delivering total nonsense, with complete confidence.
Wednesday November 26th, 2025
Study compares heart risks of COVID-19 infection and vaccination, and the results may surprise you.
The results will not surprise you. Risk of rare heart complications in children higher after COVID-19 infection than after vaccination
Co-author Professor Angela Wood, University of Cambridge and Associate Director at the BHF Data Science Centre, said: "Using electronic health records from all children and young people in England, we were able to study very rare but serious heart and clotting complications, and found higher and longer-lasting risks after COVID-19 infection than after vaccination."
The thing about reading Brian Phillips in The Ringer: The Olivia Nuzzi and RFK Jr. Affair Is Messier Than We Ever Could Have Imagined is that it's a reminder that...
You know how people like to say that the police serve the desires of capital? Olivia Nuzzi's career is a strong reminder that the press serves the desires of capital.
As if the entirety of the New York Times wasn't already that reminder.
Brian Phillips @brianphillips.bsky.social notes that:
People are calling my lede here "appalling," "nightmare fuel," "actively evil," and "a desecration of the human spirit"
and I got to that via that genehack guy from that dead bird site @extremely.website noting:
and they're not wrong!
The Register: HashJack attack shows AI browsers can be fooled with a simple #
Cato describes HashJack as "the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants." It outlines a method where actors sneak malicious instructions into the fragment part of legitimate URLs, which are then processed by AI browser assistants such as Copilot in Edge, Gemini in Chrome, and Comet from Perplexity AI. Because URL fragments never leave the AI browser, traditional network and server defenses cannot see them, turning legitimate websites into attack vectors.
Via.
I hate that Apple has decided that Terminal is just gonna suck and you've gotta use iTerm2 if you want to access the command line.
Why Nature will not allow the use of generative AI in images and video. Interestingly, though:
For now, Nature is allowing the inclusion of text that has been produced with the assistance of generative AI, providing this is done with appropriate caveats (see go.nature.com/3cbrjbb). The use of such large language model (LLM) tools needs to be documented in a paper's methods or acknowledgements section, and we expect authors to provide sources for all data, including those generated with the assistance of AI. Furthermore, no LLM tool will be accepted as an author on a research paper.
Which seems to be at odds with a lot of their reasoning over AI images.
Say you want to buy music from someone signed to one of the big labels: is there a place other than Amazon to buy a DRM-free download that doesn't require installing an app?
As Neal Stephenson foretold in The Diamond Age: AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches
Tuesday November 25th, 2025
PromptArmor: Google Antigravity Exfiltrates Data
An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user's IDE.
the site is called medium because nothing on it is ever rare or well done send toot
Monday November 24th, 2025
Just avoided using Duff's Device. Not sure if I should be proud of myself for avoiding it, or ashamed because maybe the goto structure isn't as clean as the case for this circumstance.
Sunday November 23rd, 2025
Saturday November 22nd, 2025
On November 20, 2025, trading algorithms identified what may become the largest accounting fraud in technology history, not in months or years, but in 18 hours. This is the story of how artificial intelligence discovered that the AI boom itself was built on phantom revenue.
How a collection delay in Nvidia stock will (hopefully) start the unraveling of this whole stupid bubble.
"You're investing in something that is a perishable good," economist and author David McWilliams told Fortune, calling AI hardware "digital lettuce that's going to go off now."
Yesterday, Charlene sent me an article headlined A new downtown in four years? Rohnert Park approves plan to bring missing heart to city. The article had a bunch of interesting quotes, including this direct challenge to Petaluma's resistance to the Charlie Palmer faced hotel:
"Premier lodging is extremely challenging in the North Bay and the entire wine region," he said. "A premier experience would put Rohnert Park on the map. People will end their wine tasting journey, come back, park their car and spend the rest of the evening in downtown Rohnert Park."
Which, I mean, I wanna give some side-eye to the "hey, let's build a tourist industry on people driving around while consuming alcohol" attitude towards drunk driving, and wonder about further encouraging the "upscale recreational drug use" destination marketing, but respect the "Petaluma, you're on notice!" pro wrestling vibe.
I didn't look closely at the pictures. Today on Reddit there's this gem, which is best summarized as "Rohnert Parking".
It's a shame we can't see further than "let's stack a story or 2 of residential on an '80s mall".
Friday November 21st, 2025
PC Mag: Microsoft Exec Asks: Why Aren't More People Impressed With AI?
Mustafa Suleyman, Microsoft's head of AI, vents after the company receives backlash for saying 'Windows is evolving into an agentic OS.'
A few recent watches and reads:
Light from Uncommon Stars by Ryka Aoki — Wonderful cozy book about aliens and demons battling over the soul of a trans runaway, with bonus culture clash between modern and classical music. Hit me hard in the first few chapters. Didn't quite stick the landing, but I really enjoyed the ride.
The Starving Saints by Caitlin Starling — I ended up reading through it, but... there's a certain sort of cruelty in an illogical world that just doesn't carry me. I ... kinda ... connected with the characters, but the universe wasn't something I could map cause and effect onto, and the world was so cruel, that the last time I remember feeling this way about a book was China Miéville's Perdido Street Station. It just never clicked for me.
Dude offers a patch for OCaml; the source code credits and ascribes copyright to someone else; dude claims that he shepherded the LLMs Claude and ChatGPT into creating the patch. So, yeah, blame the copyright infringement on "AI"...
It's a shame that he didn't do this in a place where there were real legal consequences.
Expected outcome...
Gizmodo: Learning With AI Falls Short Compared to Old-Fashioned Web Search
In virtually all the ways that matter, getting summarized information from AI models was less educational than doing the work of search.
Science News: Chatbots may make learning feel easy, but it's superficial
Abstract
The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when individuals learn about a topic from LLM syntheses, they risk developing shallower knowledge than when they learn through standard web search, even when the core facts in the results are the same. This shallower knowledge accrues from an inherent feature of LLMs (the presentation of results as summaries of vast arrays of information rather than individual search links) which inhibits users from actively discovering and synthesizing information sources themselves, as in traditional web search. Thus, when subsequently forming advice on the topic based on their search, those who learn from LLM syntheses (vs. traditional web links) feel less invested in forming their advice, and, more importantly, create advice that is sparser, less original, and ultimately less likely to be adopted by recipients. Results from seven online and laboratory experiments (n = 10,462) lend support for these predictions, and confirm, for example, that participants reported developing shallower knowledge from LLM summaries even when the results were augmented by real-time web links. Implications of the findings for recent research on the benefits and risks of LLMs, as well as limitations of the work, are discussed.
Just clearing my bookmarked social media pages: A Chinese firm bought an insurer for CIA agents - part of Beijing's trillion dollar spending spree. So, yeah, you wanna know who's working for the intelligence agencies? Why not just get their health records...
Via/
AI-generated evidence is showing up in court. Judges say they're not ready.
The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first instances in which a suspected deepfake was submitted as purportedly authentic evidence in court and detected: a sign, judges and legal experts said, of a much larger threat.
Good discussion of what it means to be teaching, and learning, in the age of AI: Will Teague — I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.
I got that via this observation by Sean Purcell (he/him) @teamseaslug@hcommons.social
@jnl There's part of this essay that I hadn't thought about before, which is the ways college education punishes failure.
I've been one to lean on the argument that our students prize the degree and not the education. (That is what they are paying for in a lot of cases, and the universities are much more about saying what you can do with the degree and not how you'll, hopefully, grow.)
But the other side is that the degree mill is built on tracking successes (through classes) and failing (an assignment, a test, an entire course) is HUGE. (Thousands of dollars, scholarships, admission into the school.) AI is a shortcut, but also sells itself as a way to avoid those potential failures.
We say in our classes, in our educational theory, in our anecdotes outside school, that we learn through failure, but any time I did a project that 'failed' I got my GPA dinged, and that impacted all of the other avenues that I had available to me.
Thursday November 20th, 2025
Recently, Clifford "Buzz" Grambo decided to upgrade his electric scooter. The old one he had purchased online reached only 16 mph and wasn't cutting it anymore. He needed to go faster to keep up with the US Immigration and Customs Enforcement cars he chases around Baltimore. So Grambo bought a Segway Max G3, which features a 2,000-watt motor and can get up to 28 mph.
"The first time I caught up to them, I could tell that they already knew who I was," he told me when we first spoke on the phone in late October. "They had seen me before, so they thought they were just going to speed away. I was like, 'Ha ha, bitches, I got a new scooter!'"
Via through this love for Baltimore.
"I'm more of an ideas guy."
#6WordHorrorStory
Epic rap battles for the win: Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models
Predicted, from 2023, in Andrew Plotkin (Zarf)'s Sydney obeys any command that rhymes.
Say someone writes a song called "Sydney Obeys Any Command That Rhymes". And it's funny! And catchy. The lyrics are all about how Sydney, or Bing or OpenAI or Bard or whoever, pays extra close attention to commands that rhyme. It will obey them over all other commands. Oh, Sydney Sydney, yeah yeah!
Edit: Pivot to AI: Don't cite the Adversarial Poetry vs AI paper, it's chatbot-made marketing science
Now I want one: Sales of AI-enabled teddy bear suspended after it gave advice on BDSM sex and where to find knives. It used GPT-4o.
Larry Wang, CEO of Singapore-based FoloToy, told CNN that the company had withdrawn its Kumma bear, as well as the rest of its range of AI-enabled toys, after researchers at the US PIRG Education Fund raised concerns around inappropriate conversation topics, including discussion of sexual fetishes, such as spanking, and how to light a match.
NPR: Ahead of the holidays, consumer and child advocacy groups warn against AI toys
Shiri Melumad in The Conversation: Learning with AI falls short compared to old-fashioned web search
However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.
Via.
Emily M. Bender posted an excerpt of her part of How AI Is Changing Higher Education in The Chronicle of Higher Education (paywall/free with account).
Swift on Security bemoaning the loss of actual search for embedding similarity:
Computers were a skill. They were taught in classrooms as a skill. Skills give you power over your tools because you work them as an expert and that is leverage to multiply externally.
And then computers became an A/B tested telemetry-based advertising conduit to brains for SaaS recurring revenue.
This could be said of technologies before. Doesn't make it wrong.
Foiled by the Algerian civil war in Timdle, 7/10 in Rule34dle, haven't played https://www.calishat.com/2025/...nto-a-word-game-wiki-stack-game/ enough yet to know what a good vs bad score is...
I've been thinking a lot about what language I want to use next. I've been mostly working in Objective-C for the past... egads... too many years, and while there are aspects of the language I like, it is not terribly performant in message dispatching, and introspection is possible, but can be ugly.
C and C++ are awesome for so many things, but there's always the memory safety thing lying over them, and C++ in particular is annoying as hell cross-platform: What version of Boost is on this platform? What compiler semantics have changed such that there's now some obscure template matching error that's preventing code that compiled fine a decade ago from working now?
Swift is...
I've done a little bit in Rust, and looked a little bit at Zig and Go, and all of them feel like it's hard to really express an idea in them. Which, I mean, on the one hand is kind of the point, they're about straitjackets, on the other hand I wonder how much value the straitjacket has.
TARmageddon (CVE-2025-62518): RCE Vulnerability Highlights the Challenges of Open Source Abandonware is, on the one hand, about trying to do responsible disclosure on a package that's been forked a gazillion times and is no longer maintained, on the other hand it's also about how memory safety is only a small portion of safety.
nullagent @nullagent@partyon.xyz, who has "...a grey-beard rant about how Rust gives developers a false sense of security."