Crawler Abuse
2024-07-26 17:43:56.179457+02 by Dan Lyke 2 comments
I occasionally think about finding ways to self-host video. It's not like my videos get a lot of views, but I'd rather deliver the content myself than let YouTube monetize it, and surely just putting it on an S3 host, or serving it via some sort of proxy from home, wouldn't be that onerous. But I've also hosted things from home before, including, ages ago, a friend's relatively low-volume forum that someone decided to spider with no rate limiting, DDoSing everything.
When that shit happens on Flutterby, I do a little ipfw deny ... and everything's fine (and I have some of that automated), but the fuckwits always find some new way through, and I'm getting tired.
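The automated bits aren't rocket science. Here's a rough Python sketch of the sort of thing I mean (the log path, threshold, and window are made-up placeholders, and a real version wants an allowlist and rule expiry): tail the access log, count recent hits per IP, and hand the greedy ones to ipfw deny.

```python
#!/usr/bin/env python3
"""Sketch: block IPs that hammer the server, via FreeBSD's ipfw.

Assumes a common-log-format access log with the client IP as the
first field, and permission to run `ipfw add deny`.
"""
import subprocess
import time
from collections import defaultdict, deque

LOG = "/var/log/httpd-access.log"  # hypothetical path
WINDOW = 60       # seconds of history to consider
THRESHOLD = 300   # requests per WINDOW before blocking

hits = defaultdict(deque)  # ip -> timestamps of recent requests
blocked = set()

def follow(path):
    """Yield lines appended to path, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # start at end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

for line in follow(LOG):
    ip = line.split(" ", 1)[0]
    now = time.time()
    q = hits[ip]
    q.append(now)
    # drop timestamps older than the window
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) > THRESHOLD and ip not in blocked:
        # ipfw: drop all traffic from this address
        subprocess.run(
            ["ipfw", "add", "deny", "ip", "from", ip, "to", "any"],
            check=False,
        )
        blocked.add(ip)
```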
And, of course, I see stuff like this: Read The Docs: AI crawlers need to be more respectful:
One crawler downloaded 73 TB of zipped HTML files in May 2024, with almost 10 TB in a single day.
... with no bandwidth limiting or support for ETags or Last-Modified.
And Anthropic AI Scraper Hits iFixit’s Website a Million Times in a Day.
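The maddening part is how cheap being respectful is. Conditional requests are maybe a dozen lines on the crawler side; a sketch using Python's requests library (the URL is a placeholder): remember the ETag and Last-Modified from the last fetch, send them back, and a well-behaved server answers 304 with zero body bytes.

```python
import requests

def polite_fetch(url, cache):
    """Fetch url, honoring ETag/Last-Modified from a prior visit.

    `cache` maps url -> (etag, last_modified, body). On a 304 the
    server sends no body and we reuse what we already have.
    """
    headers = {}
    etag, last_mod, body = cache.get(url, (None, None, None))
    if etag:
        headers["If-None-Match"] = etag
    if last_mod:
        headers["If-Modified-Since"] = last_mod

    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304:
        return body  # unchanged since last crawl; nothing transferred
    resp.raise_for_status()
    cache[url] = (resp.headers.get("ETag"),
                  resp.headers.get("Last-Modified"),
                  resp.text)
    return resp.text

cache = {}
page = polite_fetch("https://example.com/docs/index.html", cache)
page = polite_fetch("https://example.com/docs/index.html", cache)  # likely 304
```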
I think one of the huge problems we have is that either the crawler companies aren't hiring the best and the brightest (likely, because they're hiring the ones sucked in by promises of "AI"), or there's no incentive not to fuck over the world in the mad dash.
Anyway, if I can find a way that I trust, I could see maybe doing some sort of actual user detection that hands out a short-lived pre-signed S3 URL to serve from... But there's been a lot of discussion recently about the challenges of self-hosting blogs, and now Fediverse sites, and this is just more in the "why we can't have nice things" category.
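For the record, the pre-signed URL half of that is the easy part. A sketch with boto3 (the bucket and key names are stand-ins, and the "actual user detection" gate that would run before this isn't shown): generate a URL that stops working after a few minutes, so a scraper can't hoard and replay it.

```python
import boto3

s3 = boto3.client("s3")

def video_url(key, expires_seconds=300):
    """Return a pre-signed GET URL that expires after expires_seconds."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-video-bucket", "Key": key},  # placeholder bucket
        ExpiresIn=expires_seconds,
    )

print(video_url("videos/some-clip.mp4"))  # placeholder key
```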