Weird, considering IA has most of its content available in a way that would let you rehost it all. I don't know why nobody's just hosting an IA carbon copy that AI companies can hit endlessly, and cutting IA a nice little check in the process, but I guess some of the wealthiest AI startups are very frugal about training data?

This also goes back to something I said long ago: AI companies are relearning software engineering, poorly. I can think of so many ways to speed up AI crawlers; I'm surprised someone being paid 5x my salary cannot.
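To make that concrete, here's one of those ways, sketched in Python (the URL and in-memory cache are my own illustration, not from the comment): conditional requests, so a page that hasn't changed costs almost nothing to recrawl.

    # Conditional GETs: recrawling an unchanged page returns a cheap 304
    # instead of the full body. Illustrative only; a real crawler would
    # persist the cache and also honor Last-Modified.
    import requests

    cache = {}  # url -> (etag, body)

    def fetch(url: str) -> str:
        headers = {}
        if url in cache:
            headers["If-None-Match"] = cache[url][0]
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code == 304:   # server says nothing changed
            return cache[url][1]
        etag = resp.headers.get("ETag")
        if etag:                      # remember the validator for next time
            cache[url] = (etag, resp.text)
        return resp.text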

Unless regulated, there is no incentive for the giants to fund anything.

There is no problem that cannot be solved with creating a bureaucracy and paperwork!

I understand this is tongue-in-cheek, but do you have an alternative/better proposal?

Let the market do its thing. If good data is so critical to the success of AI, AI companies will pay for it. I don't know how someone can still entertain the idea that a bureaucrat, or worse, a politician, is remotely competent at designing an efficient economy.

All the world's data was critical to the success of AI. They stole it and fought the system to pay nothing, then settled for peanuts because the original creators were too weak to negotiate. It already happened.

No, they won't pay for it unless they believe it's in their best interest. If they believe they can free-ride and get good data without paying for it, why would they lay down a dollar?

Because the companies in control of that data won't let them have it for free, like what is happening in the article.

Or, they'll just create more technically sophisticated workarounds to get what they want while avoiding a bad precedent that might cost them more money in the long run. Millions for defense, not one cent for tribute.

Now apply the same logic to laws, except that laws are a lot slower to change when they find the next workaround.

And it's a lot harder to get the law to stop doing something once it proves to cause significant collateral damage, or just cumulative incremental collateral damage while having negligible effectiveness.

That already exists: it's called Common Crawl[1], and it's a huge reason why none of this happened prior to LLMs coming on the scene, back when people were crawling data for specialized search engines or academic research purposes.

The problem is that AI companies have decided they want instant access to all data on Earth the moment it becomes available somewhere, and they have the infrastructure behind them to actually try to make that happen. So they ignore signals like robots.txt and don't even check whether the data is actually useful to them, checks that even the most aggressive search engine crawlers performed. (They're not getting anything helpful out of recrawling the same search results pagination in every possible permutation, but that won't stop them from trying, and from knocking everyone's web servers offline in the process.) They just bombard every publicly reachable server with requests on the off chance that some new data fragment becomes available and they can ingest it first.
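For contrast, here's roughly what the older, better-behaved crawlers did before fetching anything. A minimal sketch using Python's standard library; the domain and bot name are placeholders:

    # Check robots.txt before crawling. "ExampleBot" and example.org are
    # made up for illustration, not real crawler or site names.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.org/robots.txt")
    rp.read()

    url = "https://example.org/search?page=2&sort=asc"
    if rp.can_fetch("ExampleBot", url):
        print("allowed to fetch", url)
    else:
        print("robots.txt disallows", url, "- skip it")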

This is also, coincidentally, why Anubis is working so well. Anubis kind of sucks, and in a sane world where these companies had real engineers working on the problem, they could bypass it on every website in just a few hours by precomputing tokens.[2] But they don't bother, and Anubis is actually working quite well at protecting the sites it's deployed on despite its relative simplicity.
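For the curious, the core of that precomputation idea looks something like the sketch below. It assumes a SHA-256 leading-zeros proof-of-work along the lines of what the linked post describes; the challenge string and difficulty are made up:

    # Solve a leading-zeros proof-of-work challenge outside the browser.
    # Once this can be done cheaply and ahead of time, the interstitial
    # stops being a meaningful barrier. Parameters are illustrative.
    import hashlib

    def solve(challenge: str, difficulty: int) -> int:
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    print(solve("example-challenge", 4))  # nonce whose hash starts with four zeros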

It really does seem to indicate that LLM companies want to throw endless hardware at literally any problem they encounter and brute force their way past it. They aren't dedicating real engineering resources to any of this stuff, because if they were, they'd be coming up with way better solutions. (Another classic example is Claude Code apparently using React to render a terminal interface. That's like using the space shuttle for a grocery run: utterly unnecessary, and completely solvable.) That's also why DeepSeek was treated like an existential threat when it first dropped: they actually put some engineers on these problems and made serious headway with very little capital expenditure compared to the big firms. Of course the incumbents started freaking out; their whole business model is based on the idea that burning comical amounts of money on hardware is the only way to actually make this stuff work!

The whole business model backing LLMs right now seems to be "if we burn insane amounts of money now, we can replace all labor everywhere with robots in like a decade", but if either of those assumptions turns out to be false (the tech can be improved without burning hundreds of billions of dollars, or it ends up unable to replace the vast majority of workers), all of this is going to fall apart.

Their approach to crawling is just a microcosm of the whole industry right now.

[1]: https://en.wikipedia.org/wiki/Common_Crawl

[2]: https://fxgn.dev/blog/anubis/ and related HN discussion https://news.ycombinator.com/item?id=45787775

Thanks for the mention of Common Crawl. We do respect robots.txt and we publish an opt-out list, due to the large number of publishers asking to opt out recently.

There's a bit of discussion of Common Crawl in Jeff Jarvis's testimony before Congress: https://www.youtube.com/watch?v=tX26ijBQs2k

So perhaps the AI companies will go bankrupt and then this madness will stop. But it would be nice if no government intervened on the grounds that they are "too big to fail".

Are you sure it's the AI companies being that incompetent, and not wannabe AI companies?

What I feel is a lot more likely is that OpenAI et al are running a pretty tight ship, whereas all the other "we will scrape the entire internet and then sell it to AI companies for a profit" businesses are not.

They run a tight AI ship, but it is in their interest to destroy the web so that people can only get to data through their language model.

OpenAI cannot possibly be running a tight ship, even if they have competent scientists and engineers.