The team that runs the Common Crawl Foundation is well aware of how to crawl and index the web in real time. It's expensive, and it's not our mission. There are multiple companies that are using our crawl data and our web graph metadata to build up-to-date indexes of the web.

Yes, I've used your data myself on a number of occasions.

But you are pretty much the only people who can save the web from AI bots right now.

The sites I administer are drowning in bots, and the applications I build that need web data are constantly blocked. We're in the worst of all possible worlds, and the simplest way out is a middleman that scrapes gently and has the bandwidth to provide an AI-first API.

I'm all for that.

Your terms and conditions include a lot of restrictions, some of which are ambiguous in how they can be interpreted.

Would Common Crawl offer a "for all purposes, no restrictions" license for uses such as AI training, computer analysis, etc.? Especially given that bad actors ignore copyrights and terms, while such restrictions only affect moral, law-abiding people?

Also, even simpler: would Common Crawl release, under a permissive license, a list of URLs that others could scrape themselves? Maybe with per-URL metadata from your crawls, such as which sites use Cloudflare or other rate limiters. Being able to rescrape the CC index independently would be very helpful under some legal theories about AI training. Independent search operators benefit, too.

Common Crawl doesn't own the content in its crawl, so no, our terms of use do not grant anyone permission to ignore the actual content owner's license.

We carefully preserve robots directives, whether they appear in robots.txt, in HTTP headers, or in HTML meta tags.
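As an illustration (this is a sketch, not Common Crawl's own tooling), the meta-tag form of those directives can be checked with a few lines of stdlib Python; a real consumer would also check the `X-Robots-Tag` HTTP header and robots.txt itself:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            # content is a comma-separated directive list, e.g. "noindex, nofollow"
            for token in attrs.get("content", "").split(","):
                if token.strip():
                    self.directives.add(token.strip().lower())


def robots_directives(html: str) -> set[str]:
    """Return the set of robots meta directives found in an HTML page."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives


page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(sorted(robots_directives(page)))  # ['nofollow', 'noindex']
```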

We do publish two different URL indexes, if you wanted to recrawl for some reason.
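For anyone unfamiliar: one of those is the CDX index, queryable over HTTP at index.commoncrawl.org, which returns one JSON object per capture. A minimal sketch of filtering that output — the sample records below are invented, but the field names (url, status, filename, offset, length) match what the CDX API returns:

```python
import json

# Two fake CDX-style records: one successful fetch, one 404.
SAMPLE_CDX_LINES = """\
{"urlkey": "com,example)/", "timestamp": "20240722000000", "url": "https://example.com/", "status": "200", "mime": "text/html", "filename": "crawl-data/.../warc/....warc.gz", "offset": "1234", "length": "5678"}
{"urlkey": "com,example)/about", "timestamp": "20240722000100", "url": "https://example.com/about", "status": "404", "mime": "text/html", "filename": "crawl-data/.../warc/....warc.gz", "offset": "9999", "length": "1111"}
"""


def successful_urls(cdx_lines: str) -> list[str]:
    """Return the URLs of captures whose HTTP status was 200."""
    urls = []
    for line in cdx_lines.splitlines():
        record = json.loads(line)
        if record.get("status") == "200":
            urls.append(record["url"])
    return urls


print(successful_urls(SAMPLE_CDX_LINES))  # ['https://example.com/']
```

The filename/offset/length fields in each record locate the raw WARC bytes, which is what makes an independent rescrape (or a targeted fetch) practical.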

I was talking about CC's Terms of Use, which say they apply to "Crawled Content." All our uses must comply with both the copyright owners' rules and CC's Terms. The CC Terms are here for those curious:

https://commoncrawl.org/terms-of-use

In it, (a), (d), and (g) have had overly political interpretations in many places. (h) is like Reddit, where just offering the Gospel of Jesus Christ got me hit with a "harassment" complaint once. The problem is whether what our model can be, or is, used for incurs liability under such a license. Also, it hardly seems "open" if we give up our autonomy and take on liability just to use it.

Publishing a crawl, or the URLs, under CC0, CC-BY, BSD, or Apache would make them usable without restrictions or any further legal analysis. Does CC have permissively-licensed crawls somewhere?

Btw, I brought up URLs because transferring crawled content may be a copyright violation in the U.S., but sharing URLs isn't. Are the URLs released under a permissive license that overrides the Terms of Use?

Alternatively, would Common Crawl simply change its Terms so that they don't apply to the Crawled Content and URL databases, and release both under a permissive license?

> Publishing a crawl, or the URLs, under CC0, CC-BY, BSD, or Apache would make them usable without restrictions or any further legal analysis.

This isn't true, and I can't imagine that any lawyer would agree with this statement. CCF does not have rights ownership of any of the bytes of our crawl, so we cannot grant you any rights for the bytes in our crawl. Nothing that we could say could have any relationship to this legal issue.

It's confusing to me that you say this. Your own organization claims in its Terms of Service that it has rights over the crawls, even restricting how they are used. Now you are telling me you believe you have no such rights, and that no lawyer would think otherwise. If so, why are "Crawled Content" and restrictions on its use in your Terms of Service at all?

Very simply, if what you say is true, then you need to change your Terms to reflect that. You have two options:

1. Take crawled content out of the Terms of Service. Put a permissive license on the crawls.

2. Modify your Terms to say "Crawled Content" can be used for any purpose and distributed freely, with no restrictions. Currently, you impose extra restrictions.

That's contract law, maybe with copyright elements mixed in. Yet you also appear to believe your crawls aren't copyrightable. That's a huge unknown, because collections are copyrightable when sufficient creativity goes into them:

https://en.m.wikipedia.org/wiki/Copyright_in_compilation

Many collections claim a copyright, or carry a permissive license, for this reason. Again, simply saying your crawls and URL databases are permissively licensed would solve that problem. It takes just one edit on a few web pages.

If the crawls and DBs are truly without restrictions, please put a permissive license on their respective pages. Also, please change your Terms to put no restrictions on Crawled Content. Instead, they should say something like: it's free to use and distribute, with no warranty or liability on you. The usual stuff.

I'll emphasize again that a permissively-licensed list of all the URLs you've crawled is one of the most valuable changes you could make.

You made me sad that I attempted to reply.

Edit: spelling

You told me you made no legal claims on your crawled content. You implied you wanted it to be free for all uses.

I linked to your Terms of Service, which claim control of "Crawled Content" with restrictions. I asked you to remove those parts or change them to BSD, etc., for full permissions.

You denied that any legal claims existed, despite your Terms, and a possible compilation copyright, making exactly such claims. I explained how you can fix that by changing your Terms and download pages to be permissive.

Now, you are sad that you attempted to reply? Shouldn't you be happy that a Common Crawl fan who sees great value in your work warned you about restrictive Terms and unclear licensing? What's sad about that?

I am very grateful for the work your organization does. I'd like to promote it for many public-benefit uses, from machine learning to Google alternatives. I can't do that if the Terms are restrictive and the licensing is unclear. Please fix it so your supporters can tell potential users that the crawls and DBs themselves carry zero risk on your end.

[deleted]