> even if I optimized my software to the point that it could process the data at 1 Gbps

Are you sure you did the math correctly? We're scanning CT at my work, and we do have scale problems, but the bottleneck is database inserts. From your link, it looks like a shard is 10 TB, and that's for a year of data.

Still an insane amount of data, and a scale problem, of course.

Well, 10 TB divided by 1 Gbps is ~22 hours, and there are multiple log providers with many shards (my scan included data from certificates that had already expired at the time).
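Spelling out that division (a quick sanity check in Python; 10 TB is the shard size quoted above):

```python
# How long one 10 TB shard takes to stream at 1 Gbps.
shard_bytes = 10 * 10**12      # 10 TB, decimal
link_bps = 10**9               # 1 Gbps

hours = shard_bytes * 8 / link_bps / 3600
print(f"{hours:.1f} hours")    # 22.2 hours -- per shard, per log
```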

It would still be feasible to build a local database and keep it updated (at well under 1 Gbps), but the initial ingestion would take weeks at 1 Gbps, and I'd need the storage for it.
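The "keep it updated" part is the cheap half: RFC 6962 logs expose `get-sth` and `get-entries`, so once you've ingested up to some tree size you only ever fetch what's new. A rough sketch, assuming a single log (the URL is a placeholder, and real logs cap the `get-entries` batch size server-side):

```python
import time
import requests

LOG = "https://ct.example.com"  # placeholder; substitute a real RFC 6962 log URL

def process(entry: dict) -> None:
    """Stand-in for the real work: parse the entry, insert into the local DB."""
    pass

def tail_log(start: int) -> None:
    """Fetch only entries beyond what we've already ingested, forever."""
    while True:
        sth = requests.get(f"{LOG}/ct/v1/get-sth", timeout=30).json()
        tree_size = sth["tree_size"]
        while start < tree_size:
            end = min(start + 255, tree_size - 1)  # modest chunks; logs cap these
            resp = requests.get(
                f"{LOG}/ct/v1/get-entries",
                params={"start": start, "end": end},
                timeout=30,
            ).json()
            for entry in resp["entries"]:
                process(entry)
            start += len(resp["entries"])
        time.sleep(60)  # new certificates arrive continuously; poll again
```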

For most hobbyists not looking to spend a fortune on rented servers/cloud, it's out of reach already.

Not all use cases need every single log. For example, you may just want a log of certificates issued for domains that your company owns.
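A minimal sketch of that narrower use case (the domain list and names below are made up, and this assumes you've already extracted the SAN/CN strings from each log entry): instead of mirroring everything, keep only certificates whose names fall under your domains.

```python
OWNED = {"example.com", "example.org"}  # hypothetical company domains

def is_ours(name: str) -> bool:
    """True if a certificate name is one of our domains or a subdomain of one."""
    name = name.lower().rstrip(".").lstrip("*.")  # normalize, drop wildcard label
    return any(name == d or name.endswith("." + d) for d in OWNED)

# Keep (or alert on) only the matches instead of building a full local mirror.
for name in ["api.example.com", "evil-example.com", "*.example.org"]:
    print(name, is_ours(name))  # True, False, True
```

Note the suffix check requires the leading dot, so look-alikes such as `evil-example.com` don't match.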