The neatest thing I've seen is that you can put a SQLite db on an HTTP server and read it efficiently using range requests.
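
The whole trick is one Range header. A minimal sketch in TypeScript (the URL is a placeholder, and the page size is whatever the db was built with; projects like sql.js-httpvfs wrap exactly this in a SQLite VFS):

    // Fetch one page of a remote SQLite file with an HTTP range request.
    // Assumes the server honors Range and answers 206 Partial Content.
    async function fetchPage(url: string, pageNumber: number, pageSize = 4096): Promise<Uint8Array> {
      const start = pageNumber * pageSize;
      const end = start + pageSize - 1; // byte ranges are inclusive
      const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
      if (res.status !== 206) throw new Error(`expected 206, got ${res.status}`);
      return new Uint8Array(await res.arrayBuffer());
    }

    // Page 0 begins with the magic string "SQLite format 3\0".
    const page0 = await fetchPage("https://example.com/data.db", 0);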

The latency on those requests matters, though.

You'll probably benefit from using the largest possible page size (64 KiB, SQLite's maximum), HTTP keep-alive so each page fetch doesn't pay a fresh TCP/TLS handshake, etc.
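
The page size has to be set while the db is empty, or the file rebuilt afterwards. A sketch using better-sqlite3 (the filename is a placeholder):

    import Database from "better-sqlite3";

    const db = new Database("data.db");
    db.pragma("page_size = 65536"); // 64 KiB, SQLite's maximum
    db.exec("VACUUM");              // rewrites the file so the new page size applies
    console.log(db.pragma("page_size", { simple: true })); // 65536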

But even then, you'll pull at most 64 KiB per request. With 10 ms response times and one request in flight at a time, that caps out around 52 Mbit/s.
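
Back-of-envelope:

    // 64 KiB per round trip, requests issued sequentially, 10 ms each.
    const bytesPerRequest = 64 * 1024;   // 65,536
    const requestsPerSecond = 1000 / 10; // 100
    console.log((bytesPerRequest * 8 * requestsPerSecond) / 1e6); // ~52.4 Mbit/s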

So yeah, if your queries end up reading just a couple of pages, it's great. If they require a full table scan, you need some smart prefetching+caching to hide the latency.
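
Something like the readahead cache below; the chunk size, readahead depth, and helper names are made up for illustration, not any particular library's API:

    // Cache chunk fetches by index; on access, also start fetches for the
    // next few chunks so a sequential scan overlaps its round trips.
    const CHUNK = 64 * 1024;
    const READAHEAD = 4;
    const cache = new Map<number, Promise<Uint8Array>>();

    async function fetchRange(url: string, start: number, end: number): Promise<Uint8Array> {
      const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
      return new Uint8Array(await res.arrayBuffer());
    }

    function readChunk(url: string, index: number): Promise<Uint8Array> {
      for (let i = index; i < index + READAHEAD; i++) {
        if (!cache.has(i)) cache.set(i, fetchRange(url, i * CHUNK, (i + 1) * CHUNK - 1));
      }
      return cache.get(index)!;
    }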

In my experience, this works when the db is read-only.

And in these read-only cases I'd use Parquet files queried with DuckDB-Wasm.
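
A sketch with @duckdb/duckdb-wasm in the browser (the Parquet URL is a placeholder); DuckDB reads the footer first, then pulls only the row groups and columns a query touches:

    import * as duckdb from "@duckdb/duckdb-wasm";

    // Pick a bundle served from jsDelivr and start DuckDB in a web worker.
    const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
    const worker = new Worker(bundle.mainWorker!);
    const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
    await db.instantiate(bundle.mainModule, bundle.pthreadWorker);

    // DuckDB issues its own HTTP range requests against the Parquet file.
    const conn = await db.connect();
    const table = await conn.query(
      "SELECT count(*) AS n FROM 'https://example.com/data.parquet'"
    );
    console.log(table.toArray());
    await conn.close();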

So basically using the HTTP server as a random-access data store? Sounds about right.