
useful for adding semantic search to small local datasets, e.g. a folder of research papers on my computer

for web stuff, e.g. community forums, docs, and small sites which usually don't even have 1M rows of data, precomputing embeddings, storing them, and running a small vector search like this somewhere is much simpler/cheaper than running external services
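a minimal sketch of the precompute-and-store idea: embed the corpus once, save the matrix next to the data, and reload it at serve time with no external service. `embed()` here is a stand-in for a real embedding model, not a specific API:

```python
import os
import tempfile

import numpy as np

def embed(texts, dim=384):
    # Placeholder embeddings: random but L2-normalized,
    # shaped like what a real sentence-embedding model emits.
    rng = np.random.default_rng(0)
    v = rng.standard_normal((len(texts), dim)).astype(np.float32)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

docs = ["intro to the forum", "FAQ page", "release notes"]

# Precompute once; for small corpora this file is only a few KB.
path = os.path.join(tempfile.mkdtemp(), "embeddings.npy")
np.save(path, embed(docs))

# At serve time: one memory-mapped load, no vector DB to operate.
emb = np.load(path, mmap_mode="r")
print(emb.shape)  # (3, 384)
```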

it's about avoiding the operational hassle of dealing with a dozen-plus external services, logins, and APIs, even if they're free

(I do like Mixedbread for that, but I'd prefer it to run on my own lightweight server or serverless deployment)

These engagement bots are getting tiresome...

i think the question is really: can I turn my search problem into an in-process vector search problem that scales with the number of processes?
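the in-process framing above can be sketched in a few lines: if the index is just a normalized matrix, top-k cosine search is a matrix-vector product plus a partial sort, and each process can hold its own copy or shard. a brute-force sketch, assuming embeddings fit in memory:

```python
import numpy as np

def top_k(emb: np.ndarray, query: np.ndarray, k: int = 5):
    # Rows of emb are L2-normalized, so dot product == cosine similarity.
    scores = emb @ query
    idx = np.argpartition(-scores, k)[:k]   # O(n) partial selection
    idx = idx[np.argsort(-scores[idx])]     # sort only the k winners
    return idx, scores[idx]

rng = np.random.default_rng(1)
emb = rng.standard_normal((100_000, 128)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Query with a known row: a document is its own nearest neighbor.
idx, scores = top_k(emb, emb[123], k=5)
print(idx[0])  # 123
```

brute force over 100k x 128 floats is a single BLAS call per query, which is why this scales horizontally: no shared index server, just replicate the array per process.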
