This is sort of hilarious: to use an LLM as a good search interface, you first have to build... a search engine.

I guess this is why Kagi Quick Answer has consistently been one of the best AI tools I use. The search is good, so their agent gets the best context for its summaries. Makes sense.

It means building a system that amplifies the strengths of the LLM by feeding it the right knowledge in the right format at inference time. Context design is both a search problem (using 'search' as a shorthand for all retrieval) and a representation problem.
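To make that concrete, here is a minimal sketch of the two halves. The corpus, the simplified BM25 scorer, and the prompt template are all illustrative assumptions, not any particular product's implementation: one function handles retrieval, the other handles representation.

```python
# Context design in two steps: retrieve the right knowledge,
# then represent it in the right format. Illustrative only.
import math
from collections import Counter

def bm25_lite(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with a simplified BM25 (the 'search' half)."""
    tokenized = [d.lower().split() for d in docs]
    avg_len = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (
                freq + k1 * (1 - b + b * len(tokens) / avg_len))
        scores.append(score)
    return scores

def build_context(query, docs, top_k=2):
    """Represent the top hits as a structured, cited block (the 'representation' half)."""
    ranked = sorted(zip(bm25_lite(query, docs), docs), reverse=True)[:top_k]
    sources = "\n".join(f"[{i+1}] {doc}" for i, (_, doc) in enumerate(ranked))
    return (f"Answer using only the sources below. Cite them by number.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")
```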

Just dumping raw reams of text into the 'prompt' isn't the best way to get great results (compare the raw dump with the structured context in the snippet below). Now, I am fully aware that anything I can do on my side of the API, the LLM provider can and eventually will do as well. After all, search also evolved beyond PageRank into thousands of specialized heuristic subsystems.
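As a hypothetical usage of `build_context` from the sketch above (the document contents are made up), the contrast between the two representations looks like this:

```python
docs = [
    "Kagi Quick Answer synthesizes search results into cited summaries.",
    "BM25 ranks documents by term frequency and term rarity.",
    "PageRank scores pages by the structure of the link graph.",
]
raw_dump = "\n".join(docs)  # reams of text: no scoping, no ordering, no instructions
prompt = build_context("term frequency ranking", docs)  # scoped, ordered, cited
print(prompt)
```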