Another point: we can inspect the contents of a Wikipedia page and potentially correct it, whereas we (as users) cannot determine why an LLM is outputting something, or what the basis of that assertion is, and we cannot correct it.
You could even download a Wikipedia article, make your changes to it, and upload it to 250 GitHub repos to strengthen your influence on the LLM.
This doesn't feel like a problem anymore now that the good ones all have web search tools.
Instead the problem is that there are barely any good websites left.
The problem is that the good websites are constantly scraped and botted by these LLM companies and trained on, and users ask the LLMs instead of visiting the sites, so the owners either shut them down or enshittify them.
And it's also easier than ever to put slop on the internet, so the number of "bad" (as in low-quality) websites has gone up, I suppose.
I dunno, works for me. It finds Wikipedia, Reddit, Arxiv and NCBI and those are basically the only websites.