It seems like search AI results are generally misunderstood; I misunderstood them myself for the first weeks/months.
They are not just an LLM answer: they are an (often cached) LLM summary of web results.
This is why they were often skewed by nonsensical Reddit responses [0].
Depending on the type of input, they can lean more toward web summary or LLM answer.
So I imagine it can just grab the description of the "car wash" test from web results and get it right because of that.
[0] https://www.bbc.com/news/articles/cd11gzejgz4o
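The retrieve-then-summarize flow described above can be sketched as a few lines of Python. This is a minimal sketch under stated assumptions: `web_search` and `summarize` are hypothetical stand-ins (a real system would hit a crawler index and prompt an LLM, respectively), and the "often cached" behavior is modeled with a plain `lru_cache`.

```python
from functools import lru_cache

def web_search(query: str) -> list[str]:
    # Stub: a real system would query a search index / crawler here.
    return [f"snippet about {query} from site {i}" for i in range(3)]

def summarize(query: str, snippets: list[str]) -> str:
    # Stub: a real system would prompt an LLM with the snippets as context.
    return f"{query}: synthesized from {len(snippets)} web results"

@lru_cache(maxsize=1024)  # "often cached": a repeated query reuses the stored summary
def search_ai_answer(query: str) -> str:
    snippets = web_search(query)   # retrieval step (RAG), not parametric recall
    return summarize(query, snippets)

print(search_ai_answer("car wash test"))
```

Because the answer is grounded in whatever the retrieval step returns, a nonsensical top result (a joke Reddit post, say) skews the summary, which is consistent with the BBC story cited in [0].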
Presumably it did an actual search and summarized the results, answering neither "off the cuff" by following gradients to reproduce the text it was trained on, nor by following gradients to reproduce the "logic" of reasoning. [1]
[1] e.g. trained on traces of a reasoning process
If you'd taught math or physics, you'd have deduced that kids are trained the same way. What other way is there?
We're still in the early stages of "reversing natural intelligence"; we don't have much data on actual "reasoning processes". We want lean4 formalization, but we need traces (formalizations) of lean4 formalizations. You can call the bottleneck "capitalism", but I'll just call it a lack of motivation in making compute cheaper and more efficient, so that a significant portion can be redirected to productive ends -- like lean4 formalization-formalization research -- as opposed to consumerist ends [1].
Rail will eventually become too cheap to meter, but meanwhile we'll have to wait for this generation of robber barons to "kill one another off" AND for the coming Rockefellers to "disappear into the sunset".
[1] where "enterprise" should also be regarded as a mass of uninformed consumers. It's a supply-side vs. demand-side ideological dichotomy in techno-economic policy. Grok this and you'll read less Economist (d-side) and more CPC/"Elon" [2] (s-side) propaganda.
[2] an idealized Elon who is able to formalize his own thought processes
It's almost certainly just RAG powered by their crawler.
Proving that RAG still matters.