> However, LLMs would also require >75% of our galaxy's energy output to reach human-level error rates in general.
citation needed
The activation-capping effect on LLM behavior is described in this paper:
https://www.anthropic.com/research/assistant-axis
The estimated energy consumption versus error rate is likely projected from agent tests and hidden-agent coverage.
You are correct in that such a large number likely carries substantial error itself, given that models change daily. =3
OK, your quote was over-generalized; you meant "current LLMs need..." and not "any conceivable LLM".
Also, the word "energy" does not appear on that page, so I'm not sure where the galaxy-scale energy figure comes from.
In general, "any conceivable LLM" was the metric, based on current energy-usage trends within known data-center peak loads (likely much higher in practice due to municipal NDAs). A straw-man argument over whether the curve is asymptotic or not is irrelevant with numbers that large. For example, going from 75% of our galaxy's energy output down to "only" 40% of total output still does not correct a core model-design problem.
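The diminishing-returns intuition behind that claim can be sketched with a toy model. Note that the power-law form and every number below are purely hypothetical illustrations, not figures from any cited source:

```python
# Toy model (hypothetical): assume energy grows as an inverse power of
# the target error rate, E(err) = k * err**(-alpha). Under that
# assumption, each halving of the error rate multiplies the energy cost
# by 2**alpha, so a one-time constant-factor efficiency gain (e.g.
# 75% -> 40% of some total) never changes the shape of the curve.

def energy_required(err: float, k: float = 1.0, alpha: float = 4.0) -> float:
    """Hypothetical energy cost (arbitrary units) to reach error rate `err`."""
    return k * err ** (-alpha)

base = energy_required(0.10)    # cost at 10% error
better = energy_required(0.05)  # cost at 5% error
print(better / base)            # each halving costs ~2**alpha = 16x more
```

The exponent `alpha` is a free parameter here; the point is only that under any such scaling law, the last few percentage points of error dominate the total cost.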
LLMs are not "AI", and are unlikely ever to be at that cost... but neuromorphic computing is a more interesting area of study. =3