Why is modifying weights sensibly impossible? Is it because a modification's "sensibility" is measurable only post facto, and we can have no confidence in any weight-based hypothesis?
It just doesn't feel like current LLMs could understand their own brains well enough to make general improvements, let alone clear the much higher bar of making non-trivial ones.