Tests (and type-checkers, linters, formal specs, etc.) ground the model in reality: they show it that it got something wrong (without needing a human in the loop). It's empiricism, "nullius in verba": the scientific approach, which led to remarkable advances in a few hundred years that over a thousand years of ungrounded philosophy couldn't achieve.

The scientific approach is not only or primarily empiricism. We didn't test our way to understanding. The scientific approach starts with a theory that does its best to explain some phenomenon. Then the theory is criticized by experts. Finally, if it seems to be a promising theory, tests are constructed. The tests can help verify the theory, but it is the theory that provides the explanation, which is the important part. Once we have an explanation, we have understanding, which allows us to play around with the model to come up with new things, diagnose problems, etc.

The scientific approach is theory driven, not test driven. Understanding (and the power that gives us) is the goal.

> The scientific approach starts with a theory that does its best to explain some phenomenon

At the risk of stretching the analogy, the LLM's internal representation is that theory: gradient descent has tried to "explain" its input corpus (+ RL fine-tuning), which will likely contain source code, documentation, papers, etc. relevant to our problem.

I'd also say that a piece of software is a theory too (quite literally, if we follow Curry-Howard). A piece of software generated by an LLM is a more-specific, more-explicit subset of its internal NN model.
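To make the Curry–Howard point concrete, here's a minimal Lean 4 sketch (my own toy illustration; the names are arbitrary). The same term shape inhabits a type whether we read the type as a logical proposition or as an ordinary data type:

```lean
-- Propositions-as-types: the type is the statement, the term is the proof.
-- "A and B implies A", proved by projecting the left component.
def andLeft {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- The same shape as an everyday program: taking the first of a pair.
def pairLeft {α β : Type} : α × β → α :=
  fun p => p.1
```

Under this reading, writing a program is literally proposing a (constructive) theory of what the problem is.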

Tests, and other real CLI interactions, allow the model to find out that it's wrong (~empiricism), as opposed to going round and round in chain-of-thought (~philosophy).

Of course, test failures don't tell us how to actually make them pass; in the same way that unexpected experimental/observational results don't tell us what an appropriate explanation/theory should be (see: dark matter, dark energy, etc.!). See the toy example below.
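To put that in code, here's an entirely hypothetical toy: the failure message is real evidence that the "theory" (the code) is wrong, but it contains no recipe for the fix.

```python
def median(xs):
    """Buggy 'theory' of the median: only correct for odd-length lists."""
    return sorted(xs)[len(xs) // 2]

def test_median_even_length():
    assert median([1, 2, 3, 4]) == 2.5  # fails: the function returns 3

if __name__ == "__main__":
    try:
        test_median_even_length()
        print("PASS")
    except AssertionError:
        # The failure tells us *that* we're wrong (empiricism),
        # not *how* to fix it (that still takes a better theory).
        print("FAIL: median([1, 2, 3, 4]) returned", median([1, 2, 3, 4]))
```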

The AI is just pattern matching. Vibing is not understanding, whether done by humans or machines. Vibe programmers (of which there are many) make a mess of the codebase, piling on patch after patch. But they get the tests to pass!

Vibing gives you something like the geocentric model of the solar system. It kind of works, but it's much more complicated and hard to work with.

Nice analogy *

I guess the current wave is going to give us Software Development Epicycles (SDEC?)

* All analogies are "wrong", some analogies are useful

The theory still emanated from actual observations, didn't it?

It did, but the observations were meaningless without a human intellect trying to make sense of them.

No, the theory comes from the author's knowledge, culture, and inclinations, not from the facts.

Obviously the author has to do much work in selecting the correct bits from this baggage to get a structure that makes useful predictions, that is to say, predictions that reproduce observable facts. But ultimately the theory comes from the author, not from the facts; it would be hard to imagine how one could come up with a theory that doesn't fit all the facts known to the author, if the theory truly "emanated" from the facts in any sense strict enough to matter.

It most certainly is not. All your tests are doing is seeding the context with tokens that increase the probability of tokens related to solving the problem being selected next. One small problem: if the dataset doesn't have sufficiently well-represented answers to the specific problem, no amount of finessing the probability of token selection is going to lead to the LLM solving it. The scientific method is grounded in the ability to reason, not in probabilistically retrieving words that are statistically highly correlated with appearing near other words.

This only holds if you understand what's in the tests, and the tests are realistic. The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.

> The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.

I disagree. Having tests (even if the LLM wrote them itself!) gives the model some grounding, and exposes some of its inconsistencies. LLMs are not logically-omniscient; they can "change their minds" (next-token probabilities) when confronted with evidence (e.g. test failure messages). Chain-of-thought allows more computation to happen; but it doesn't give the model any extra evidence (i.e. Shannon information; outcomes that are surprising, given its prior probabilities).
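Concretely, the kind of loop I mean looks something like this (a rough sketch of the idea, not any particular tool: `generate` is a stand-in for whatever LLM API is in use, and the file names are hypothetical):

```python
import subprocess

def generate(conversation: list[str]) -> str:
    """Placeholder for an LLM call; returns candidate source code."""
    raise NotImplementedError

def run_tests() -> subprocess.CompletedProcess:
    # Any real command works; the exit code and output are the evidence.
    return subprocess.run(["pytest", "-x"], capture_output=True, text=True)

conversation = ["Write foo.py so that the tests in test_foo.py pass."]
for attempt in range(5):
    with open("foo.py", "w") as f:
        f.write(generate(conversation))
    result = run_tests()
    if result.returncode == 0:
        break  # grounded success signal, no human in the loop
    # A failure is a surprising outcome, i.e. the Shannon information
    # that chain-of-thought alone can't supply:
    conversation.append(f"Tests failed:\n{result.stdout}\n{result.stderr}")
```

Nothing in the loop tells the model *how* to fix the code, only that its current "theory" is wrong.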

I disagree, to some degree. Tests have value even beyond whether they test the right thing. At the very least, they show that something worked and now doesn't, or vice versa. That has value in itself.

This assumes that tests are realistic, which for the most part they are not.