The closest we got to vibe coding pre-LLMs was using a language with a strong type system in a good IDE and hitting Ctrl-Space to autocomplete your way to a working program.
I wonder if LLMs can use the type information more like a human with an IDE.
e.g. it generates "(blah blah...); foo." and at that point it is constrained to only generate tokens corresponding to public members of foo's type.
Just like how current-gen LLMs can reliably generate JSON that satisfies a schema, the next gen will be guaranteed to natively generate syntactically and type-correct code.
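A rough sketch of what that constraint could look like at decode time. Everything here is illustrative: logits and the decoded vocabulary are assumed to be parallel lists, and public_members_of is an imaginary compiler/LSP hook, not a real API.

    import math

    def mask_to_members(logits, token_texts, allowed_members):
        """Constrained decoding step: any token that can't start (or extend
        into) one of the allowed member names gets probability zero."""
        masked = list(logits)
        for token_id, text in enumerate(token_texts):
            fits = any(m.startswith(text) or text.startswith(m)
                       for m in allowed_members)
            if not fits:
                masked[token_id] = -math.inf
        return masked

    # Hypothetical wiring, once the model has emitted "...; foo.":
    #   allowed = public_members_of("foo")   # imaginary compiler/LSP hook
    #   logits  = mask_to_members(logits, vocab_texts, allowed)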
> I wonder if LLMs can use the type information more like a human with an IDE.
Just throw more GPUs at the problem: generate N responses in parallel and discard the ones that fail to match the required type signature. It’s like running a linter or type-check step, but specific to that one line.
We have infinite uranium anyway!
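Tongue-in-cheek, but the rejection-sampling version really is easy to wire up. A minimal sketch, assuming a generate sampler and a type_checks callback (e.g. shelling out to mypy or tsc on the candidate); both names are placeholders, not real APIs.

    def best_of_n(prompt, generate, type_checks, n=8):
        """Sample n candidates and return the first one that passes the
        external type check, or None if they all fail."""
        candidates = [generate(prompt) for _ in range(n)]  # could run in parallel
        for candidate in candidates:
            if type_checks(candidate):
                return candidate
        return None

    # Placeholder wiring:
    #   snippet = best_of_n("def head(xs: list[int]) -> int:\n",
    #                       generate=my_llm_sample,
    #                       type_checks=lambda code: run_mypy(code) == 0)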
LLMs can use LSPs. https://en.wikipedia.org/wiki/Language_Server_Protocol
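And the LSP side is just JSON-RPC over stdio, so an agent can ask the same language server the IDE uses. A toy sketch of framing a textDocument/completion request; the file URI and position are placeholders, and the initialize / didOpen handshake is omitted.

    import json

    def lsp_request(method, params, msg_id=1):
        """Frame a JSON-RPC request the way LSP servers expect it over stdio:
        a Content-Length header, a blank line, then the JSON body."""
        body = json.dumps({"jsonrpc": "2.0", "id": msg_id,
                           "method": method, "params": params}).encode("utf-8")
        return ("Content-Length: %d\r\n\r\n" % len(body)).encode("ascii") + body

    # Ask for completions right after "foo." (placeholder file and position);
    # this would be written to e.g. typescript-language-server's stdin:
    request = lsp_request("textDocument/completion", {
        "textDocument": {"uri": "file:///src/main.ts"},
        "position": {"line": 41, "character": 8},
    })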
You can already use LLM engines that force generation to conform to an arbitrary CFG definition. I am not aware of any systems that apply that to generating actual programming-language code, though.
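For the first half of that, llama.cpp's GBNF grammars are one concrete example: the sampler masks out every token that would violate the grammar. A rough sketch using the llama-cpp-python bindings, assuming its grammar support works as documented; the model path and the toy whitelist grammar are placeholders, and guaranteeing actually type-correct code would need far more than a CFG.

    from llama_cpp import Llama, LlamaGrammar

    # Toy grammar: only allow "foo.<member>()" where <member> is whitelisted,
    # a stand-in for "public members of foo's type".
    GRAMMAR = (
        'root   ::= "foo." member "()"\n'
        'member ::= "close" | "flush" | "write"\n'
    )

    llm = Llama(model_path="model.gguf")            # placeholder path
    grammar = LlamaGrammar.from_string(GRAMMAR)
    out = llm("Finish the statement: ", grammar=grammar, max_tokens=16)
    print(out["choices"][0]["text"])                # output must match the grammar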