I just completed a vibe-coded prototype of a non-trivial product, purely to test the abilities and limits of LLMs.

My experience aligns largely with your excellent comment.

> But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.

Where LLMs excel is in putting out large templates of what is needed, but the results are frayed at the edges. Imagine programming as a jigsaw puzzle where the pieces have to fit together: LLMs can lay out the broad pieces, but fail to fit them together precisely.

> But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.

The more fundamental model of a program is a "theory" or "mental model", which unfortunately is not codified in the training data. LLMs can put together broad outlines based on their training data, but lack precision when modeling at a more abstract level. For example, the LLM does not precisely understand how concurrency can impact memory access, since it lacks a theory of it.
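
To make the concurrency point concrete, here is a minimal Python sketch (my own illustration, not code from the prototype) of the kind of reasoning a real theory of memory access has to cover: two threads doing an unsynchronized read-modify-write on a shared counter, where interleavings silently lose updates.

    import threading

    counter = 0  # shared state, mutated without any lock

    def increment(n):
        global counter
        for _ in range(n):
            tmp = counter      # read
            counter = tmp + 1  # write; another thread may have updated counter in between

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # expected 400000, but typically prints less because of lost updates
    print(counter)

Predicting the range of possible final values requires reasoning over the interleavings of reads and writes, not pattern-matching on similar-looking snippets.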

> the technical challenge is how do you pose a training problem to force a model to learn beyond that.

This is the main challenge: how can an LLM learn more abstract patterns? For example, in the Towers of Hanoi problem, can the LLM learn the recursion and what recursion means? This requires the LLM to learn abstraction precisely. I suspect LLMs learn abstraction "fuzzily", but what is required is to learn it "precisely". That precision, or determinism, is largely where there is still a huge gap.
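
To be concrete about what "learning the recursion" would mean, here is the textbook recursive solution (my own sketch, not something a model produced): the abstraction is that moving n disks reduces to moving n-1 disks out of the way, moving the largest disk, then restacking the n-1 disks. That invariant is what has to be learned precisely, not just traces of legal moves.

    def hanoi(n, source, target, spare):
        """Move n disks from source to target, using spare as scratch."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)            # clear the top n-1 disks out of the way
        print(f"move disk {n}: {source} -> {target}")  # move the largest disk
        hanoi(n - 1, spare, target, source)            # restack the n-1 disks on top

    hanoi(3, "A", "C", "B")

A model that has only fuzzily absorbed move sequences can reproduce hanoi(3); grasping the invariant precisely is what would let you trust it on hanoi(10).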

LLM boosters would point to the Bitter Lesson and say it is only a matter of time before this happens, but I am a skeptic. I think the process of symbolic abstraction is not yet understood well enough to be formalized.