Good luck getting just the scroll bar right with vibe coding. You'll be surprised how much engineering goes into making that part work smoothly.
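For a concrete sense of what goes into "just the scroll bar", here is a minimal sketch (illustrative TypeScript, not from any comment in this thread) of only the thumb geometry. Even this small piece has to handle content that already fits, a minimum grabbable thumb size, and clamping at the extremes -- all common sources of scrolling bugs.

```typescript
interface ScrollMetrics {
  thumbSize: number;   // thumb length along the track, in pixels
  thumbOffset: number; // thumb position along the track, in pixels
}

function scrollThumb(
  contentSize: number,  // total document height in pixels
  viewportSize: number, // visible height in pixels
  scrollOffset: number, // current scroll position, 0 .. contentSize - viewportSize
  trackSize: number,    // scroll bar track length in pixels
  minThumb = 20         // keep the thumb large enough to grab
): ScrollMetrics {
  const maxScroll = Math.max(0, contentSize - viewportSize);
  if (maxScroll === 0) {
    // Content fits in the viewport: thumb fills the track, nothing to scroll.
    return { thumbSize: trackSize, thumbOffset: 0 };
  }
  // Thumb length is proportional to how much of the content is visible.
  const rawThumb = (viewportSize / contentSize) * trackSize;
  const thumbSize = Math.max(minThumb, rawThumb);
  // Map the clamped scroll position into the track space left over for the thumb.
  const clamped = Math.min(Math.max(scrollOffset, 0), maxScroll);
  const range = Math.max(0, trackSize - thumbSize);
  return { thumbSize, thumbOffset: (clamped / maxScroll) * range };
}
```

Dragging, wheel deltas, smooth scrolling, and keeping the caret in view are separate problems layered on top of this.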
If enough examples are in-distribution, the model's scroll bar implementation will work just fine. (Eventually, after the human learns what to ask for and how to ask for it.)
Why wouldn't it?
Most programs today regularly have bugs with scrolling. Thus, an LLM will produce for you... A buggy piece of code.
LLMs are not Xerox machines. They can, in fact, produce better code than is in their training set.
That's funny, given how much of it is wrong. Ask the LLMs to vibe code a text editor and you'll get a React app using Supabase. Engineering !== Token prediction
Non sequitur?
I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer.
If you look under the hood, the multi-layer perceptrons and attention heads of the LLM are able to encode quite complex world models, derived from compressing its training set in a way that is formally as powerful as reasoning. These compressed model representations are accessible when prompted correctly, and can express genuinely new and innovative thoughts NOT in the training set.
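To make the mechanism being described concrete, here is a heavily simplified sketch (illustrative TypeScript, single head, no layer norm or batching, not from the comment above) of one transformer block: attention mixes information across tokens, and a separate multi-layer perceptron then transforms each token independently. The learned weight matrices of both parts are where any compressed knowledge would live.

```typescript
type Vec = number[];
type Mat = number[][]; // row-major

const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);
const matVec = (m: Mat, v: Vec): Vec => m.map(row => dot(row, v));
const addVec = (a: Vec, b: Vec): Vec => a.map((x, i) => x + b[i]);

function softmax(xs: Vec): Vec {
  const m = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - m));
  const sum = exps.reduce((s, x) => s + x, 0);
  return exps.map(x => x / sum);
}

// Single-head self-attention: each token attends to every token in the sequence.
function attention(tokens: Vec[], Wq: Mat, Wk: Mat, Wv: Mat): Vec[] {
  const q = tokens.map(t => matVec(Wq, t));
  const k = tokens.map(t => matVec(Wk, t));
  const v = tokens.map(t => matVec(Wv, t));
  const scale = Math.sqrt(q[0].length);
  return q.map(qi => {
    const weights = softmax(k.map(kj => dot(qi, kj) / scale));
    // Weighted sum of value vectors.
    return v.reduce(
      (acc, vj, j) => addVec(acc, vj.map(x => x * weights[j])),
      new Array(v[0].length).fill(0)
    );
  });
}

// Two-layer perceptron (the "MLP"), applied to each token on its own.
function mlp(x: Vec, W1: Mat, W2: Mat): Vec {
  const hidden = matVec(W1, x).map(h => Math.max(0, h)); // ReLU for brevity
  return matVec(W2, hidden);
}

// One simplified block: attention, then the per-token MLP, with residual connections.
function block(tokens: Vec[], Wq: Mat, Wk: Mat, Wv: Mat, W1: Mat, W2: Mat): Vec[] {
  const attended = attention(tokens, Wq, Wk, Wv);
  return tokens.map((t, i) => {
    const afterAttn = addVec(t, attended[i]);
    return addVec(afterAttn, mlp(afterAttn, W1, W2));
  });
}
```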
> I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer.
Would you show us? Genuinely asking
Unfortunately confidentiality prevents me from doing so—this was for work. I know it is something new that hasn’t been done before because we’re operating in a very niche scientific field where everyone knows everyone, and one person (me, or a member of my team) can be up to speed on what everyone else is doing.
It’s happened a couple of times now that it pops out novel results. In computational chemistry, machine-learned potentials trained with transformer models have already resulted in publishable new chemistry. Those papers aren’t out yet, but expect them within a year.
I'm sorry you're so sour on this. It's an amazing and powerful technology, but you have to be able to adjust your own development style to make any use of it.
Ask the LLMs to vibe code a text editor, and you'll get pretty much what you deserve in return for zero effort of your own.
Ask the best available models -- emphasis on models -- for help designing the text editor at a structural rather than functional level first, being specific about what you want and emphasizing component-level tests wherever possible, and only then follow up with actual code generation, and you'll get much better results.
I think this comment exposes an important point: people have different opinions of what "vibe coding" even means. If I were to ask an LLM to vibe code a text editor, I guarantee you I wouldn't get a React app using Supabase -- because I'd give it pages of requirements documentation and tell it not only what I want, but also the important decisions about how to make it.
Obviously no model is going to one-shot something like a full text editor, but there's an ocean of difference between defining vibe coding as prompting "Make me a text editor" versus spending days/weeks going back and forth on architecture and implementation with a model while it's implementing things bottom-up.
Both seem like common definitions of the term, but only one of them will _actually_ work here.
Do you really think so? Have you ever explored the source of something like:
https://github.com/JetBrains/intellij-community
Doesn't have to. The LLM will do it! We're done with code, aren't we?
Code is still there, but humans are done dealing with it. We're at a higher level of abstraction now. LLMs are like compilers, operating at a higher level. Nobody programs assembly language any more, much less machine language, even though the machine language is still down there in the end.
> Nobody programs assembly language
They certainly do, and I can't really follow the analogy you are building.
> We're at a higher level of abstraction now.
To me, an abstraction higher than a programming language would be natural language or some DSL that approximates it.
At the moment, I don't think most people using LLMs are reading paragraphs to maintain code. And LLMs aren't producing code in natural language.
That isn't an abstraction over the language; it's an abstraction over your use of the computer to produce code in that language. If anything, you are abstracting yourself away.
Furthermore, if I am following you, you are basically saying that you have to make a call to a (free or paid) model to explain your code every time you want to alter it.
I don't know how insane that sounds to most people, but to me, it sounds bat-shit.
I've worked on 3 different WYSIWYG editors for web and desktop applications over the years, lightly contributed to a handful of other open-source editors, and spent plenty of time building my own personal editors from scratch (and am currently using gpt-5 to fix my own human bugs in a rewrite of the Notebook.ai text editor that I re-re-implemented ~8 years ago).
Editors are incredibly complex and require domain knowledge to guide agents toward the correct architecture and implementation (and away from the usual naive pitfalls), but in my experience the latest models reason about and implement features/changes just fine.
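To make "the usual naive pitfalls" concrete, here is a minimal sketch (illustrative TypeScript, not from the comment above) of one such architectural decision: storing the document in a gap buffer so edits at the cursor are cheap, instead of rebuilding one big string on every keystroke.

```typescript
// A gap buffer keeps free space ("the gap") at the cursor, so typing is an
// O(1) amortized write instead of reallocating the whole document string.
class GapBuffer {
  private buf: string[];
  private gapStart: number;
  private gapEnd: number; // exclusive

  constructor(capacity = 64) {
    this.buf = new Array(capacity).fill("");
    this.gapStart = 0;
    this.gapEnd = capacity;
  }

  /** Move the gap so it starts at text position `pos` (0 <= pos <= length). */
  private moveGap(pos: number): void {
    while (pos < this.gapStart) {
      // Shift one character from before the gap to after it.
      this.gapStart--;
      this.gapEnd--;
      this.buf[this.gapEnd] = this.buf[this.gapStart];
    }
    while (pos > this.gapStart) {
      // Shift one character from after the gap to before it.
      this.buf[this.gapStart] = this.buf[this.gapEnd];
      this.gapStart++;
      this.gapEnd++;
    }
  }

  /** Insert `text` at text position `pos` (typically the cursor). */
  insert(pos: number, text: string): void {
    this.moveGap(pos);
    for (const ch of text) {
      if (this.gapStart === this.gapEnd) this.grow();
      this.buf[this.gapStart++] = ch;
    }
  }

  /** Double the capacity by widening the gap in place. */
  private grow(): void {
    const extra = this.buf.length;
    const tail = this.buf.slice(this.gapEnd);
    this.buf = this.buf.slice(0, this.gapStart).concat(new Array(extra).fill(""), tail);
    this.gapEnd = this.gapStart + extra;
  }

  toString(): string {
    return this.buf.slice(0, this.gapStart).join("") + this.buf.slice(this.gapEnd).join("");
  }
}
```

Production editors more often reach for a rope or piece table, but the choice of text storage is exactly the kind of decision a human still has to steer an agent toward.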