Non sequitur?
I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer.
If you look under the hood, the multi-layer perceptrons interleaved with the attention layers of the LLM are able to encode quite complex world models, derived from compressing its training set in a way that is formally as powerful as reasoning. These compressed world models are accessible when prompted correctly, and can express genuinely new and innovative thoughts NOT in the training set.
> I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer.
Would you show us? Genuinely asking
Unfortunately confidentiality prevents me from doing so—this was for work. I know it is something new that hasn’t been done before because we’re operating in a very niche scientific field where everyone knows everyone, and one person (me, or the members of my team) can be up to speed on what everyone else is doing.
It has now happened a couple of times that it pops out novel results. In computational chemistry, machine-learned potentials trained with transformer models have already resulted in publishable new chemistry. Those papers aren’t out yet, but expect them within a year.
[flagged]
I'm sorry you're so sour on this. It's an amazing and powerful technology, but you have to be able to adjust your own development style to make any use of it.