> No, I don't.

Strange. For a simple "add two integers" you now have to make five different updates to the spec to make it unambiguous, restarting the work from scratch (that is, starting a new context) every time.

What happens when your task isn't to add two integers? How many iterations of the spec will you have to do before you arrive at an unambiguous one, and how big will it be?

> Once you figure out how to communicate,

LLMs don't communicate.

> Right, if that's what you meant, then yeah, of course they don't ignore the existing code; if there is a function that already does what it needs, it'll use that.

Of course it won't, since LLMs don't learn. When you start a new context, the world doesn't exist. The model literally has no idea what does and does not exist in your project.

It may search for existing functionality given a spec, a definition, a question, a brainstorming skill, or a thinking/planning mode. But it may just as likely not, because there is no reliable way for anyone to direct it, and the models have no learning or object permanence.

> If the agent/LLM you use doesn't automatically do this, I suggest you try something better, like Codex or Claude Code.

The most infuriating thing about these conversations is that people hyping AI assume everyone but them is stupid, or doing something incorrectly.

We are supposed to always believe people who say "LLMs just work", without any doubt, on faith alone.

However, people who do the exact same things, use the exact same tools, and see all the problems for what they are? Well, they are stupid idiots with skill issues who don't know anything and probably use GPT 1.0 or something.

Neither Claude nor Codex are magic silver bullets. Claude will happily reinvent any and all functions it wants, and has been doing so since the very first day it was unleashed onto the world.

> But anyways, you don't really seem like you're looking for improving, but instead try to dismiss better techniques available

Yup. Just as I said previously.

There are some magical techniques, and if you don't use them, you're a stupid Luddite idiot.

It doesn't matter that the person touting these magical techniques completely ignores and misses the whole point of the conversation and is fully prejudiced against you. The one who needs to improve, for some vague, condescending definition of improvement, is you.

> LLMs don't communicate.

Similarly, some humans seem to be unable to as well. The problem is, you need to be good at communication to use LLMs effectively, and judging by this thread, it's pretty clear what the problem is. I hope you figure it out someday, or just ignore LLMs; no one is forcing you to use them (I hope, at least).

I don't mind what you do, and I'm not "hyping LLMs"; I see them as tools that are sometimes applicable. But even to use them in that way, you need to understand how to use them. Then again, maybe you don't want to, and that's fine too.

"However, people who do the exact same things, use the exact tools, and see all the problems for what they are? Well, they are stupid idiots with skill issues who don't know anything and probably use GPT 1.0 or something."

Perfectly exemplified.

Yeah, a summary of some imaginary arguments someone else (maybe?) made, quoted back to me, even though I never said any of those things? Fun :)

The "imaginary arguments" in question:

- "If the agent/LLM you use doesn't automatically does this, I suggest you try something better, like Codex or Claude Code."

- "you don't really seem like you're looking for improving"

- "Hopefully at least someone who wants to improve comes across it so this whole conversation wasn't a complete waste of time"

- "judging by this thread, it's pretty clear what the problem is. I hope you figure it out someday"

- "you need to understand how to use them. But again, maybe you don't want"

In other words: exactly what I said previously.

At this point, adieu.