> For every time that I'd get a better answer if the LLM had a bit more context on me

If you already know what a good answer is, why use an LLM? If the answer is "it'll just write the same thing quicker than I would have", then why not just use it as an autocomplete feature?

That might be exactly how they're using it. A lot of my LLM use is really just having it write something I would have spent a long time typing out, then making a few edits.

Once I get into stuff I haven't worked out how to do yet, the LLM often doesn't really know either unless I can work it out myself and explain it first.

That rubber-duck approach is a valid workflow. Keep iterating on how you want to explain something until the LLM can echo back (and expand upon) whatever the hell you are trying to get out of your head.

Sometimes I’ll do five or six edits to a single prompt to get the LLM to echo back something that sounds right. That refinement really helps clarify my thinking.

…it’s also dangerous if you aren’t careful, because you’re basically trying to get the model to agree with you and go along with whatever you’re saying. Gotta be careful not to let the model jerk you off too hard!

Yes, I have had times where I realised after a while that my proposed approach would never actually work because of some overlooked high-level issue, but the LLM never spots that kind of thing and just happily keeps trying.

Maybe that's a good thing - if it could think that well, what would I be contributing?

You don't need to know what the answer is ahead of time to recognize the difference between a good answer and a bad answer. Many times the answer comes back as a Python script and I'm like, oh I hate Python, rewrite that. So it's useful to have a permanent prompt that tells it things like that.

But for me as well, that prompt is very short. I don't keep a large stable of reusable prompts because I agree: every unnecessary word is a distraction that does more harm than good.
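
If it helps to picture it, here's roughly what that looks like when calling the API directly instead of the chat UI: a minimal sketch using the Anthropic Python SDK, where the model name and the prompt wording are just placeholders, not my actual setup.

```python
import anthropic

# A short standing prompt carrying a few durable preferences -- the kind of
# thing described above (e.g. "don't hand me Python").
# The wording here is purely illustrative.
SYSTEM_PROMPT = (
    "I dislike Python; prefer shell or Rust for scripts. "
    "Keep explanations brief and show the code first."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    system=SYSTEM_PROMPT,  # the "permanent prompt" rides along with every request
    messages=[
        {"role": "user", "content": "Write a script that dedupes lines in a file."}
    ],
)

print(response.content[0].text)
```

The point isn't the SDK; it's that the standing prompt stays a couple of sentences long, so it steers the defaults without drowning the actual question.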

For example, when I'm learning a new library or technique, I often tell Claude that I'm new to it and still learning, and the responses tend to be much more helpful. I'm currently using that to learn Qt with custom OpenGL shaders, and it helps a lot that Claude knows I'm not a genius about this.

Because it's convenient not having to start every question from first principles.

Why should I have to mention the city I live in when asking for a restaurant recommendation? Yes, I know a good answer is one that's in my city, and a bad answer is one on another continent.