> the leverage you get out of it is exponentially proportional to the quality of your instructions, the structure of your interactions, and the amount of attention you pay to the outputs
Couldn't say it better myself. I think many people get discouraged when they don't get good results, not realizing that good results require learning how to interact with these AI agents; it's a skill you improve by using them a lot. Also, some AI tools are simply better than others for certain use cases, so you need to find the one that works best for what you're doing.
Once it finally clicks and you realize how much value you can extract from these tools, there's no going back.
It's not about learning how to interact with AI agents. The only required skills for working with these tools are basic reading and writing skills any decent English speaker would have. Knowing how and when to provide additional context and breaking down problems into incremental steps are common workflows within teams, not something novel or unique to LLMs.
"Prompt" or "context engineering" is what grifters claim they can teach for a fee.
What does make a difference is what has been obvious since the advent of LLMs: domain experts get the most out of them. LLMs can be coaxed into generating almost any conceivable output, as long as they're prompted for it. Only experts know precisely what to ask for, what not to ask for, and whether the output aligns with their expectations. Everyone else is winging it, and their results will always be of inferior quality, unless and until these tools improve significantly.
What I find dubious is whether experts really gain much from using LLMs. They're already good at their job. How valuable is a tool that automates the mechanical parts of their work while leaving them the larger task of ensuring the output is actually correct? In the context of programming, it's like pairing up with a junior developer in the driver's seat who can type really quickly but will confidently make mistakes or blindly agree with anything you say. At a certain point it becomes less frustrating, and even faster, to type at normal human speeds using boring old tools yourself.
> It's not about learning how to interact with AI agents. The only required skills for working with these tools are basic reading and writing skills any decent English speaker would have.
This is flatly untrue, just as it would be untrue of getting the most out of people. The behavioral quirks of AI systems, and the ways of dealing with them, don't follow human psychology, so while the claim fails in the same way it would for people, the skills required are almost entirely unrelated.