I’m still waiting for someone who claims prompting is such a skill to learn to explain, just once, a single technique that isn’t obvious. For example: storing checkpoints so you can go back to a working version (already good practice without an LLM, see: git), launching 10 tabs with slightly different prompts and choosing the best, asking the LLM to improve my prompt, or adding more context… Is that a skill? I remember when I was a child, my mom thought that programming the VCR to record the night show was such a feat…
In my experience, it's not just prompting that needs to be figured out; it's a whole new workstyle that works for you, your technologies, and even your current project. As an example, I now write almost all my code in a functional-programming style, which I rarely did before. This lets me keep my prompts and context very focused, and it essentially eliminates hallucinations.
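To make the point concrete, here's a hypothetical sketch (my own illustration, not the commenter's code) of why functional style keeps prompts small: each function is pure, so a prompt can specify it completely by its signature and one sentence, with no surrounding state to explain.

```python
# Pure functions: all inputs are parameters, no hidden state or side effects,
# so an LLM prompt can describe each one fully on its own.

def normalize(values: list[float]) -> list[float]:
    """Scale values so they sum to 1.0 (assumes a non-zero sum)."""
    total = sum(values)
    return [v / total for v in values]

def top_k(scores: dict[str, float], k: int) -> list[str]:
    """Return the k keys with the highest scores."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(normalize([1.0, 1.0, 2.0]))                 # [0.25, 0.25, 0.5]
print(top_k({"a": 0.1, "b": 0.9, "c": 0.5}, 2))   # ['b', 'c']
```

The prompt for either function is essentially its docstring; nothing else in the codebase needs to be in context.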
Also, I started in the pre-agents era, and so I ended up with a pair-programming paradigm. Now every time I conceptualize a new task in my head -- whether it's a few lines of data wrangling within a function, or generating an entire feature complete with integration tests -- I instinctively do a quick prompt-vs-manual coding evaluation and seamlessly jump to AI code generation if the prompt "feels" more promising in terms of total time and probability of correctness.
I think one of the skills is learning this kind of continuous evaluation and the judgement that goes with it.
See my comment here about designing environments for coding agents to operate in: https://news.ycombinator.com/item?id=44854680
Effective LLM usage these days is about a lot more than just the prompts.
You may not consider it a skill, but I train multiple programming agents on different production and high-quality code bases, and have all of them PR-review a change, with a report given at the end.
It helps dramatically in finding bugs and issues. Perhaps that's trivial to you, but it feels novel, as we've only had effective agents for the last couple of weeks.
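A rough sketch of that multi-agent review setup (everything here is hypothetical: the reviewers are stubs standing in for differently-trained coding agents, and the report format is invented for illustration):

```python
# Several reviewers each examine the same diff; their findings are
# merged into a single report. In practice each function would call
# a separate coding agent trained on a different code base.

def strict_reviewer(diff: str) -> list[str]:
    # Stub for an agent trained on a production code base.
    return ["missing error handling around file I/O"] if "open(" in diff else []

def style_reviewer(diff: str) -> list[str]:
    # Stub for an agent focused on code conventions.
    return ["avoid bare print in library code"] if "print(" in diff else []

def review_change(diff: str, reviewers) -> dict[str, list[str]]:
    """Run every reviewer over the diff and collect findings per reviewer."""
    return {r.__name__: r(diff) for r in reviewers}

diff = 'f = open("data.txt")\nprint(f.read())'
report = review_change(diff, [strict_reviewer, style_reviewer])
for name, findings in report.items():
    print(f"{name}: {findings or 'no issues'}")
```

The value is in the aggregation step at the end: one change, several independent perspectives, one combined report to read.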
But can you give an example? What did you do that you consider a difficult skill to learn?
Usually when you learn difficult skills, you can go to a trainer, take a class, read about the solutions.
Right now, you are entirely reliant on random, flawed information on the internet that you often can't reproduce in trials, or on your own structured ideas about how to improve things.
That is difficult. It is difficult to take the information available right now, and come up with a reasonable way to improve the performance of LLMs through your ingenuity.
At some point it will be figured out, and every corporation will be following the same ideal setup, but at the moment it is a green field opportunity for the human brain to come up with novel and interesting ideas.
Thanks. So the skill is figuring out heuristics? That isn't even related to AI or LLMs. But as I said, it's like learning how to google, which is exactly that: trial and error until you figure out what Google prefers.
I mean, it's definitely related. We have this tool that we know can perform better with the right software built around it. Building that software is challenging, as is knowing what to build and how to test it.
I believe that's difficult, and not just a matter of figuring out what Google prefers. I guess we feel differently about it.