Define "not trivial". Obviously, experience helps, as with any tool. But it's hardly rocket science.
It seems to me the biggest barrier is that the person driving the tool needs to be experienced enough to recognize and assist when it runs into issues. But that's little different from any sophisticated tool.
It seems to me a lot of the criticism comes from placing completely unrealistic expectations on an LLM. "It's not perfect, therefore it sucks."
As of about three months ago, one of the most important skills in effective LLM coding is coding agent environment design.
If you want to use a tool like Claude Code (or Gemini CLI or Cursor agent mode or Codex CLI or Qwen Code) to solve complex problems, you need to give them an environment they can operate in where they can solve that problem without causing too much damage if something goes wrong.
You need to think about sandboxing, and what tools to expose to them, and what secrets (if any) they should have access to, and how to control the risk of prompt injection if they might be exposed to potentially malicious sources of tokens.
The other week I wanted to experiment with some optimizations of configurations on my Fly.io hosted containers. I used Claude Code for this by:
- Creating a new Fly organization which I called Scratchpad
- Assigning that a spending limit (in case my coding agent went rogue or made dumb expensive mistakes)
- Creating a Fly API token that could only manipulate that organization - so I could be sure my coding agent couldn't touch any of my production deployments
- Putting together some examples of how to use the Fly CLI tool to deploy an app with a configuration change - just enough information that Claude Code could start running its own deploys
- Running Claude Code such that it had access to the relevant Fly command authenticated with my new Scratchpad API token
With all of the above in place I could run Claude in --dangerously-skip-permissions mode and know that the absolute worst that could happen is it might burn through the spending limit I had set.
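Concretely, the setup was roughly the following. This is a sketch from memory: the exact flyctl flag names may differ, and the spending limit is configured separately (not shown here).

```sh
# Create the isolated "scratchpad" organization
fly orgs create scratchpad

# Create an API token scoped to only that organization, so the agent
# cannot touch production deployments in my other orgs
fly tokens create org --org scratchpad --name claude-scratchpad

# Run Claude Code with only the scoped token in its environment;
# permission prompts are skipped because the blast radius is contained
FLY_API_TOKEN="<scratchpad token>" claude --dangerously-skip-permissions
```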
This took a while to figure out! But now... any time I want to experiment with new Fly configuration patterns I can outsource much of that work safely to Claude.
The statement I responded to was, "creating an effective workflow is not trivial".
There are plenty of useful LLM workflows that are possible to create pretty trivially.
The example you gave is hardly the first thing a beginning LLM user would need. Yes, more sophisticated uses of an advanced tool require more experience, but that's no different from any other tool. You can find similar debates about programming languages.
Again, what I said in my original comment applies: people place unrealistic expectations on LLMs.
I suspect that this is at least partly a psychological game people unconsciously play to downplay the competence of LLMs and reduce the level of threat they feel. A sort of variation on terror management theory.
Yeah, if I want to develop I need tooling around me. It's still trivial to learn. Not a difficult skill, and not a skill specific to LLMs.
Why would you need to take all of these additional sandboxing measures if you weren't using an LLM?
For one, I'd say scoped API tokens that prevent messing with resources across logical domains (e.g. prod vs non-prod, distinct GitHub repos, etc.) are best practice in general. Blowing up a resource with a broadly scoped token isn't a failure mode unique to LLMs.
edit: I don't have personal experience with spending limits, but I vaguely recall them being useful for folks at startups who want to set up AWS resources and swing for the fences without thinking too deeply about the infra. Again, this isn't a failure mode unique to LLMs, although I can appreciate it not mapping perfectly to your scenario above.
edit #2: From what I can tell, the LLM-specific context of your scenario above is: providing examples, and setting up API access somehow (e.g. maybe invoking a CLI?). The rest seems like good old software engineering to me.
I usually work with containers for repeatability and portability. Also makes the local env closer to the final prod env.
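For example, something along these lines (purely illustrative; the image, mounts, and limits are placeholders, not a prescription):

```sh
# Throwaway container for the agent: only the project directory is mounted,
# no host credentials, and CPU/memory are capped
docker run --rm -it \
  --cpus 2 --memory 4g \
  -v "$PWD:/workspace" \
  -w /workspace \
  node:20 bash
# Inside the container, install the coding agent and pass in only the
# narrowly scoped tokens you actually want it to have
```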
The situation you’re outlining is trivial though.
Yeah, there's some grunt work involved, but in terms of learned ability all of that is obvious to someone who knows only a little bit about LLMs.
We are going to have to disagree on this one.
I don't really see how it's different from how you'd set up someone really junior to have a playground of sorts.
It’s not exactly a groundbreaking line of reasoning that leads one to the conclusion of “I shouldn’t let this non-deterministic system access production servers.”
Now, setting up an LLM so that it can iterate without a human in the loop is a learned skill, but not a huge one.
I don't think anyone expects perfection. Programs crash, drives die, and computers can break at any time. But we expect our tools to be reliable, not something we have to fight with every day just to get them to work.
I don’t have to debug Emacs every day to write code. My CI workflow just runs every time a PR is created. When I type ‘make tests’, I get a report back. None of those things are perfect, but they are reliable.
If you work in a team, you work with other people, whose reliability is more akin to LLMs than to the deterministic processes you're describing.
What you're describing is a case of mismatched expectations.
Yep, but I don’t have to do their job for them. If they’re not reliable, at some point decisions will be taken to get them out of the project.