>Should you gamble your career? [...] Consider the case where [you let your skills stagnate and AI falls flat].
Sure. But the converse is true as well: consider the case where you don't learn the AI tooling and AI does improve apace.
That is also gambling your career. Are you ready for the pointed questions about why you spent two days on something that AI can do in 15 minutes? Be prepared with some answers for that.
"Learn AI tooling"
What is there to learn, honestly? People act like it's learning to write a Linux driver.
At most you need to know how to write a plan or a text file. Maybe throw in a "Plz no mistakes."
There's no specific model, a better one comes out every month, everything is stochastic.
>What is there to learn, honestly?
With all due respect, that answer shows that you don't know enough about agentic coding to form an opinion on this.
Things to learn:
Those are a few off the top of my head. "Plz no mistakes" is not even a thing.

I can bet that a single standard instance of an existing tool like Codex or Claude Code can do whatever someone with a convoluted setup like that can. It might be marginally slower, but it's all literally just English-language text files.
I use Codex almost every day, and none of that is necessary unless you're trying to pad out your resume.
It's microservices all over again: a concept useful for a few very select organisations, one that should have been applied carefully, turned into a fad every engineer had to shoehorn into their stack.
>I can bet that a single standard instance of existing tool like codex and Claude Code
This is a perfect example of what I'm saying. You'd bet that because you don't have enough experience with the tooling to know when you need more than a "standard instance of an existing tool".
Here's a real-world case: take some 20-year-old Python code and have it convert "%-format" strings to f-strings. Give that problem to a generic Claude Code setup and it will produce subtle bugs. Now set up a "skill" that knows how to test each resulting f-string against the original %-format, and it will detect and correct the errors it introduces, automatically. And it can do that without inflating the main context.
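To make the f-string case concrete, here's a minimal sketch of the kind of check such a "skill" might run. The function name and the naive-conversion example are my own illustration, not the actual skill: it just replays sample values through both the original %-format template and the converted code and reports the first disagreement.

```python
def check_conversion(percent_fmt, fstring_fn, samples):
    """Return the first sample where the converted f-string disagrees
    with the original %-format template, or None if all match."""
    for value in samples:
        expected = percent_fmt % value   # old behavior
        actual = fstring_fn(value)       # converted code's behavior
        if expected != actual:
            return value, expected, actual
    return None

# One subtle bug this catches: "%d" truncates floats via int(),
# but a careless f-string drop-in just calls str() on the value.
naive = lambda v: f"{v}"
mismatch = check_conversion("%d", naive, [3, 3.7])
# mismatch == (3.7, '3', '3.7') -- the conversion silently changed output
```

On integers the two agree, so the bug only shows up on the float sample, which is exactly why "it looks fine in a quick review" conversions are dangerous.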
Many of the items I mention are, at their core, about managing context. If you find Claude Code ends up "off in the weeds", it's often because you're not managing the context window correctly.
Just knowing when to clear context, when to compact context, and when to preserve context is a core component of successfully using the AI tooling.
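For concreteness, in Claude Code those three moves map onto a couple of slash commands (current CLI; this changes often, so treat it as a sketch):

```
/clear      # drop the conversation context entirely (fresh start)
/compact    # summarize the context so far, freeing window space
```

Knowing which one to reach for at a given point in a long session is the skill being described.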
I agree completely.
To me it seems you and many others are lost in the weeds of constantly evolving tooling and strategies.
A pretty basic Claude Code or Codex setup and being mindful of context handling goes a long way. Definitely long enough to be able to use AI productively while not spending much time on configuring the setup.
Staying on top of all the details is not just unnecessary, it's counterproductive, trust me.
I don't need to trust you, I've done my own testing and using newer tooling features is dramatically better than not. One of the things about the AI tooling is that it's very inexpensive to run experiments (this week I've had it build a particular tool in Python, Go, Rust, and Zig, for example).
Using skills, multiple models, MCPs and agent teams is significantly improving the results I'm seeing in real world problems.
You haven't really given me any reason why I should trust you, but I'll tell you it's going to be hard for me to trust advice that contradicts my test results.