Even if you say AI programming comes down to "knowing what to prompt," that still rests on:

(1) understanding software engineering (for one thing, knowing whether the answers make sense)

(2) subject-matter expertise and the ability to communicate with SMEs (or to fake being an SME by reading books; see the old "knowledge engineer" role from the 1980s).

(3) knowing the specifics of AI coding tools.

I think (1) and (2) are 80-90% of what leads to long-term success. My guess is the models will keep getting better, so (3) skills have a short half-life and will matter less, while (1) and (2) will stay the same.

Maybe I'm cynical, but if I were designing screeners for this I would ask people things like:

"How many accounts do you follow on X about AI?", where the right answer is "I don't have an X account" and the higher the count, the worse.

"What percent of your programming time do you spend thinking about AI programming tools?", where anything over 20% is suspect (unless it's a tooling job or something similar, in which case I'd drop the question).
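The two screeners above amount to a simple scoring rule. A toy sketch of how they might be applied, purely illustrative (the function name, field names, and the 20% threshold's handling are my own assumptions, not anything specified in the thread):

```python
from typing import List, Optional

def screen_candidate(x_accounts_followed: Optional[int],
                     pct_time_on_ai_tools: float,
                     is_tooling_job: bool = False) -> List[str]:
    """Return a list of red flags; an empty list means the candidate passes.

    Toy illustration of the two screener heuristics above; all names and
    thresholds are assumptions for the sketch.
    """
    flags = []
    # Screener 1: the "right" answer is no X account at all (None here);
    # the higher the follow count, the worse.
    if x_accounts_followed is not None and x_accounts_followed > 0:
        flags.append(f"follows {x_accounts_followed} AI accounts on X")
    # Screener 2: over 20% of programming time spent thinking about AI
    # tooling is suspect, unless the job itself is a tooling job.
    if not is_tooling_job and pct_time_on_ai_tools > 20:
        flags.append(f"{pct_time_on_ai_tools:.0f}% of time on AI tooling")
    return flags

print(screen_candidate(None, 10))  # no flags: passes
print(screen_candidate(50, 35))    # two flags
```

The point of writing it this way is that the rules are negative filters: they never qualify anyone, they only flag.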

That is, I want to see that somebody has used AI tools to deliver something 100% done, end-to-end, that worked, and I'd like to see them spending 80% of their time doing exactly that.

I'd also be thinking about screeners designed to detect FOMO attitudes and reject people for them.

What is an example of that 100% end-to-end delivery that worked?

It could be as simple as “completed some tickets with quality code that I understood, that passed rigorous review, and that didn’t add technical debt.”

The trouble with AI slop is that the people who make it don’t know it is AI slop, not that it was AI-generated.

So if I use only Claude Code for everything I do, what would that mean to you?

I’d want to look at the output.