I'm fatigued by these articles that flatly claim AI can't code. It's painting with a broad brush over widely diverse uses of AI across different stacks.
It's a horribly outdated way of thinking to expect a singular AI entity to handle every stack and every problem directed at it, because no developer is using it that way.
AI is a great tool for both coders and artists, and these outlandish attention-grabbing titles really seem to be echo chambers aimed at people who are convinced that AI isn't going to replace them. Which is true, but the opposite is also true.
A lot of the comments here follow the same pattern. I see people claiming that AI has all but taken over their work, and others claiming that it's almost useless. But usually nobody even briefly mentions what the work is (other than, presumably, something related to programming).
I imagine there's a big difference in using AI for building, say, an online forum vs. building a flight control system, both in terms of what the AI theoretically can do, and in terms of what we maybe should or should not be letting the AI do.
Yeah. I use it for analytics/dataviz work (which involves a lot of Python to run Spark jobs, gluing different APIs together to get some extra column of data, making PNG or SVG images, and building D3-based web sites in HTML and JavaScript). That all works pretty well.
I also write high-performance Go server code, where it works a lot less well. It doesn't follow my codebase-wide rules for pointer APIs or for choosing between sync mutexes and atomic operations. It (probably a slightly older version than SOTA) didn't read deep call chains accurately when refactoring. It's still worth trying, but if that were my sole work it probably wouldn't be worth it.
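To make "rules across a code base" concrete, here's a toy sketch of the kind of convention I mean. It's C rather than Go, and the names are invented, but the rule is the same: a lone counter is atomic, compound state goes behind one mutex, and nothing mixes the two.

    /* Toy C sketch of a codebase-wide concurrency rule (invented names).
       Compile with: gcc -pthread stats.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Rule 1: a standalone counter is atomic; never wrap it in a lock. */
    static atomic_long requests_served;

    /* Rule 2: fields that change together share one mutex. */
    typedef struct {
        pthread_mutex_t mu;
        long bytes_in;
        long bytes_out;
    } traffic_stats;

    static traffic_stats stats = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

    static void record_request(long in, long out) {
        atomic_fetch_add(&requests_served, 1);  /* lock-free fast path */
        pthread_mutex_lock(&stats.mu);          /* both fields move together */
        stats.bytes_in  += in;
        stats.bytes_out += out;
        pthread_mutex_unlock(&stats.mu);
    }

    int main(void) {
        record_request(512, 2048);
        printf("served=%ld\n", (long)atomic_load(&requests_served));
        return 0;
    }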
On the other hand, for personal productivity (Emacs functions and config, getting a good .mcp.json) it is also very good, and it generates code that contributes to the exponential growth of good code. (Unlike data viz, where there is a tendency to build something whose utility then declines over time.)
I can confidently state that for CRUD web apps, it's truly over, in the sense that those jobs are never going to command the wages they once did.
With the recent models it's now encroaching similarly on all fronts. I think within the next few iterations we'll see LLMs solidify themselves as a kind of meta-compiler, deployed locally even for FCS-type systems.
At the end of the day the hazards are still the same with or without AI: you need checks and balances, and you need proper vetting of code and quality. But overall it probably doesn't make sense to charge an hourly rate, because an AI would drastically cut down the hours such billing schemes depend on.
For me "replacement" is largely a 70~80% reduction in either hourly wages, job positions or both and from the job market data I see it can get there.
"crud web apps" sounds like WordPress or Django anyways I mean, it's already kind of valueless ? The true value lies in what this crud app is about, marketing, and the extra bits you can add to make it special.
Well, AI really can't code any more than a compiler can. Both require a human to write the original code, even if the machine then translates it into other code.
And until the day that humans are no longer driving the bus, that will remain the case.
You can say "generate a C program that uses GCC 128-bit floats and systematically generates all quadratic roots in order of the max size of their minimal polynomial coefficients, then sorts them and calculates the distribution of the intervals between adjacent numbers", and it just does it. That's qualitatively different from the compilers I have used. Now, I was careful to use properly technical words to pull in the worlds of numeric computation and C programming. But it still saved me a lot of time. It was even able to bolt on multithreaded parallelism to speed it up, using C facilities I had never heard of.
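To give a sense of the output, here's my own stripped-down sketch of such a program, not the code the model actually produced: it substitutes plain coefficient height for the true minimal-polynomial ordering and leaves out the multithreading. It should build with "gcc -O2 roots.c -lquadmath".

    /* Sketch: enumerate real roots of a*x^2 + b*x + c = 0 with integer
       coefficients of height <= MAX_COEFF, using GCC's __float128 via
       libquadmath, then sort the roots and print the gaps between
       neighbors. Duplicate roots across polynomials produce zero gaps;
       a real version would dedup and use true minimal polynomials. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <quadmath.h>

    #define MAX_COEFF 20          /* bound on max(|a|,|b|,|c|) */
    #define MAX_ROOTS (1 << 20)   /* generous upper bound on root count */

    static __float128 roots[MAX_ROOTS];
    static size_t nroots = 0;

    static int cmp(const void *pa, const void *pb) {
        __float128 a = *(const __float128 *)pa, b = *(const __float128 *)pb;
        return (a > b) - (a < b);
    }

    int main(void) {
        for (int h = 1; h <= MAX_COEFF; h++) {      /* by coefficient height */
            for (int a = -h; a <= h; a++)
            for (int b = -h; b <= h; b++)
            for (int c = -h; c <= h; c++) {
                if (a == 0) continue;               /* genuine quadratics only */
                int m = abs(a);
                if (abs(b) > m) m = abs(b);
                if (abs(c) > m) m = abs(c);
                if (m != h) continue;               /* only new triples at height h */
                __float128 disc = (__float128)b * b - 4.0Q * a * c;
                if (disc < 0) continue;             /* real roots only */
                __float128 s = sqrtq(disc);
                if (nroots + 2 > MAX_ROOTS) break;
                roots[nroots++] = (-b + s) / (2.0Q * a);
                roots[nroots++] = (-b - s) / (2.0Q * a);
            }
        }
        qsort(roots, nroots, sizeof roots[0], cmp);

        char buf[64];
        for (size_t i = 1; i < nroots && i <= 20; i++) {  /* first few gaps */
            quadmath_snprintf(buf, sizeof buf, "%.20Qg", roots[i] - roots[i - 1]);
            printf("%s\n", buf);
        }
        return 0;
    }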
> That's qualitatively different from the compilers I have used.
Is it? In most traditional programming languages in common use today, using decades-old compiler technology, I can say something like "x = [1,2,3]" and it will systematically generate all the code necessary to allocate memory, without my needing to be any more explicit about it. It would be fair to say AI offers an even higher level of abstraction, like how most programming languages used today are a higher-level abstraction over assembly, but fundamentally different it is not.
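The same point in C: the one-liner below makes decades-old compiler technology generate all the allocation and initialization code, and nobody calls that autonomous.

    #include <stdio.h>

    int main(void) {
        int x[] = {1, 2, 3};   /* the compiler picks the storage, sizes it,
                                  and emits the initialization; no malloc or
                                  mov was written by hand */
        printf("%d\n", x[1]);  /* prints 2 */
        return 0;
    }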
"generate a c program that uses gcc 128 bit floats and systematically generates all quadratic roots in order of the max size of their minimal polynomial coefficients, and then sort them and calculate the distribution of the intervals between adjacent numbers" is just code. You still have to write the code get AI to translate it into a lower-level abstraction. It doesn't magically go off and do its own autonomous thing.