Because usually the people who lose their jobs are people who do not adapt to the market.
Right now it's not clear in which direction everything is evolving, and that's why people experiment with handing all their data to random agents, figuring out how to store and access context, reusing prompts, and other attempts to harness this tech. Most of these will probably be useless in a year, as they might be deeply integrated into the next wave of models, but staying on top of developments has always been part of the fun of working in this field.
People are building bots to do the most legible thing possible, which is shipping features in X amount of time. But that doesn't matter if the bottleneck is the human thinking time required to produce quality code, rather than the amount of code written.
I am so much faster with the bots. If you're not faster with the bots, then either you write very, very little code, or you're doing it very wrong. Tactically they outsmart me 10-100x if you account for writing speed. Even if you only consider knowledge of languages, libraries, and patterns, they clearly outperform me. Strategically I do not trust them at all; the poor things suck at it, mainly because they always take the shortest possible path to the current destination.
And if you think that your personal protest against automation will in any way affect the direction the industry takes, then you're delusional. You would have to start something like a political party and rally far more people.
Wake me up when LLMs help me write better code and understand the codebase, and not before. Not faster, not more productive: a more comprehensible codebase that I can reason about in my own head.
Otherwise, if they write so much better code, then it's pointless to have a human in the loop.