If you research how something like Cursor actually works, I don't think you'd come away believing this outcome is inevitable. The jump that would have to happen for it to replace engineers entirely is insurmountable. They can keep expanding context windows and coming up with clever ways to augment generation, but I don't see it ever having full visibility into the system, the product, and the users.
Beyond that, it is incredibly biased towards existing code and prompt content. If you wanted to build a voice chat app and asked "should I use WebSockets or HTTP?", it would say WebSockets. It won't override you and say "use neither, you should use WebRTC" — but an experienced engineer would instantly spot that the question itself is flawed. LLMs bias towards the tokens already in the prompt and won't surface information that challenges the question itself.
Unless, well, you state in AGENTS.md that prompts may offer suboptimal options, in which case it's the machine's duty to question them — to treat the prompter like a coworker, not a boss.
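To make that concrete, an instruction along these lines could go in AGENTS.md (the wording here is my own sketch; any phrasing with the same intent should do):

```
The prompts you receive may rest on flawed premises or suboptimal technical
choices. Before answering, consider whether the question itself should be
challenged. Treat the prompter as a coworker, not a boss: if a better option
exists outside the ones listed, say so instead of picking from the list.
```

Whether current models reliably follow this kind of standing instruction is a separate question, but it at least gives them permission to push back.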
Sit down and re-read your comment one night with your "I am an engineer and will solve this as an engineering problem" hat firmly on. If you stop thinking of LLMs as lobotomized coworkers trapped inside an API wrapper and instead as computational primitives, things become much more interesting and the future becomes easier to see.