The irony is, AI coding only works if you first put a lot of work into engineering, like building a factory.

There is a lot of work that happens before you even reach the point of writing code.

For example, being able to vibe-code a UI wireframe instead of being blocked for two sprints by your UI/UX team, or templating an alpha to gauge customer interest in one week instead of one quarter, is a massive operational improvement.

Of course these aren't completed products, but in most cases customers can accept that level of polish in the short-to-medium term, or as part of an alpha.

This is why I keep repeating ad nauseam that most decision-makers don't expect AI to replace jobs. The reality is, professional software engineering is about translating business requirements into tangible products.

It's not the codebase that matters in most cases; it's the requirements and outcomes that do. You can refactor and prettify your codebase all you want, but if it isn't directly driving customer revenue or value, that time could be better spent elsewhere. Customers purchase your product for the use case it enables.

  > The reality is, professional software engineering is about translating business requirements into tangible products.
And most requirements (in my experience, anyway) are half-baked and incomplete, causing re-testing and re-work over and over. That churn is the real bottleneck...

AI/vibe coding may make that cycle faster, but it might actually make things worse long-term: now the race course has rubber walls, and there's less penalty for bouncing left and right instead of smoothly speeding down the course to the next destination...

> most requirements (in my experience, anyway) are half-baked and incomplete, causing re-testing and re-work over and over. That churn is the real bottleneck...

> AI/vibe coding may make that cycle faster, but it might actually make things worse long-term

By making the cycle faster, it reduces the impact while also highlighting issues within the process: there are too many incompetent PMs and SWEs.

Additionally, in a lot of cases a PM won't tell you that you're actually doing checkbox work: something someone needs done, but that doesn't justify an entire group of 2-3 SWEs.

A good litmus test for this is how directly the feature you're working on is aligned with revenue generation: if your feature cannot be monetized as its own SKU or as part of a bundle, you are working on a cost center.

The reality is that perfect is the enemy of good, and avoiding that trap requires engineers and PMs working together to negotiate requirements.

If this does not happen at your workplace, you are either working on a cost-center feature that doesn't matter or you are working for a bad employer. Either way, it is best for your career to leave.

In my experience, if you've actually chatted with executive leadership teams at most F500s, when they think about "AI safety" they are actually thinking about standard cybersecurity guardrails (zero-trust, identity, authn/z, and API security) with an added layer of SLAs around deterministic output.

But by being able to constantly iterate and experiment, companies can release features and products faster with better margins: getting a V1 out the door in one sprint and spending the rest of the quarter adding guardrails is significantly cheaper than spending a quarter building V2 and then spending one more quarter building the same guardrails anyhow.

Basically, we're returning to the norms the software industry had pre-COVID: building for pragmatism instead of perfection. I saw a severe degradation in the quality of SWEs during and after COVID (too many code monkeys, not enough engineers/architects).

As a researcher in formal methods, I totally get you.