My manager recently told our team that "AI usage" would be added to our engineering competencies, and we would all be expected to "use AI more."

When I said my top preference for AI usage, by far, would be to eliminate human code reviews, the response was basically, "Oh, not like that."

That's a bummer! At my company we've started investing in what I'm calling 'semantic linting': running GPT over a PR with a set of rules that we iterate on. I'm already finding huge value in style/pattern comments that linters can't easily catch, warnings about common DB migration footguns, and heads-up notes when patterns change or new ways of doing things land. It's been great so far!
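To give a sense of what a rule looks like: each one is just a short plain-language instruction the model checks the diff against. A hypothetical example (the filename and wording are illustrative, not one of our actual rules):

    # rules/db-migration-footguns.md
    When a migration adds a NOT NULL column to an existing table,
    warn if there is no default value or backfill step: on large
    tables this can lock writes or fail outright. Also flag any
    dropped column that is still referenced elsewhere in the diff.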

Do you have any write-up or more info about this? It sounds like a useful use case, but I haven't managed to get it right yet.

I do not, but it took me and a coworker all of an hour to set up. Create a CI workflow, vibe-code it to load the PR contents plus any rules you want (we maintain a directory of individual rules that people can tweak), and ship it all off to GPT for a response. Add some input/output schemas, and it plugs back into the CI hooks to report failed build steps (GitHub in my case). A rough sketch is below.
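To make that concrete, here's a minimal sketch of that CI step, assuming the OpenAI Python SDK and the GitHub CLI (gh) on the runner; the rules/ directory layout, model name, and JSON shape are all illustrative, not our exact setup:

    # semantic_lint.py -- minimal sketch, not production code
    import json
    import pathlib
    import subprocess
    import sys

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def load_rules(rules_dir: str = "rules") -> str:
        """Concatenate every rule file; keeping them as individual
        files is what lets people tweak rules independently."""
        parts = []
        for path in sorted(pathlib.Path(rules_dir).glob("*.md")):
            parts.append(f"## {path.stem}\n{path.read_text()}")
        return "\n\n".join(parts)

    def fetch_diff(pr_number: str) -> str:
        """Pull the PR diff via the GitHub CLI."""
        return subprocess.run(
            ["gh", "pr", "diff", pr_number],
            capture_output=True, text=True, check=True,
        ).stdout

    def review(pr_number: str) -> dict:
        prompt = (
            "You are a semantic linter. Check the diff against the rules.\n"
            'Respond with JSON: {"violations": [{"rule": str, '
            '"file": str, "comment": str}]}\n\n'
            f"# Rules\n{load_rules()}\n\n# Diff\n{fetch_diff(pr_number)}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # enforce JSON output
        )
        return json.loads(resp.choices[0].message.content)

    if __name__ == "__main__":
        result = review(sys.argv[1])
        print(json.dumps(result, indent=2))
        # Non-zero exit is what surfaces this as a failed build step.
        sys.exit(1 if result.get("violations") else 0)

The exit code is the whole integration story on the CI side; the JSON output schema is what keeps it pluggable into whatever hook posts the comments back to the PR.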

Don't worry: if you actually increase AI usage, your team will be forced to automate code review, either explicitly or implicitly.

Right now, 70% of AI use is only for code review and documentation purposes. I think we really need entire engineering workflows to be AI-automated, for better team insights, better team performance, sprint assessment, etc., and to track the entire engineering lifecycle.

The vision shouldn't be AI for code; it should be AI beyond code.