AI code assistants have some peculiar problems. They often fall into loops and errors of perception. They can't reason well about high-level architecture. They will often flip-flop between two possible ways of doing things. It's possible that good coding rules might help, but I expect they will still produce weird rabbit-hole errors.

That being said, they can write thousands of lines of code an hour and can probably do things that would be impossible for a human. (Imagine, as one example, having the LLM skip source code entirely and spit out compiled binaries.)