Typical coding LLM issues:

Hallucinations

Context limits

Lack of test coverage and testing-based workflow

Lack of actual docs

Lack of a spec

Great README; cool emoji

    Lack of actual docs
    Lack of a spec
Well, not my LLMs at least

Sooo the LLM codes just like me?

No; it doesn't care when it gives you incomplete garbage.

You have to tell it to validate its own work by adding to, refactoring, and running the tests before it replies.

Most junior developers do care, and they would never dump partial solutions on the person who asked as though they were sufficient, the way LLMs do.

Every time, I try to remember to get `make test-coverage` working and have myself or the LLM focus on the lines that aren't covered by tests.
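As a rough illustration of that workflow, here is a minimal sketch that surfaces coverage gaps you could hand back to the model. It assumes pytest and coverage.py are installed; the `src` directory and the function name are placeholders, not anything from the thread.

```python
# Sketch of a coverage-gap report one might wire into `make test-coverage`.
# Assumes pytest and coverage.py; paths and names are illustrative only.
import json
import subprocess


def uncovered_lines(source_dir: str = "src") -> dict[str, list[int]]:
    """Run the tests under coverage and return the missing lines per file."""
    subprocess.run(["coverage", "run", "-m", "pytest", "-q"], check=True)
    subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)
    with open("coverage.json") as fh:
        report = json.load(fh)
    # coverage.py's JSON report lists "missing_lines" for each measured file.
    return {
        path: data["missing_lines"]
        for path, data in report["files"].items()
        if data["missing_lines"] and path.startswith(source_dir)
    }


if __name__ == "__main__":
    for path, lines in uncovered_lines().items():
        print(f"{path}: uncovered lines {lines}")
```

The output is just a list of file paths and line numbers, which is easy to paste into a prompt ("write tests that exercise these lines") or to review yourself.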

Junior or senior, an employee wouldn't turn in such incomplete, non-compiling assignments that percentage of the time, even given inadequate prompts as specifications.

If you're hiring someone remotely without any trust, you could absolutely get random garbage that pretends to be real work from a human, too.

A human software developer doesn't code in a void; he interacts with others.

It's the same when you have an AI coder: you interact with it. It's not fire-and-forget.

Well, that's enough for the "good-looking-documentation-is-everything" kind of teams.

I'd take tests over docs but that's a false dilemma.

What does the (Copilot) /tests command do, compared to a prompt like "Generate tests for #symbolname, run them, and modify the function under test (FUT) and re-run the tests in a loop until they pass"?
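For what it's worth, the loop that prompt describes looks roughly like the sketch below. `ask_llm` and `apply_patch` are hypothetical stand-ins for whatever model call and patch-applying mechanism you actually use; this is not Copilot's /tests implementation.

```python
# Rough sketch of "generate tests, run them, fix the function under test,
# repeat until green". The two helpers are placeholders, not a real API.
import subprocess

MAX_ITERATIONS = 5


def ask_llm(prompt: str) -> str:
    """Placeholder for whatever model call you actually use."""
    raise NotImplementedError


def apply_patch(patch: str) -> None:
    """Placeholder: write the model's proposed changes to the working tree."""
    raise NotImplementedError


def test_fix_loop(symbol: str) -> bool:
    # Ask for tests first, then iterate on the function under test.
    apply_patch(ask_llm(f"Generate pytest tests for {symbol}."))
    for _ in range(MAX_ITERATIONS):
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; stop iterating
        # Feed the failure output back and ask for a fix.
        apply_patch(ask_llm(
            f"These tests for {symbol} fail:\n{result.stdout}\n"
            "Modify the function under test so they pass."
        ))
    return False  # gave up after MAX_ITERATIONS attempts
```

The point of spelling it out is that nothing here is magic: it's a bounded retry loop around the test runner, which is exactly what you'd want the model to do before replying.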

Documentation is probably key to the Django web framework's success, for example.

Resources useful for learning to write great docs: https://news.ycombinator.com/item?id=23945815

"Ask HN: Tools to generate coverage of user documentation for code" https://news.ycombinator.com/item?id=30758645

Context limits (regardless of the advertised hard limits) are a showstopper IMO; the models completely fail assignments on codebases of roughly 30k LoC or more.

You're better off feeding them a few files to work with, in isolation, if you can.
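One naive way to do that "few files in isolation" selection, sketched below: keep only the files that mention the symbols you care about and stop at a rough token budget. The 4-characters-per-token estimate, the budget, and the example symbol names are assumptions for illustration, not recommendations.

```python
# Naive sketch of selecting a handful of relevant files to feed a model.
# Keyword matching, the token estimate, and the budget are all assumptions.
from pathlib import Path


def select_context(repo: str, keywords: list[str], token_budget: int = 20_000) -> str:
    chunks, used = [], 0
    for path in sorted(Path(repo).rglob("*.py")):
        text = path.read_text(errors="ignore")
        if not any(keyword in text for keyword in keywords):
            continue  # skip files that never mention the symbols of interest
        estimate = len(text) // 4  # very rough chars-to-tokens estimate
        if used + estimate > token_budget:
            break  # stay under the budget rather than truncating mid-file
        chunks.append(f"# File: {path}\n{text}")
        used += estimate
    return "\n\n".join(chunks)


# Example with hypothetical symbol names:
# context = select_context("myrepo", ["InvoiceSerializer", "compute_totals"])
```

Even something this crude tends to beat dumping the whole repository into the prompt once the codebase outgrows the context window.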