Author here. Yes, I think the original GitHub Copilot autocomplete UI is (ironically) a good example of a HUD! Tab autocomplete just becomes part of your mental flow.
Recent coding interfaces are all trending towards chat agents, though.
It’s interesting to consider what a “tab autocomplete” UI for coding might look like at a higher level of abstraction, letting you mold code in a direct-feeling way without being bogged down in details.
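For what it's worth, the mechanics of that HUD quality are simple to state: the suggestion is drawn where you're already looking, and accepting it is one keystroke. Here's a minimal sketch of that shape of interaction using VS Code's public inline completion API; the suggestion source (guessNextTokens) is a hypothetical placeholder, not Copilot's actual logic:

```typescript
import * as vscode from 'vscode';

// Minimal sketch of a HUD-style completion: the suggestion renders as ghost
// text at the cursor and is accepted with Tab, with no separate chat surface.
export function activate(context: vscode.ExtensionContext) {
  const provider: vscode.InlineCompletionItemProvider = {
    provideInlineCompletionItems(document, position) {
      // Look at what the user has typed so far on the current line.
      const prefix = document.lineAt(position.line).text.slice(0, position.character);
      const suggestion = guessNextTokens(prefix);
      if (!suggestion) {
        return [];
      }
      // The suggestion is drawn inline in the editor itself: "in your line
      // of vision", in the terms of this thread.
      return [
        new vscode.InlineCompletionItem(suggestion, new vscode.Range(position, position)),
      ];
    },
  };

  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider({ pattern: '**' }, provider)
  );
}

// Hypothetical stand-in for a model call; a real extension would query a
// completion model with the surrounding file context here.
function guessNextTokens(prefix: string): string | undefined {
  return prefix.trimEnd().endsWith('for') ? ' (let i = 0; i < n; i++) {' : undefined;
}
```

The whole interaction lives inside the editor surface, with no second window to switch to, which is what makes it feel like a HUD rather than a copilot.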
If that's what you think a HUD is, then a HUD is definitely way, way worse. Rather than a copilot sitting next to you, that's someone grabbing your hands and doing things with them while you're at the controls.
But if I invoke the death of the author and pretend "HUD" meant an actual heads-up display, then it's a good point: tools are things you can form a cybernetic system with, classic examples being hand tools or cars, and you can't form a cybernetic system with something trying to be an "agent". To be in a cybernetic system with something, you roughly need predictable control and fast feedback.
I take "HUD" here to just mean "in your line of vision" or "in the context of your actual task" or minimizing any context switch to another interaction (chat window).
Rather, I think most implementations of HUD-style AI interactions so far have been quite poor because the interaction model itself is perhaps still immature, and no one has quite hit the sweet spot yet (that I know of). Tab autocompletion is a simple gesture, but it trades away too much control in more complex scenarios and is too easy to trigger accidentally. Inline chat is still a context switch, and also not quite right.