Not using Hot Aisle for inference?

We're literally full. Just a few 1x GPUs available right now.

So far, I haven't been happy with any of the smaller coding models; they just don't compare to Claude/Codex.