A casualty of how underbaked data labelling and training were, and in many ways still are. The blind spots are glaring once you're looking for them, but the reduced overhead of LoRA training means we can now supplement a good base model locally, on commodity hardware, in a matter of hours.
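
As a rough illustration of how lightweight that supplementation is, here's a minimal sketch of attaching a LoRA adapter to a Stable Diffusion UNet with Hugging Face's peft and diffusers libraries. The model ID, hyperparameters, and training step are placeholder assumptions, not a specific recipe:

```python
# Minimal LoRA fine-tuning sketch. Model ID, rank, learning rate, and the
# training step are illustrative placeholders; a real run needs a captioned
# image dataset and a proper dataloader.
import torch
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig, get_peft_model

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder base model
pipe = StableDiffusionPipeline.from_pretrained(model_id)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the low-rank adapter weights train; the base UNet stays frozen,
# which is what keeps memory use within commodity-GPU range.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.0,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(pipe.unet, lora_config)
unet.print_trainable_parameters()  # typically well under 1% of the UNet

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-4)

def train_step(latents, text_embeddings):
    """One denoising-objective step; latents/embeddings assumed precomputed."""
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],)
    )
    noisy = noise_scheduler.add_noise(latents, noise, timesteps)
    pred = unet(noisy, timesteps, encoder_hidden_states=text_embeddings).sample
    loss = torch.nn.functional.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Since only the adapter weights receive gradients, the optimizer state stays tiny too, which is the other half of why this fits in a few hours on a single consumer GPU.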

Also, there's a lot of "samehand" and hand hiding in BFL and other models. Part of the reason I don't use any MaaS is how hard they focus on manufacturing superficial impressions over improving fundamental understanding and direction following. Kontext is a nice departure, but similar results were already achievable through captioning and model merges.
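
For context on that last point, a model merge in this sense is often nothing fancier than a weighted average of two checkpoints' weights. A minimal sketch, with placeholder file paths and an arbitrary blend ratio:

```python
# Naive checkpoint merge: linear interpolation of matching tensors.
# Paths and alpha are placeholders, not a specific community recipe.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Blend two state dicts as (1 - alpha) * A + alpha * B per tensor."""
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[key] = torch.lerp(tensor_a.float(), tensor_b.float(), alpha)
        else:
            merged[key] = tensor_a  # keep model A's weights where B has no match
    return merged

sd_a = torch.load("model_a.pt", map_location="cpu")  # placeholder checkpoints
sd_b = torch.load("model_b.pt", map_location="cpu")
torch.save(merge_state_dicts(sd_a, sd_b, alpha=0.3), "merged.pt")
```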