the perimeter of tasks LLMs can handle keeps expanding at a pretty steady pace

a year ago they could easily one-shot full-stack features in my hobby next.js apps but imploded in my work codebase

as of opus 4.6 they can now one-shot full features in a complex js/go data streaming & analysis tool but implode in low-latency voice synthesis systems (...for now...)

it just depends on how you're using them (skill issues are a thing) and what you're working on