ChatGPT has been nerfed in several ways; at least one of them looks like a ham-fisted attempt to save on inference costs.
I pasted a C enum into it and asked for it to be translated into Julia, which is the sort of boring-but-useful task that LLMs are well suited for. It produced the first seven or so values of the enum and then appended a comment along the lines of "the rest of the enum goes here".
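For context, the task was the kind of mechanical translation sketched below. The enum here is hypothetical (the real one was much longer); the point is that the mapping is entirely rote and there's no excuse for stopping partway:

```julia
# Hypothetical C original (the actual enum had far more members):
#   typedef enum {
#       LOG_DEBUG = 0,
#       LOG_INFO,
#       LOG_WARN,
#       LOG_ERROR
#   } log_level_t;

# Idiomatic Julia translation using the built-in @enum macro,
# keeping the explicit integer values from the C source:
@enum LogLevel begin
    LOG_DEBUG = 0
    LOG_INFO  = 1
    LOG_WARN  = 2
    LOG_ERROR = 3
end
```

Each C member maps one-to-one to a Julia enum member, so a model that emits the first few lines has already demonstrated it understands the pattern; truncating the rest saves it tokens, not effort.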
I cajoled it into finishing the job, and then spent some time on a custom prompt that has mostly prevented this kind of laziness since. I'm rather annoyed that I had to do so in the first place, though.