Humans are fallible in a way computers are not. One could argue any creative process is an exercise in fallibility.
More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer make such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.
Hang on, hasn't everyone spent the past few years complaining about LLMs and diffusion models being very fallible?
And yet we can get LLMs to do better just by prompting them to "think step by step", or by replacing their first ten attempts to emit a "stop" token with the token for "Wait…"?
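For concreteness, here is a minimal sketch of that second trick (often called budget forcing), assuming a Hugging Face causal LM: during greedy decoding, the first few times the model tries to emit its end-of-sequence token, we throw that attempt away and splice in the tokens for "Wait, " instead, forcing it to keep reasoning. The model name, the cap of ten overrides, and the exact "Wait, " string are illustrative choices, not anyone's canonical recipe.

```python
# Sketch of "budget forcing": override the model's first few attempts to stop
# with the tokens for "Wait, ", so it keeps generating reasoning text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def generate_with_wait(prompt: str, max_new_tokens: int = 512, max_waits: int = 10) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    wait_ids = tok("Wait, ", add_special_tokens=False, return_tensors="pt").input_ids
    waits_used = 0
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]          # next-token distribution
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy pick
        if next_id.item() == tok.eos_token_id and waits_used < max_waits:
            # The model wanted to stop; discard that and append "Wait, " instead.
            ids = torch.cat([ids, wait_ids], dim=-1)
            waits_used += 1
            continue
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:        # let it stop after the budget
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```

Nothing about the model changes; the intervention lives entirely in the decoding loop, which is part of what makes the resulting improvement feel so odd.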