It doesn't have to be perfect, or even good, to 'work': it just has to perform the expected function well enough to satisfy their use case, which is low-effort text generation without significant quality requirements. Therefore, it absolutely works.
In that case, nobody ever argued that LLMs aren't capable of generating low-effort, low-quality text, so I don't understand the point of the earlier comment. We don't have to "accept" this; it was never questioned.
But what does it matter? After the game of semantics is all said and done, the work is still being done to a lower standard than before, and people are letting their skills atrophy.
The comment I responded to said the article didn’t acknowledge that AI “might work.” I said the premise of the article was based on the assumption that, to some extent, AI worked. You said AI didn’t work because its output is low quality, which is not something either the original commenter or I had said anything about. I said that objective quality didn’t factor into the equation, because if it satisfied people’s use cases, by their standards, it “worked.” Then you replied saying you never claimed that AI didn’t work for low-quality outputs. Aaannd here we are.