A refusal to even acknowledge that AI might work isn't a very sensible refutation of the risks we're going to face.

> A refusal to even acknowledge that AI might work isn't a very sensible refutation of the risks we're going to face.

That's probably why they're not doing that. The core premise, that we will rely on AI so much that we will de-skill ourselves, requires acknowledging that AI works.

> The core premise, that we will rely on AI so much that we will de-skill ourselves, requires acknowledging that AI works.

No, it doesn't require that, because the vast majority of people aren't rational actors. They don't optimize for the quality of their work; they optimize for their own comfort and emotional experience.

They'll happily produce and defend low-quality work if it lets them avoid the discomfort of cognitively strenuous work, in the same way people rationalize every other choice they make that's bad for themselves, society, the environment, and anyone else.

> No, it doesn't require that, because the vast majority of people aren't rational actors. They don't optimize for the quality of their work; they optimize for their own comfort and emotional experience.

People are rational, and the example you give actually shows that: they prefer a reduced workload, so they optimize for their own comfort and emotional experience. What isn't rational about that?

> What isn't rational about that?

If people made decisions the way you described, by carefully considering and accepting trade-offs, then I would agree that they are rational actors.

But people don't do that; they pick an outcome they prefer and try to rationalize it afterwards by claiming that trade-offs don't exist.

It doesn't have to be perfect, or even good, to 'work': it just has to perform the expected function well enough to satisfy their use case, which is low-effort text generation without significant quality requirements. Therefore, it absolutely works.

In that case, nobody ever argued that LLMs aren't capable of generating low-effort, low-quality text, so I don't understand the point of the earlier comment. We don't have to "accept" this, since it was never questioned.

But what does it matter? Once the game of semantics is said and done, the work is still being done to a lower standard than before, and people are letting their skills atrophy.

The comment I responded to said the article didn’t acknowledge that AI “might work.” I said the premise of the article was based on the assumption that, to some extent, AI worked. You said AI didn’t work because its output is low quality, which is not something either the original commenter or I said anything about. I said that objective quality didn’t factor into the equation, because if it satisfied people’s use cases, by their standards, it “worked.” Then you replied saying you never claimed that AI didn’t work for low-quality outputs. Aaannd here we are.

The intro to the article explicitly states that it will put aside any fanciful ideas that the technology might be better in the future; that "rather than try to forecast the future as it might turn out, I’d prefer to describe reality as it already exists".

The article acknowledges that AI progress to date has worked. It snidely dismisses, without argument, the possibility that AI as a field could keep working from here on out.

Saying it won’t improve isn’t saying that it won’t work. My car isn’t going to improve in the future, but it will still work. They’re just two different things.
