Not who you asked, but I don't like the effect they have on people. People develop a dependence on them at the cost of their own skills. I have two problems with that. A lot of their outputs are factually incorrect, but confidently stated. They project an air of trustworthiness seemingly more effectively than a used car salesman. My other problem is farther-looking. Once everyone is sufficiently hooked, and the enshittification begins, whoever is pulling the strings on these models will be able to silently direct public sentiment from under cover. People are increasingly outsourcing their own decisions to these machines.
Exactly. People are blindly dumping everything into LLMs. A few years from now, will we have Sr or Staff engineers who can fix things themselves? What happens when Claude has an outage and there's a prod issue?!
Why? What don't you like about them?
PRs these days are all AI slop.
there is little chance of that, especially with people running them locally