These personal blogs are starting to feel like LinkedIn Lunatic posts, kinda similar to the optimised floor-sweeping blog: "I am excited to provide shareholder value, at minimum wage."
What does it tell you that programmers with the credibility of antirez - and who do not have an AI product to sell you - are writing things like this even when they know a lot of people aren't going to like reading them?
What it tells me is that humans are fallible, and that being a competent programmer has no correlation with having strong mental defenses against the brainrot that typifies the modern terminally-online internet user.
I leverage LLMs where it makes sense for me to do so, but let's dispense with this FOMO silliness. People who choose not to aren't missing out on anything, any more than people who choose stock Vim over VSCode are missing out on anything.
It's not Vim vs VSCode though - the analogy might be writing in assembler vs writing in your high level language of choice.
Using AI, you're increasing the level of abstraction you can work at and reducing the amount of detail you have to worry about. You tell the AI what you want done, not how to do it, beyond providing context about the things you actually care about (as much or as little as you choose, though generally the more context, the better the odds of achieving a specific outcome).
> the analogy might be writing in assembler vs writing in your high level language of choice.
If it were deterministic, yes, but it's not. When I write in a high-level language, I never have to check the compiled code, so this comparison doesn't hold.
If we see new kinds of languages, or compile targets, that would be different.
Just because he doesn't have an AI product to sell doesn't mean he doesn't have a bias. For all we know, he's heavily invested in AI companies.
We have to abandon the appeal to authority and take the argument on its merits, which honestly, we should be doing regardless.
> We have to abandon the appeal to authority and take the argument on its merits, which honestly, we should be doing regardless.
I don't really agree. In virtually any field, when those who have achieved mastery speak, others, even other masters, tend to listen. That does not mean blindly trust them. It means adjust your priors and reevaluate your beliefs.
Software development is not special. When people like antirez (redis) and simonw (django) and DHH (rails) are speaking highly of AI, and when Linus Torvalds is saying he's using AI now, suggesting they may be on to something is not an appeal to authority. And frankly, claiming that they might be saying nice things about AI because of some financial motive is crazy.
People higher up the ladder aren't selling anything, but they also don't have to worry about losing their jobs. We're worried that execs will see the advances and quickly clear the benches. That might not come true, but every programmer claiming to have become a 10x programmer pushes us further toward that reality.
That is an argument from authority. There's a large enough segment of people who like to be confirmed in either direction. That doesn't make the argument itself correct or incorrect. Time will tell, though.
Nothing at all; it just sounds like a desperate post on LinkedIn, riding the slight glimmer of hope that it will help them land their next position.
Being famous doesn't mean that they're right about everything, e.g. Einstein and "God does not play dice with the universe".
That LLM advocates are resorting to the appeal-to-authority fallacy isn't a good look for them either.