For a hot second I thought LLMs were coming for our jobs. Then I realized they were just as likely to end up creating mountains of things for us to fix later. As things have settled down, I've found good use cases for Claude Code that augment me but are in no danger of replacing me. It certainly has its moments.
Finally, an opinion on here that’s reasonable and isn’t “AI is perfect” or “AI is useless”.
One of the things that has struck me as odd is just how little self-awareness devs have when talking about "skin in the game" with regard to CEOs hawking AI products.
Like, we have just as much to lose as they have to gain. Of course a part of us doesn't want these tools to be as good as some people say they are because it directly affects our future and livelihood.
No, they can't do everything. Yes, they can do some things. It's that simple.
Indeed. You’d think people with a tech background would be less likely to be so irrationally emotional about these things, but obviously not. Fear and devotion, it turns out, are both incredibly blinding motivators.
> doesn’t want these tools to be as good as some people say they are
No, it's because tools that good would mean AGI. And it’s obviously not that.
And they’re similarly not as useless as others say they are, as they draw crosses in the air and hiss at the heathen tech to go away.