> No, that doesn’t make you a skeptic in this context.

That's good to hear, but I have been called an AI skeptic a lot on HN, so not everyone agrees with you!

I agree though, there's a certain class of "AI denialism" that pretends LLMs don't do anything useful, which in almost-2026 is a pretty hard position to defend.

On the other hand, ever since LLMs came on the scene, there's been a vocal group claiming that AI will become superintelligent and rapidly bring about human extinction - think the r/singularity crowd. That seems just as untenable a position at this point. It's becoming clear that these things are simply tools: useful in many cases, but that's it.

The AI doomers have actually been around since long before LLMs. Discussion of AI doom has been popular in rationalist communities for a very long time. Look up "Roko's Basilisk" for the history of one of these ideas from 15 years ago that has stuck around ever since.

It has been entertaining to watch Yudkowsky and the rationalist community spend over a decade building up these AI doom arguments, only to squander their moment in the spotlight by making extreme demands like halting all AI development and bombing data centers.

> This seems just as untenable a position to hold at this point

Calling any prediction about the future shape of a technology "untenable" is pretty silly. Unless you've popped back in a time machine to post this.

Lots of money to be made and power to be grabbed behind this safety-and-alignment moat.

> That's good to hear, but I have been called an AI skeptic a lot on hn, so not everyone agrees with you!

The context was the article quoted, not HN comments.

I've been called all sorts of things on HN and been accused of everything from being a bot to being a corporate shill. You can find people applying labels and throwing around accusations in every thread here. It doesn't mean much after a while.