> FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.

I mean, that's called "having an opinion".

He co-authored a report, which is something more than an opinion. It may be used to inspire policy. There should be greater reputational consequences for publishing something you spent months studying and writing alongside several experts. Just my opinion.

I don't understand what you're trying to imply here. Yes, he co-authored a report. What is supposed to be dangerous or suspicious about that? And what does your statement about "reputational consequences" have to do with your original comment, which implies that this somehow indicates a bias on his part?

It seems to me like you're trying to imply that writing things to convince people of what you believe is somehow nefarious? It isn't! It's what we're all doing here right now! Putting it in a format that certain people will take more seriously doesn't make it nefarious either. I am quite confused by your point of view here.

There was no implication of anything you're suggesting. It's a question of correctness (bias vs. facts, predicting the sun will rise vs. predicting the end of the world), of whether you think being correct matters to one's reputation, and of how correctness should be weighed if it does matter (a one-off comment vs. a full report).

Not interested in further arguments about this.

And now he's publishing more information about that same opinion he still has. How horrible.

He wrote articles arguing that pro-AI people are dismissive of risks, or even suggesting they are intellectually lazy. He's taken a side. If he's wrong, I would hope he owns up to it.

> He's taken a side.

Yes, that's called "having an opinion". Typically people writing argumentative pieces are doing so because they have a belief about the matter. I'm not sure what exactly you expect here.

> if he's wrong I would hope he owns up to it

I think Scott Alexander is pretty good about that.

> He wrote articles arguing that pro-AI people are dismissive of risks or even suggesting they are intellectually lazy

I mean... this is 2026, right? You're not writing that comment from 2024 or something?

We already see massive problems where photos are simply not believable anymore, nor is audio, nor even video, with many people falling for AI-faked clips from the Gaza war, for example. And since then these tools have become MASSIVELY more powerful. Disinformation is essentially free, while the cost of truth has stayed static. Meaning the "buying power" of truth has collapsed and is falling faster and faster.

Anyone who dismissed AI risks a few years ago HAS ALREADY BEEN PROVEN WRONG.