I sent the entire series by Aphyr [1] to some friends. Two of them, independently, responded with a variant of, "TLDR, can you give a summary?"

I chat with these friends a lot, but I rarely send articles that I suggest they read and that I think are profound, so I expected them to read it. These are smart people who have a history of reading lots of books.

They are both huge AI proponents and now use AI for nearly everything. Debates with them on various topics used to be rich; now they're shallow, and they just send me AI summaries of points they're clearly predisposed to. Their attention spans are dwindling.

[1] https://aphyr.com/data/posts/411/the-future-of-everything-is...

It's disheartening to see how shallow the engagement of some people I formerly respected has become. People I looked up to and learned from now just let ChatGPT do their thinking for them, asking for summaries of articles and topics, engaging for a minute or two at most before moving on to the next thing.

Recently, I have been taking intentional steps to avoid falling into the same tar pit. I've started corresponding over email with some of my friends, with us sending multi-page letters back and forth instead of just using chat apps. So far, it has been a wonderful breath of fresh air. Long form communication requires thought and time instead of superficial engagement, and we have had some incredibly interesting discussions that simply aren't possible over voice chat or instant messaging.

Maybe it means they were never really as smart as you thought?

Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. It's been great. There are a lot of smart people out there. I also realize that many people I saw as smart had access to more content than I did. I still appreciated their sharing; it was enlightening to me. But after 20 years, I think back and it's literally quoting things from smart YouTube videos and regurgitating the latest thought leaders.

We all do this, but like you, what's meaningful to me is the chewing, the dissection and synthesis, coming together to share different perspectives, and so on. I've had those friends too! It's just not 1:1.

You might be right but they used to read much more and our arguments used to be deeper. The changes I'm seeing in them are highly correlated to their increased use of AI.

Maybe it's that AI allows them to indulge their shallowness/laziness while giving them the impression that they're not doing that.

That's interesting; have you talked to your friends about their changes in behaviour? Is it something they've noticed themselves?

Friends don’t send friends AI summaries

Coworkers, too.

Or maybe they just don't want to read a long form analysis on something?

I also enjoy the series. But sometimes my friends send me things and I'm like, "not gonna read all of that."

Just because your friends don't want to invest the same amount of time that you want to invest in your own personal enrichment doesn't mean they're getting stupid.

MIT actually has a paper on how ChatGPT use impacted cognitive skills for essay writing.

> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

> https://arxiv.org/abs/2506.08872

> Cognitive activity scaled down in relation to external tool use. …

> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.

You might just be overhyping this blog.

I read one of his last week (I think?) and didn't like it that much. I read this one despite that, because it's quite high on HN for whatever reason.

I don't think everything is lies, and I don't like how he treats an LLM as just some bullshit machine.

It's also way too early to even understand where this is going. We as humans have never had this much compute, or used it in this particular way. It could literally be the road to a utopia or a dystopia. But it's very crazy to experience it.

His article series feels so negative and dismissive that I'm not taking anything from it.

There is so much research, money, and compute behind this AI topic right now that every week or two something relevant, better, or new comes out of it: 2D and 3D models, new LLM versions, smaller LLMs, faster inference (Nvidia's Nemotron). We don't know how this will continue.

And the weird thing is that he clearly knows plenty about LLMs, yet it feels so negative and dismissive; it's hard to put a finger on it.

The author uses a lot of words and references to draw critical conclusions, though they do disclaim that they aren't an expert.

Rather than dismissive, I see it as effort-intensive. The conclusions can be negative, but they've spawned so much discussion, which I think is great.

I wasn't even hyping it though. I shared it among friends to spark discussion. Sure, there's some hyperbole, but I found it thought provoking.

(FYI, I didn't downvote your comment)

I wouldn’t necessarily read a lengthy blog post either just because some friend recommended it to me, and conversely I wouldn’t expect a friend to necessarily read it if I was recommending it without being prompted for recommendations. There needs to be some additional incentive and/or interest.

Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.

I didn't provide much context, but 1) I've had deep conversations with these friends for years based on long articles or videos, and 2) I recommend maybe one or two long-form items per year, and they used to always review them without a "TLDR?"

So my main concern here is that my experience may be a microcosm of a broader shallowing of discussion correlated with some people's increased use of AI. That worries me.

It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that, given something of roughly equal substance as before, these friends' motivation for long-form content and discussion seems to have atrophied, perhaps largely because AI summaries now mediate so much of what they read.

Of course, correlation isn't causation. Maybe they've both just gotten older and lazier, but given their reliance on AI summaries in other recent debates, I'm worried.