You might just overhype this blog.
I read one of his posts last week and didn't like it that much. I read this one anyway because it's quite high on HN, for whatever reason.
I don't think everything is lies, and I don't like how he treats an LLM as just some bullshit machine.
It's also way too early to even understand where this is going. We as humans have never had this much compute, or used it in this particular way. It could literally be the road to a utopia or a dystopia. But it's crazy to experience it.
His article series feels so negative and dismissive that I'm not taking anything from it.
There is so much research, money, and compute behind AI right now that something relevant, better, or new comes out every week or two: 2D and 3D models, new LLM versions, smaller LLMs, faster inference (Nvidia's Nemotron). We don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs, yet it all feels so negative and dismissive; it's hard to put a finger on it.
The author uses a lot of words and references to draw critical conclusions, which they do disclaim aren't expert opinion.
Rather than dismissive, I see it as effort-intensive. The conclusions can be negative, but they've spawned so much discussion, which I think is great.
I wasn't even hyping it, though. I shared it among friends to spark discussion. Sure, there's some hyperbole, but I found it thought-provoking.
(FYI, I didn't downvote your comment)
I wouldn’t necessarily read a lengthy blog post either just because some friend recommended it to me, and conversely I wouldn’t expect a friend to necessarily read it if I was recommending it without being prompted for recommendations. There needs to be some additional incentive and/or interest.
Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.
I didn't provide much context, but 1) I've had deep conversations with these friends for years based on long articles or videos, and 2) I recommend maybe one or two long-form items per year, and they used to always review them without asking, "TLDR?"
So my main concern here is that my experience may be a microcosm of a shallowing of discussion that correlates with some people's increased use of AI. That worries me.
It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that, given something of roughly equal substance as before, these friends' motivation for long-form content and discussion seems to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
Of course, correlation isn't causation. Maybe they've both just gotten older and lazier, but given their reliance on AI summaries in other recent debates, I'm worried.