I was with you right up until the final paragraph, but this made me do a double take:
> OpenAI is too important to trust sama with.
...wat? They made a chatbot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in a realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chatbot that can do some neat tricks. It remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
Yeah, the fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity because it has 10x more parameters or trains on 10x more reddit threads.
I'm not even sure we're any closer to AGI than we were before LLMs. The field is getting more funding and research, but none of the research seems very innovative. And now it's probably much harder to get funding for anything that isn't a transformer model.
> I'm not even sure we're any closer to AGI than we were before LLMs.
I mean, this is very obviously untrue. It'd be like watching a demonstration of the Wright Flyer and saying we're no closer to space flight. Before 2022-2023, AI could barely write a coherent paragraph; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).
Just because something is overhyped doesn't mean you have to be dismissive of it.