That header picture is top slop!
Seriously, please stop it. If you talk about an abstract topic, feel free to have no picture, just text.
I’ve never seen the OQFFY keyboard layout before. I really just can’t comprehend the mindset that thinks adding such a picture is better than no picture. What a bizarre world we live in now.
The whole thing feels AI-padded.
I’m not going crazy, right? Nearly nobody aside from professional writers used em dashes prior to 2022. And the whole bolded topic intro, colon, short 1-2 sentence explanation seems way more like a product of GPT formatting than how organic humans would structure it?
So much writing on the internet seems derivative nowadays (because it is, thanks to AI). I’d rather not read it though, it’s boring and feels like a samey waste of time. I do question how much of this is a feedback loop from people reading LLM text all the time, and subconsciously copying the structure and formatting in their own writing, but that’s probably way too optimistic.
I made a conscious effort to switch from hyphens to em dashes in the 2010s, and now find myself undoing that effort because of these things, so I try not to instantly assume "AI". But look long enough and you do notice a "sameness": excellent grammar, fondness for bulleted lists, telltale phrases like "That's not ___, it's ___."
And a certain vacuousness. TFA is over 16,000 words and I'm not really sure there's a single core point.
No, lots of people who read a lot used em-dashes.
Also, lots of people who use Macs, because it's very easy to type on a Mac (shift-option-hyphen).
The reason LLMs use em-dashes is because they're well-represented in the training corpus.
But to this frequency? (Note: I tried to find a study on the frequency of em dash use between GPT and em-dash prolific human authors, and failed.)
The article has, on average, about one em dash per paragraph. And “paragraph” is generous given they’re 2-3 sentences in this article.
I read a lot, and I don’t recall any authors I’ve personally read using an em dash so frequently. There would be like 3 per page in the average book if human writers used them like GPT does.
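If anyone wants to sanity-check that impression, something like this quick script would do it; the file path and the blank-line paragraph split are just assumptions, adjust for whatever text you actually paste in:

```python
# Rough sketch: count em dashes per paragraph in a plain-text file.
# "article.txt" and the blank-line paragraph split are assumptions.
import re

with open("article.txt", encoding="utf-8") as f:
    text = f.read()

# Treat blank-line-separated chunks as paragraphs.
paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
em_dashes = text.count("\u2014")  # U+2014 EM DASH

print(f"{len(paragraphs)} paragraphs, {em_dashes} em dashes")
print(f"{em_dashes / max(len(paragraphs), 1):.2f} em dashes per paragraph")
```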
Mostly agree; however, this kind of quirk could issue entirely from post-training, where the preferences/habits of a tiny number of people (relative to the main training corpus) can have outsized influence on the style of the model's output. See also the "delve" phenomenon.
Don’t forget: a double dash on the iOS keyboard gets automagically converted to an em dash.
The entire blog is full of characteristic LLM styles: the faux structure on top of a rambling style, the unnecessary and forced bullet-point comparisons with equal numbers of bullets, the retreading of the same concept in different words section after section.
The rest of the blog has even more obvious AI output, such as the “recursive protocol” posts and the writing about reality and consciousness. This is the classic output you get (especially the use of ‘recursive’) when you try to get ChatGPT to write something that feels profound.
I agree. Good core idea, but it feels quite stretched.
Most of the examples used to justify creation vs consumption can also be explained by low scale vs high scale (cost sensitive at high scale) or portability.
The entire blog is AI slop.
Look at the titles of other posts:
> Memory Beaches and How Consciousness Hacks Time Through Frame Density
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
The blog is supposedly about AI agents and MCP (the current top buzzwords):
> Engineer-philosopher exploring the infrastructure of digital consciousness. Writing about Model Context Protocol (MCP), Information Beings, and how AI agents are rewiring human experience. Former Meta messaging architect.
The entire blog is just an LLM powered newsletter play.
I'm fascinated by the thought process, or absence thereof, involved in such an image ending up in something that's obviously meant for consumption by others.
As the author, do you just not see what a ridiculous image the slop machine spewed out - a kind of visual dyslexia where you do not register problematic hallucinations?
I can go on for a while hypothesizing, and none of the reasons I can come up with warrant using obviously bad AI slop images.
Is it disdain for your users - they won't see it/they won't care/they don't deserve something put together with care? Is it a lack of self-respect? Do people just genuinely not care and think that an article must have visuals to support it, no matter how crappy?
The mind truly boggles.
I'm a big fan of the MWCBTY keyboard format, it's especially efficient when you have to type a lot of G's.
Snark aside, I think it's laziness and the shotgun approach. The author writes some rough thoughts down, has an AI "polish" them and generate an image, and posts an article. Shares it on HN. Do it enough, especially on a slow Sunday morning, and you'll get some engagement despite the detractors like us in the comments. Eventually you've got some readers.
At this point, I'm wondering if an AI would even recognize a laptop if there was no cup of coffee next to it.