Am I the only one who feels the comments here don't sound organic at all?

No, I felt the same way; they're exactly like the usual LLM bot comment, where an LLM recaps the OP and ends with a platitude or witty encouragement.

But all the accounts are old/legit, so I think you and I have just become paranoid...

I think part of it is that some of us are not used to reading (or writing) praise, apart from the over-praising typical of LLMs, so seeing praise in the wild makes us think of LLMs. That habituation to a lack of praise is partly reinforced by certain tech cultures (e.g. keeping ego out of code review: not wasting time on personal praise or criticism, keeping remarks focused on the technical, and keeping in mind the distinction between technical and non-technical critique).

I have become oversensitive to this, and my brain is probably generating a lot of false positives. I don't think that's necessarily the case here, but I've wondered whether people who use LLMs a lot pick up some of their idiosyncrasies and, in a way, start to sound a bit like one. A strange side effect is that I've come to appreciate text with grammatical errors, videos where people don't enunciate well, etc., because it's a sign that the content is human-made.

[flagged]

[deleted]

When you use LLMs all day, their writing style rubs off on you. From wording to structure.

It's like when you interact with any other piece of language oriented media.

I think it's more people being fascinated by this curious architectural detail. I imagine it's fascinating to people who are not exposed to the intricate details of computer architecture, which I assume is the vast majority here. It's a glimpse into a very odd world (which is your day-to-day work in the HFT field, but they rarely talk about this, and much less in such big words).

TBH, I didn't watch the video because the title is too click-baity for me and it's too long. Instead, I looked at the benchmark results on the GitHub page and sure, it's fascinating how you can significantly(!) tighten the latency distribution just by using 10× more CPU cores/RAM/etc. A classic case of a bad trade-off.

And nobody has talked about what we usually use RAM for: not only storing static data, but also updating it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others pointed out, you can turn to other means of computation, such as FPGAs.

So I love this idea, I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.

[deleted]

You're absolutely right

You're absolutely right to call this out. No humans, no emotion, no real comments - just LLM slop.

In all seriousness, agreed. The top comment at the time of writing reads like a poor summarizing LLM treating everything as the best thing since sliced bread. The end result is interesting, but neither this nor Google invented the technique of trying multiple things at once, as the comment implies.

I don’t see anything unusual

No, something is funny here. In the previous submission (https://news.ycombinator.com/item?id=47680023) the only (competent) critical comment (by jeffbee) was downvoted into oblivion/flagged.

Well, he veered off the technical and into the personal, so I'm not surprised it's dead. But yeah, something feels weird about this comment section as a whole, and I can't quite put my finger on it.

I think, rather than AI, it reminds me of when (long before AI) a few colleagues would converge on an article to post supportive comments in what felt like an attempt to manipulate the narrative. Even at concentrations I find surprisingly low, it would often skew my impression of the tone of the entire comment section in a strange way. I guess you could more generally describe the phenomenon as fan-club comments.

It is one of the few instances where the Reddit discussion seems more normal/in-depth. See the longer comments here:

https://www.reddit.com/r/programming/comments/1sgtkdf/tailsl...

There are a few glazing comments there too though.

> Well he veered off of the technical and into the personal so I'm not surprised it's dead.

I don't know what he posted, but it is easy to see how a small fan group could form around Laurie.

She is an attractive girl who isn't afraid to be cute (which is done so seldom by women in tech that I found a Reddit thread trying to work out whether she is trans; I'm not posting that to raise the question, just to show that she piques people's interest), plus the impressively high effort put into niche topics, PLUS the impressively high production value presenting all of it.

[deleted]

it was flagged because it was unnecessarily rude. nothing "funny" going on (with that comment chain, at least).

i would note that it also appears to be wrong, reading laurie's reply, though i am not an expert. rude + wrong is a bad combo.

the next comment by jeffbee is also quite rude, and ignores most of laurie's reply in favor of insulting her instead. i don't think it is a mystery why jeffbee's comments were flagged...

Thank you, I was picking up on that too. Maybe she has fans here or something, but the vibe is off.