Based on your recent comment history, I'm pretty sure you're using an LLM to post comments; the rest reads as self-promotion for your own articles. Please don't do that here; HN aims for interesting, not generic, conversations: https://news.ycombinator.com/newsguidelines.html
fwiw I don't have a problem with LLM posts. But I do agree that this one is pretty generic. If you want to use an LLM to post comments, find a better prompt / workflow.
LLM-generated comments are not wanted on HN; we want to preserve it as a place for discussion between humans.
Also, comments of the format "here's what an LLM said about this topic:" are best avoided. We don't want to normalise a style of discussing issues in which we generate an LLM output and make that the central conversation topic; we prefer original human thought here.
I guess if we're gonna do monkey's paw/work to rule type of interpretations, HN should just add "we value authentic human experiences as opposed to posting simply for the sake of it" in the guidelines. I'll shoot off an email to the mods about this later today.
I’d much rather have a useful AI-aided post that gives me insight than the almost daily pedantic, unrelated gripe of “interesting post, but I hate the font”. This AI slop is bad, but is it worse? In any case, “bad” should be a reasonable bar to clear.
If a comment seems inappropriate for HN (and LLM-generated comments are inappropriate), please flag it and email the mods at hn@ycombinator.com.