> Requirements documents that were once a page are now twelve.
man, I see this on Jira all the time: a PM or BA is like "yeah, I'll write that AC for you" and out comes a giant bullet list filled with a bunch of emojis and checkmarks
Does anyone know where that style came from? Did it become popular in listicles or on github or something? Or is there one person deep inside OpenAI or Anthropic who built the synthetic data pipeline and one day made the decision on a whim to doom us to an eternity of emoji bullet points?
I think it likely performed well in A/B preference tests with chat users.
I've noticed Claude does far fewer listicles than ChatGPT. I suspect Anthropic doesn't blindly follow supervised learning feedback from chats as much as OpenAI does. I get an Apple-vs-Google design split from those two companies: Apple tends not to obsess over interaction data, relying on design principles instead, while Google just tests everything and has very little "taste."
In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.
I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.
I’m given to understand that Anthropic uses something called Constitutional AI, where there is a central document of desirable and undesirable qualities (as well as reinforcement learning) whereas OpenAI relies more heavily on direct human feedback and rating with human trainers evaluating responses and the model conforming to those preferences.
I also much prefer the output of Claude at present.
Yeah, and much of the HN crowd aspires to have better taste than the average. So if the supervised learning uses average human trainers, the result will most likely be seen as having poor taste by much of HN.
Speak for yourself my taste is average and I aspire for it to remain so.
I aspire to improve the average. Which I can do either by being much better than average, or by improving everyone else just a little.
Eh, Facebook today is farther from what anybody "wants" than macOS 26, and Facebook is about as blindly data-driven as they come.
Turns out you can get away with a lot when you have a quasi-monopoly on an addictive product, and you buy out your realistic competitors...
I think the “taste” approach at Apple died with Steve Jobs.
There was a time when Claude, too, would absolutely fill code with emojis, which is why their system prompt now has
> Claude does not use emojis unless the person in the conversation asks it to
I think it's funny how we are all tweaking LLM output by adding instructional tokens instead of, say, finding a vector that indicates "user asked for emojis", and forbidding emoji tokens in the sampling unless that vector passes a threshold.
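A toy sketch of what that could look like, with all names and numbers hypothetical (real steering vectors live in a model's hidden activations; this just illustrates the gating idea with plain lists):

```python
import math

# Hypothetical sketch: instead of instructing the model via prompt tokens,
# project the hidden state onto a learned "user asked for emojis" direction
# and hard-mask emoji tokens at sampling time unless the signal clears a
# threshold. Token set, vectors, and threshold are all made up here.
EMOJI_TOKENS = {"🚀", "✅", "🔥"}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mask_emoji_logits(logits, hidden_state, emoji_direction, threshold=0.5):
    """Return a copy of `logits` (token -> raw score) with emoji tokens
    set to -inf unless the hidden state's projection onto the
    'user asked for emojis' direction exceeds the threshold."""
    if dot(hidden_state, emoji_direction) > threshold:
        return dict(logits)  # user asked for emojis: leave logits alone
    return {tok: (-math.inf if tok in EMOJI_TOKENS else score)
            for tok, score in logits.items()}

# Hidden state barely aligned with the emoji direction -> emojis forbidden,
# even though "✅" had the highest raw logit.
logits = {"Done": 2.0, "✅": 3.0, ".": 1.0}
masked = mask_emoji_logits(logits, hidden_state=[0.1, 0.0],
                           emoji_direction=[1.0, 0.0])
print(max(masked, key=masked.get))  # -> Done
```

In practice this is roughly what logit-bias / logits-processor hooks in inference APIs let you do; the hard part is finding a reliable "user asked for emojis" direction in the first place.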
I first noticed it when Notion became popular.
All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.
This slightly pre-dated AI tools becoming entirely usable for me.
Was going to say the same
Notion-core
Insert the grug IQ curve meme here. Some folks really like to hyper-optimize on tooling side quests.
It's the style of "blazing fast library made with :heart: in rust :crab:" that was popular in github README.md. My guess is that because the models are told to use md they overfit to the style of md documents too.
First saw it in overly peppy Rails libraries and using gitmoji more than 10 years ago.
Imagine how much work that all took... carefully colourizing your CLI.... and now it just gets spat out
Both predate common use of LLMs, unless my memory is even more shaky than usual on this. I'm sure I saw them appear a fair amount on GitHub and related project pages, but I couldn't tell you more specifically how they started & grew.
Somehow they must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because I don't remember them being that common and LLMs seem to love spewing them out. Or perhaps it is a sign of the Habsburg problem: people asked LLMs to produce README files like that because they'd seen the style elsewhere, it having spread more organically at first, and the timing was just right for lots of those early examples to get fed back into training data for subsequent models.
It was an annoying way of writing on places like LinkedIn and marketing copy for 3 or 4 years before LLMs appeared on the scene. I remember realising that I can't read them (my brain jumps between the words and the picture making it hard to focus on the content) before AI appeared.
You're not supposed to read the Jira ticket. You're supposed to paste the link along with instructions for your Claude agent to "do this ticket, no mistakes," then raise an MR for whatever it writes. The text is a wire protocol between agents. If a PM doesn't care enough about the requirements to write them, or even read them, would they even notice whether the code works? Why would they care about that? What does "works" even mean if no human knows the spec?
How quickly we become reverse centaurs.
> then would they even notice if the code works or not?
it's literally their job to ship functional product features...
Everyone's job is to please their manager. Their job is shipping functional product features only if that's what their manager likes. In functional companies, that should be the case. There aren't many functional companies.
> Everyone's job is to please their manager.
Indeed. I've spent my professional career seeking out positions at companies of increasing prestige and technical renown, each with a higher reputation for professionalism and performance than the last. And yet this invariant has held in every position.
As far as I can tell, the only difference between each company has been the quality of the manager I was supposed to please, which I have noticed (perhaps predictably) is not strongly correlated with the company's reputation or success.
In my last company, what my manager liked was an increase in AI adoption metrics, because that’s what his boss likes.
That is the current fad, so that is what a lot of bosses like. There have been different fads in the past, there will be different ones in the future. Some of the fads have a useful core that remains today, some of them are completely gone. All of them were overhyped at the start.
Don't forget that they're also functionally structured. The managers don't own products or features; they manage functions (engineering, sales, design). And in practice, they usually only manage people, with little control over the function. So the managers aren't particularly interested in, or tied to, shipping product features. The PM maybe, but they don't have reports or own much.
> And in practice, they usually only manage people …
I usually differentiate between real managers who exist to make decisions, versus those who manage people. The latter are “overseers” not managers.
The practical part of their job is to show up and get paid.
Who cares about features or functionality, or whether they even know what "functional" means in that case?
That's how it looks more and more...
God I hate the emoji and checkmark usage so much. It feels so try-hard cutesy.
Just give me normal bulleted items, I can read.
I like them. It tells very clearly how much effort went into someone's work.
I like them even more on code comments. It tells _precisely_ how much effort went into the pull request, so I don't spend time reviewing lazy work.
It does not at all indicate the effort that went into doing the thing. Clearly not.
I propose that what you enjoy is a token of the appearance of effort: easily constructed, easily observed, and well suited to low-effort handling of these proxy objects for actual work.
I think you’re missing the sarcasm in their comment.
They’re saying that the emoji usage is telling them that very little effort was put into the PR and that they’ll treat it accordingly.
Haha! Thanks!!!
My apologies!, sincerely.
(If only the message I was responding to had had emojis and checkmarks for me to efficiently process it!!!!)
So you just rubber-stamp the lazy work? What else can you do when this PR is assigned to you specifically for reviewing?
Recently I reviewed some vibe-coded stuff and sent a list of issues and suggestions to the “author,” figuring he’d read it and then go through each one with Claude until fixed.
Instead he didn’t read it at all, and just threw the whole thing at Claude Code as a big prompt. The result was… interesting!
This is happening with coworkers now. It’s honestly insulting.
They put up a PR with all the obvious tells, the markdown table of files that changed, the description that basically parrots back things the human obviously wanted them to stress in the task (“this implements a secure, tested (no regressions) implementation of a Foo…”), and the code is an absolute mess of one-off functions placed in any random file with no thought to the way the codebase is actually organized.
Then I give feedback after spending like an hour going through their 2000 line change, and here comes back an update with a very literal interpretation of my feedback that clearly doesn’t really understand what I was even saying. Complete with code comments that parrot back what I said (“// Use the expected platform abstractions for conversion (not bespoke methods)”).
Reviewing coworkers PR’s feels like I’m just talking to the LLM directly at this point, but with more steps and I have less control over the output.
At the last place I worked, when this happened with someone new to the company or the team, I would find a polite way to say "do your job and fix this shit," and it worked.
Some people have put me on their blacklists after these interactions, sure, but they're the exact people I don't want to work with again. The important thing here is that I've never done someone else's work for free.
I guess they just close the PR.
You tell Claude to review it and if it breaks something you blame Claude. No one can get mad at you for it because they don't want to look like luddites.
I wonder if we humans are already checking out of PR reviews of genuine human effort that we've misjudged as AI. We are in so much trouble! lol
Lazy or efficient? A dev could spend an hour on something or 10 mins; if the outcome is the same, what does it matter?
Because the reviewer ends up doing the real work actually checking it works.
The laziness is offloading work down the line.
That has nothing to do with using AI, if the dev didn't check their work then that is being a bad dev.
That’s what this whole thread is about. Appearances of productivity, laziness, and the offloading of real work downstream by sending of “looks good enough” ai generated work.
Checkmarks as bullets on progress/comparison lists I really like, assuming you mean //. The emoji properly put me off looking deeper into whatever it is that I am looking at unless I was really interested to start with.
seriously! it feels so over the top.