> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."
Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. Reminded me of when I had to be extra wordy to meet the 1000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted up a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).
So now the "productivity-gain bottleneck" is people who still care enough to review manually.
This paragraph hit home with me as well. I work at a large tech company that's a household name and the practice of using AI to pad out design documents has become totally out of control over the last 4 or 5 months. Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible. Why the fuck should I -- along with five other engineers -- bother to read and review your design if you didn't even bother to write it?
I'm taking a distance uni class now, partly to maybe swap away from dev work, and the work I submit for review and comment by other students all comes back with AI-generated feedback. It's making me go insane. If I needed AI feedback I'd go ask an AI, but for any communication now it's a coin toss whether you're getting a human reply.
/rant
I wonder if you could ask for a video instead of text, like a screen recording with a voiceover.
Harder to fake.
I'm starting to see pushback on this. I know a Product Manager who was fired for padding his documentation with AI to the point that there were mistakes and wasted work due to AI hallucinations.
I see it even on my GitHub project: issues and pull request comments get longer, responses get longer, all generated by AI and read by AI. This text is no longer for human consumption; it exists to provide context to AI.
See also this video from Nate B Jones: https://youtu.be/FDkvRl1RlT0?si=WUK2WJTXvKAWKD0r
I've seen some of this as well. It's OK to send me an agentic screed if it's just going to be consumed by my agent, but I want a nicely written summary up top that was made by you... I'm starting to value poor grammar, typos, and other signs of legitimacy
[dead]
What I find particularly irritating is that you can actually prompt the fcking AI to be short.
> Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible.
It takes more effort to be brief, even for humans. Good documentation writers were always brief.
Simply saying "be concise" isn't enough. I often have Claude write first drafts for me (which, for the record, I review completely and rewrite as needed before publishing) and even when told to be concise, there are times when what comes out is unusably long and wordy.
I work under the assumption that the primary audience of everything I write at work is an AI. Managers will take what I send and have it summarized and evaluated by some chatbot or agent. (Of course, I cannot send them the summary myself.)
So like ATS checkers for resumes, I find myself needing an AI checker for my text.
Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there was some agreed-upon set of rules, structures, standards, and procedures to facilitate a more efficient communication...
If that is your manager, do so, sure. But make sure your manager is "such a manager".
If I was your manager, and you sent me your seventeen page AI generated thing coz you think I'm just gonna summarize anyway and I expect something long: You misread me.
I make a point all the time, to everyone who won't listen, not to send me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until you've spent the time to make them short and legible enough for me to understand. If you use AI for that, I don't care. But what I get had better be short, make actual sense when I read it, and hold up when I verify it. If I wanted to just ask AI, I'd do it myself. You have to "value add" on top of the AI if you want to be valuable yourself.
I agree. I send 2-sentence replies to most things my boss's boss sends me. He’s near retirement; dude doesn’t want me to send him a book. He knows the thinking behind the work our team is doing is solid.
The only time I send something longer is if it’s a postmortem for some prod issue, which I write by hand.
I use AI every day, often multiple agents at once, but I know when it’s appropriate and when I need to be the one thinking really hard about something.
[flagged]
I go through this with my vendor budgets and contract negotiations right now. We are encouraged to put all their proposals into AI and have it refute each point. I know for a fact they are putting my negotiations into their own AI and having it counter-propose my points. It's an arms race of my AI fighting against their AI. Where does it end?
Where is uncertain, but how is: badly.
It’s the Red Queen’s Race, where we all run as fast as we can to stay in exactly the same place.
If You Don't Know What You're Doing, Do It Faster
Ends when you tell them "this AI shit is ridiculous so we are choosing a different vendor"
This is the focus of my new startup, which uses a single-layer model to transform bullet points into bullet points. Please invest in IdentityMatrixLLM, LLC, etc.
I’m too lazy to tell the AI what I want to say, then copy and send its output.
I just type what I want to say and hit send. YOLO
> I just type what I want to say and hit send. YOLO
Made me smile. Perhaps the new term for a human hand-written, no-AI reply is “I YOLOed it”.
I'll argue there's potentially a standards-based advantage at the end, when this all shakes out.
It will probably take a couple hundred years but I'm pretty sure I'm right about this :)
I'm also sure about things that will happen after me and my whole audience are dead.
I have a hard time trying to find any reasons for the S̶k̶y̶n̶e̶t̶ owners of the Skynet not to get rid of that walking bipedal inefficiency called human.
API or die /s.
Seriously, though, fuck that shit!..
[dead]
> Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted up a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).
I feel the loss of this signal acutely. It’s an adjustment to react to a 10-30 page “spec” chock-a-block with formatting and ASCII figures as if it were a verbal spitball … because these days it likely is.
It is worse because the signal is buried in the noise.
> Requirements documents that were once a page are now twelve.
man, I see this on Jira: a PM or BA is like "yeah I'll write that AC for you" and out comes a giant bullet list filled with a bunch of emojis and checkmarks
Does anyone know where that style came from? Did it become popular in listicles or on github or something? Or is there one person deep inside OpenAI or Anthropic who built the synthetic data pipeline and one day made the decision on a whim to doom us to an eternity of emoji bullet points?
I think it likely performed well in A/B preference tests with chat users.
I've noticed Claude does far fewer listicles than ChatGPT. I suspect they don't blindly follow supervised learning feedback from chats as much as ChatGPT does. I get an Apple-vs-Google design-approach vibe from those two companies, in that Apple tends not to obsess over interaction data, instead relying on design principles, while Google just tests everything and has very little "taste."
In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.
I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.
I’m given to understand that Anthropic uses something called Constitutional AI, where there is a central document of desirable and undesirable qualities (as well as reinforcement learning) whereas OpenAI relies more heavily on direct human feedback and rating with human trainers evaluating responses and the model conforming to those preferences.
I also much prefer the output of Claude at present.
Yeah, and for much of the HN crowd, we aspire to have better taste than average. So if the supervised learning uses average human trainers, it will most likely be seen as having poor taste by much of HN.
Speak for yourself my taste is average and I aspire for it to remain so.
I aspire to improve the average. Which I can do either by being much better than average, or by improving everyone else just a little.
Eh, Facebook today is farther from what anybody "wants" than macOS 26, and Facebook is about as blindly data-driven as they come.
Turns out you can get away with a lot when you have a quasi-monopoly on an addictive product, and you buy out your realistic competitors...
I think the “taste” approach at Apple died with Steve Jobs.
There was a time when Claude, too, would absolutely fill code with emojis, which is why their system prompt now has
> Claude does not use emojis unless the person in the conversation asks it to
I think it's funny how we are all tweaking LLM output by adding instructional tokens instead of, say, finding a vector that indicates "user asked for emojis", and forbidding emoji tokens in the sampling unless that vector passes a threshold.
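Roughly what I have in mind, as a toy sketch only (the steering direction, the emoji token ids, and the threshold below are all made-up placeholders for illustration, not anything a real model actually ships with):

    import numpy as np

    # Hypothetical ingredients: a direction in activation space that fires when
    # the user actually asked for emojis, the set of emoji token ids, and a threshold.
    EMOJI_TOKEN_IDS = [50123, 50124, 50125]      # placeholder ids
    EMOJI_DIRECTION = np.random.randn(4096)      # stand-in for a learned steering vector
    EMOJI_DIRECTION /= np.linalg.norm(EMOJI_DIRECTION)
    THRESHOLD = 0.5

    def mask_emoji_logits(hidden_state, logits):
        """Forbid emoji tokens at sampling time unless the 'emojis requested'
        direction passes the threshold on the current hidden state."""
        score = float(hidden_state @ EMOJI_DIRECTION)
        if score < THRESHOLD:
            logits = logits.copy()
            logits[EMOJI_TOKEN_IDS] = -np.inf   # these tokens can never be sampled
        return logits

Same logit-masking trick people already use for banned words, just gated on a learned "did they ask for this" signal instead of a sentence in the system prompt.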
I first noticed it when Notion became popular.
All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.
This slightly pre-dated AI tools becoming entirely usable for me.
Was going to say the same
Notion-core
Insert the grug IQ curve meme here. Some folks really like to hyper-optimize on tooling side quests.
It's the style of "blazing fast library made with :heart: in rust :crab:" that was popular in GitHub README.md files. My guess is that because the models are told to use md, they overfit to the style of md documents too.
First saw it in overly peppy Rails libraries and in projects using gitmoji more than 10 years ago.
Imagine how much work that all took... carefully colourizing your CLI.... and now it just gets spat out
Both predate common use of LLMs, unless my memory is even more shaky than usual on this. I'm sure I saw them appear a fair amount on GitHub and related project pages, but I couldn't tell you more specifically how they started & grew.
Somehow they must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because I don't remember them being that common and LLMs seem to love spewing them out. Or perhaps it is a sign of the Habsburg problem: people asked LLMs to produce README files like that because they'd seen the style elsewhere, it having spread more organically at first, and the timing was just right for lots of those early examples to get fed back into training data for subsequent models.
It was an annoying way of writing on places like LinkedIn and marketing copy for 3 or 4 years before LLMs appeared on the scene. I remember realising that I can't read them (my brain jumps between the words and the picture making it hard to focus on the content) before AI appeared.
You're not supposed to read the Jira ticket. You're supposed to paste the link along with instructions for your Claude agent to "do this ticket, no mistakes," then raise an MR for whatever it writes. The text is a wire protocol between agents. If a PM doesn't care enough about the requirements to write, or even read them, then would they even notice if the code works or not? Why would they care about that? What does "works" even mean if no human knows the spec?
How quickly we become reverse centaurs.
> then would they even notice if the code works or not?
it's literally their job to ship functional product features...
Everyone's job is to please their manager. Their job is shipping functional product features only if that's what their manager likes. In functional companies, that should be the case. There aren't many functional companies.
> Everyone's job is to please their manager.
Indeed. I've spent my professional career seeking out positions at companies of increasing prestige and technical renown, each with a higher reputation for professionalism and performance than the last. And yet this invariant has held in every position.
As far as I can tell, the only difference between each company has been the quality of the manager I was supposed to please, which I have noticed (perhaps predictably) is not strongly correlated with the company's reputation or success.
In my last company, what my manager liked was an increase in AI adoption metrics, because that’s what his boss likes.
That is the current fad, so that is what a lot of bosses like. There have been different fads in the past, there will be different ones in the future. Some of the fads have a useful core that remains today, some of them are completely gone. All of them were overhyped at the start.
Don't forget that they're also functionally structured. The managers don't own products or features, they manage functions (engineering, sales, design). And in practice, they usually only manage people, with little control over the function. So the managers aren't particularly interested or tied to shipping product features. The PM maybe, but they don't have reports or own much.
> And in practice, they usually only manage people …
I usually differentiate between real managers who exist to make decisions, versus those who manage people. The latter are “overseers” not managers.
We need to make companies financially liable for data leaks.
The practical part of their job for them is to show up and to get paid.
Who cares about features or whether it's functional - or whether they even know what functional means in that case?
That's how it looks more and more...
God I hate the emoji and checkmark usage so much. It feels so try-hard cutesy.
Just give me normal bulleted items, I can read.
I like them. It tells very clearly how much effort went into someone's work.
I like them even more on code comments. It tells _precisely_ how much effort went into the pull request, so I don't spend time reviewing lazy work.
It does not at all indicate the effort that went into doing the thing. Clearly not.
I propose that what you enjoy is having a token of the appearance of effort, easily constructed and easily observed and easily suitable for low-effort handling of these proxy objects for actual work.
I think you’re missing the sarcasm in their comment.
They’re saying that the emoji usage is telling them that very little effort was put into the PR and that they’ll treat it accordingly.
Haha! Thanks!!!
My apologies!, sincerely.
(If only the message I was responding to had had emojis and checkmarks for me to efficiently process it!!!!)
So you just rubber-stamp the lazy work? What else can you do when this PR is assigned to you specifically for reviewing?
Recently I reviewed some vibe-coded stuff and sent a list of issues and suggestions to the “author,” figuring he’d read it and then go through each one with Claude until fixed.
Instead he didn’t read it at all, and just threw the whole thing at Claude Code as a big prompt. The result was… interesting!
This is happening with coworkers now. It’s honestly insulting.
They put up a PR with all the obvious tells, the markdown table of files that changed, the description that basically parrots back things the human obviously wanted them to stress in the task (“this implements a secure, tested (no regressions) implementation of a Foo…”), and the code is an absolute mess of one-off functions placed in any random file with no thought to the way the codebase is actually organized.
Then I give feedback after spending like an hour going through their 2000 line change, and then here comes back an update with a very literal interpretation of my feedback that clearly doesn’t really understand what I was even saying. Complete with code comments that parrot back what I said (“// Use the expected platform abstractions for conversion (not bespoke methods”).
Reviewing coworkers PR’s feels like I’m just talking to the LLM directly at this point, but with more steps and I have less control over the output.
At the last place I worked, if this happened with someone new to the company or the team, I would find a polite way to say "do your job and fix this shit" and it worked.
Some people have put me on their blacklists after these interactions, sure, but they're the exact people I don't want to work with again. The important thing here is that I've never done someone else's work for free.
I guess they just close the PR.
You tell Claude to review it and if it breaks something you blame Claude. No one can get mad at you for it because they don't want to look like luddites.
I wonder if we humans are already checking out of PR reviews for genuine human effort that we've misjudged as AI. We are in so much trouble! lol
Lazy or efficient? A dev could spend an hour on something or 10 mins; if the outcome is the same, what does it matter?
Because the reviewer ends up doing the real work actually checking it works.
The laziness is offloading work down the line.
That has nothing to do with using AI, if the dev didn't check their work then that is being a bad dev.
That’s what this whole thread is about: appearances of productivity, laziness, and the offloading of real work downstream by sending off “looks good enough” AI-generated work.
Checkmarks as bullets on progress/comparison lists I really like, assuming you mean //. The emoji properly put me off looking deeper into whatever it is that I am looking at unless I was really interested to start with.
Both predate common use of LLMs, unless my memory is even more shaky than usual on this, but must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because LLMs seem to love spewing them out.
seriously! it feels so over the top.
I wish cultural norms around documentation would shift to "pull" rather than "push" — generating "views" of organized knowledge on the fly instead of making endless rearrangements of the same information. It's become too cheap in terms of proof of (mental) work to spray endless pages of notes, reports, memos, decks, etc. but the "documentation is good" paradigm hasn't caught up yet.
Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.
I work for an "AI-native" company now and have found this to be the case.
EVERYONE (engineers, pms, managers, sales) uses Claude Code to read and write Google Docs (google workspace mcp). Ideas, designs, reports. It's too much for one person to read and, with a distributed async team, there's an endless demand for more.
So for every project there's always one super Google Doc with 50 tabs and everyone just points their claude code at it to answer questions. It's not to be read by a human, it's just context for the agent.
Everyone cranks out endless pages of slop, that everyone else then has to ingest. Anthropic collects a fee from all of you and is the only winner here.
I'm looking forward to the impending crash when the AI providers actually start charging what it costs to run these models. It's going to be a bloodbath, and it's going to be cathartic as fuck.
This is literally losing the whole process to a stochastic parrot.
They are so far removed from the process that they can claim to be any % more productive and no one is able to contradict them. Call it ‘productivity theatre’.
The economic reality check is going to be devastating. It won’t be a crash of AI as a tech; it will be a crash of every ‘AI native’ company that no longer even knows what its product is.
The US is reinventing the worst parts of the Soviet system while putting a glossy and chipper veneer on it.
To be fair, a lot of those people were stochastically parroting on their own for years already. They are just able to stochastically parrot more now.
These companies have enough market power that they can afford to be ineffective. So they were. And now they are ineffective in a novel way.
the product of LLMs being trained on SEO fluff articles that pad out everything so they get as high in the results as possible
Yeah that was my guess as well.
I just don’t read this crap. The problem solves itself since anyone sending me that isn’t going to bother to follow up about it anyway.
Unfortunately, there is pressure to treat this stuff in good faith. Maybe the PR author really did write all this. Maybe they really did spend 6 hours writing this document.
So, I approach it in good faith, but I do get upset when people say "I'll ask claude". You need to be the intermediary; I can also prompt claude and read back the result. If you are going to hire an employee to do work on your behalf, you are responsible for their performance at the end of the day. And that's what an AI assistant is. The buck stops with you. But I don't think people understand that, or that they aren't adding value. At some point, you have to use your brain to decide if the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.
> If you are going to hire an employee to do work on your behalf, you are responsible for their performance at the end of the day.
So, what you are saying is that I should fire the bottom N% of underperforming agent instances?
You know, like employers do as opposed to taking any responsibility?
> I do get upset when people say "I'll ask claude"
The dude is just acting like a manager with a technical employee (agent) who does the hands-on work. If you are upset about this you should be hopping mad about the whole manager-director-VP-SVP hierarchy above this dude.
As long as each part of the hierarchy understands what they need to know at their level and what they produce, I have no problem with "the whole hierarchy".
You're saying this as if it's some rebuttal ad absurdum, when it's absolutely the case: when the higher layers don't understand what they do, we have a problem with that too, and that's been true since forever. Remember Dilbert and Office Space, and making fun of the ignorant middle managers and execs?
In this case, what we're complaining about is coders not understanding the code they ship (because some AI wrote it and they don't bother to review it or guide the AI fully).
They likely haven’t read it either, so they’ll never know you didn’t.
I just stopped reading my work emails and the announcement channels. Everything that actually matters either ends up DMed to me or shows up in my calendar.
> The "elongation" of workplace artifacts resonated with me on such deep level
Well put. I generally skip AI-generated PR descriptions for this reason as they tend to miss the forest for the trees. Sometimes a large change can be explained by a short yet information-rich description ("migrate to use X instead of Y", "Implement F using pattern P") that only a human could and should write.
Hah, lately I've had one particular coworker demanding in code reviews that I provide more 'detailed' MR descriptions. (All of his are clearly AI generated.)
We need to demand better from our coworkers and from ourselves.
Young "AI native" coworker opens PRs with 3 screen slop description, I flagged that "I know he ain't reading all that, and therefore I ain't reading all that", so he should just give a max half-screen overview. I expect that the PR description makes sense, is correct, and have been reviewed by the person opening the PR. You can still use agents for that, but at least there is a chance with shorter descriptions that it's not completely bs.
This had me crack up!
I used to have a colleague (senior engineer) who never cared to write a single line in Pull Request descriptions, as if other people had to magically know what he meant to achieve with such changes.
Now? His PRs have a full page description with "bulleted summaries of bulleted summaries"!
My colleague had a problem with commit messages, so now they're all written by AI. I don't know what depth of hell he managed to get the prompt from, but they're all now in the format "Updated /path/to/file: fixed issue in thingamabob", which means they're all at least 200 characters long and half of it is the file path, an absolutely pointless thing to put in a commit message. The best part is that whenever you look at GitLab or GitHub, instead of seeing the commit message next to the file you just see the file name again, then the message is cut off.
> Reminded me of when I had to be extra wordy to meet the 1000-word minimum for my high school essays.
Minimum word lengths are the greatest disservice high school and college have ever done to future communication skills. It takes years for people to unlearn this in the workplace.
Max word counts only please. Especially now with AI making it so easy to produce fluff with no signal.
I write the words that I hear in my head, as though I am speaking. With the exception of timed, in-class essays, I always turned in papers far in excess of any minimum during high school.
In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.
The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.
Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.
I write a lot and have on several occasions tried dictation as an initial draft authoring step. It was trash every time.
Good for thinking through a concept but unsalvageable in the edit phase. Easier to throw away and rewrite now that you know what to say.
Nowadays I like conversation as an ideating step. Talk to a bunch of people, try to explain yourself until they get it, see what questions they ask. Sometimes in HN threads like this :)
Then write it down.
You get super high signal writing where every sentence is load bearing. I’ve had people take my documents and share them around the company as “this is how it’s done”
It can take weeks of work to produce a 500 word product vision document. And then several months to implement, even with AI.
Hmm... when I really care about the quality of something, I basically write what I think/speak, then try to edit it down by half. I don't find it unsalvageable, but the editing does require an order of magnitude more time than the initial draft of thoughts vomited into the keyboard.
> I basically write what I think/speak
Me too. Try speech to text one day, you may find that you'll use 2x the words than you do with a typed vomit draft. I was surprised
> It can take weeks of work to produce a 500 word product vision document.
Don't you get dinged as a slow performer? Management expects x5 speed on everything now that AI is available.
> Don't you get dinged as a slow performer?
No because the document is not the work. Management wants someone to figure out the solution to their problems. The document is just a step in solutioning.
Without the doc, others would have to re-do all that work if you get hit by a bus. Or you’d be stuck in endless meetings conveying the vision instead of figuring out the next problem.
Document length is inversely proportional to the quality of your thinking/insight. When you create fluff, everyone can see you didn’t do the work.
It's going to depend on the type of team and environment you work in. Probably on how senior you are as well.
If your boss asks you for specific documents and expects a quick turnaround, and you regularly take 3 weeks or whatever to produce them, then yeah probably.
If your boss generally leaves you alone to find and solve problems on your own, then probably not.
I design boardgames and it's easy to write a lot of rules. It's more difficult to write concise rules. Most of my time is spent editing rules to their absolute minimum.
"I have made this letter longer than usual, only because I have not had time to make it shorter." - Blaise Pascal
I Ctrl-F'd to search for this quote and am happy to see it mentioned.
Brevity is an art, and it is hard.
Reminds me of how I document procedures. I spend a significant amount of time thinking about how to write things so that I provide enough information for a Jr to understand each step (and hopefully learn something) without over explaining. Organization is also important.
I had the opposite issue. Writing was agony, and every section would be written, reviewed, and rewritten to get my point across; only to be tortured by a minimum word count that was still 20% away after I had said all I could think of saying.
I've gotten better at phrasing myself adequately in one go. Rote mechanical memorization has also made writing itself cheaper. (read my username)
I can now yap quite adequately over text, yet I regularly find AIs at minimum 2x as verbose as my preferred phrasing after manual word mashing.
feels like this comment could be shorter
But how is your writing fast enough that you don’t pause and lose the voice you hear in your head?
When writing on paper, either I will pause thinking enough, or will sometimes lose where a thought was going. I am much faster at typing than writing, so I end up with more, then edit/delete afterwards (if I feel like writing well). I am much worse at writing long-form thoughts than I was back in college, now that 99% of what I do is type.
An odd tradeoff of my verbal-based writing seems to be that I am a fairly slow reader. I read aloud in my head, albeit a bit faster than I could speak, but I still hear the words as an internal monologue.
When discussing this a few times with friends, I've learned how different everyone's experiences are when bridging thoughts=>speaking, thoughts=>writing, thoughts=>typing, and text=>thoughts (or even text=>understanding).
I'd like to see touch-typing at >60 wpm a standard attribute of adulthood again.
Same as the heavy focus on rewording in your own words: basically teaching you to plagiarise by cheating. I find it distasteful.
Even though near-copying is everywhere (patents, graphic design, business), in those other areas it is often applauded and less obviously deceptive.
We talk about countries copying e.g. Japan was notorious for it. I think the underlying motivation there is ownership - greedy people feeling they own everything (arts and technology). "We own that and you stole it from us" along with the entitlement of never recognizing when copying others.
Minimum word lengths were really a terrible idea and I wonder what arguments were used to get all the teachers to buy into that system.
Considering that many high school kids won’t want to put in any effort at all, how else do you convey the amount of detail and effort you expect for a given writing assignment? It’s an imperfect proxy but I can’t think of a better one.
Yeah. 1000 words is not a long essay that requires padding, and any competent teacher marks an essay with 1000 words achieved mainly by repetition and bad sentence construction much lower than one discussing the subject matter in a suitable level of detail, and probably lower than a better-written essay which gets marks deducted for only having 985 words.
Since "write an essay" can be anything from three paragraphs to a 50 page paper and the teacher probably doesn't think either is the appropriate response to the task, some sort of numerical guide is a good starting point, even if a fairly wide range is a better guide than just a minimum...
(plus actually there are real world work tasks involving composing text that fits within a certain word range, and since being concise and focused isn't AI text generation's strong suit, I'm not sure those work tasks will disappear...)
Yeah, this is seemingly the only effective proxy for "write with some amount of depth." If the word count gets BS'd then it will be obvious when reading the output.
> Yeah, this is seemingly the only effective proxy for "write with some amount of depth." If the word count gets BS'd then it will be obvious when reading the output.
My high school professors had a really good solution to this:
Minimum word lengths but you have to write the essay in class by hand. You have 2 periods.
Some of us still write a lot but having limited time and space (4 pages) really put a hard limit without saying so. In higher classes they started saying “I’m gonna stop reading after 3 pages so make sure you get to the point”
I spent 2 years (coincidentally the same teacher for two years) in high school where once a week the only thing done that period was to write an essay (by hand) on some topic/prompt given immediately before beginning.
The grading was thorough and harsh. In college I was never graded harder on writing. My writing and comprehension abilities improved dramatically over that period of time.
With rubrics, or more simply the teacher could hand out an example essay at the start of the year that conveys the style and level of detail they are looking for when they assign an essay. Then they can refer to that when they make an assignment. Implicitly that gives a word count or number of pages, but allows for marking down for "too much repetition" or "needs more detail"
The ambiguous "needs more detail" thing would lead to a lot of students making it too brief in good faith, too long in good faith and both be frustrated and angry. You can write really good mini essay on a topic. And you can write really good super long essay on the same topic.
Demanding that students mind read is not a good strategy. Specifying expected length, checking for it is a good strategy. Teacher should also check for other things - whether paragraphs logically follow, grammar, sentence structure, you name it. But dont make them guess.
A good rubric would remove a lot of this ambiguity.
When the teacher goes to grade it? If you turn in one sentence, with or without a minimum, you're getting an F...
Many schools these days don't allow an "F" grade if the student makes any effort at all.
Source please.
Wife teaches 4th grade. They cannot give an "F" if the student turns something in. Only for completely missing work.
A second of critical thinking on this topic will make it abundantly obvious why this line of questioning is anti-education and anti-intellectual. You write in school to practice. Not just composition, but grammar, spelling, individual sentences. Practice requires volume.
Subject yourself to a classroom of kids that you must teach to write, and throw out minimums. Will some students do fine? Sure, of course, and what of the others that turn in one sentence? That never grow? That have to go into the math class and hear their idiot parents say "why are you learning that, we have calculators"?
Why not have the students write more essays instead?
> Subject yourself to a classroom of kids that you must teach to write, and throw out minimums.
Strawman argument; the correct thing to do is not to throw out minimum word count and leave it at that, rather to emphasize the role of brevity and concision while still being sufficiently thorough.
It's widely understood that LOC is a poor measure for many coding purposes, so it shouldn't be controversial that word count is an equally flawed measure for prose.
This ENTIRE argument is about whether or not minimum word count is a good idea, perhaps improve your reading comprehension before pretending to know logical fallacies
Almost your entire post history is angry and confrontational, just like here, and I was also talking about whether or not word counts are a good idea, obviously; right back at you about reading comprehension.
It can help to force depth into a topic that requires it, and more expression and emotion into writing where that is of value. It also forces the writer to think more deeply about the topic and organize their thoughts.
I hated it in high school, but I think I better understand it now. I think part of the problem is they never explained the "why" or the "how", just the requirement. I wasn't able to write anything more than a page or two without extreme difficulty until college, when the requirements went up to 30 pages.
In theory, someone who can write a 30 page paper could effectively distill it down to a short memo when needed, summarizing their primary point(s). Someone who can only write short memos would have a hard time writing something longer one day if/when required. I was trying to do a knowledge transfer one day, opened up Word, and just typed 20 pages on everything I knew about a tool we used heavily, but wasn't documented anywhere. I don't think I could have done that before I was forced to write those longer papers in college.
Where I encounter it at the higher education level is that academic-level research almost universally has maximum word counts or page counts rather than minimums: if you think you can get your point across in fewer words, you should. No reviewer is going to object to the paper being too short, so long as you succeeded in making your case.
John Nash's Ph.D. Thesis is notorious for being short: it's still 27 pages (typed, with hand-written equations and a whopping total of two citations) but that's an order of magnitude below average. On the other hand, most of us don't invent game theory.
Students used to minimum-word-count essays sometimes have to do some self-retraining to realize that the expectation is that you have more that you want to say than you have room to say it, and the game is now to figure out how to say more in fewer words.
Off topic, and not to diminish Nash's work, but quite famously (I thought) Von Neumann and Morgenstern did a bit of the 'inventing' too, and a bit earlier
Journalists and writers are often given a deadline and a target length. "Give me 500 words of copy by the end of tomorrow." The editor and publisher of a magazine need to get all words and graphics ready by a strict and regular deadline.
It’s easier to judge an objective output like number of words than subjective like quality.
Same as lines of code, etc.
I guess, but have you actually encountered a teacher grading an assignment solely based on word count?
I certainly wish more teachers encouraged parsimony and penalized fluff and bullshittery, but I'd be surprised to find them doing it outside of some narrow cases where the point is just to make you write something at all.
They generally want to encourage their students to engage with the topic at a certain level and practice the thinking needed to research, structure, and implement an argument of a certain length. They want you to put at least 5 pounds of idea in the 5-10 pound idea bag.
If you're convinced you've hacked word economy and satisfied the assignment except for this goshdarnpeskyminimumwordcount, you're probably misunderstanding the lesson the instructor is willing to read through a bunch of bad writing to impart and cheating yourself.
That's actually the trick. If you assign word count, MLA style, grammar, you just have to look for the errors. You don't have to engage with the ideas at all, or provide conversational feedback - just cryptic notes in the margins, like "???" or "awk"
The idea was to get people to include more substance: instead of just saying "Washington crossed the Delaware", students were pushed to include reasons why, impacts, further narrative, etc. IDK if it was effective or not. Probably at least a little; there's only so many ways to rewrite the same thing over and over. I know in my case, though, I submitted essays below the word count a few times, but since I actually included the content they were looking for I didn't have any problems.
We had maximum word counts
It was only after I had to manage others that I realized the logic behind a lot of these simplistic metrics and rules: they are in place to hold the worst performers accountable. A simple example is when I introduced flexible work hours. It was fine with most people, but there are always a few members who abuse the system. They stretch it to the very limit of what can be interpreted as "flexible". As a manager it posed a dilemma for me. I didn't want to take away this privilege just because of a few abusers, but it was both unfair and set bad precedents if I allowed them to get away with this. And let's say they couldn't be easily fired. Most of my peers simply ended up going back to a system where people punched in and out.
Couldn't you just say to those few: 'you can't, because I do not trust you'? You are the manager after all; your job is not to make them feel good but to make them work.
I don't think "some people on the team have privileges and others don't based on the manager's discretion" would be healthy in the long run either. Can you imagine interviewing for a team, asking about the PTO policy, and finding out that it varied like that? It would look pretty indistinguishable from "the people who that manager likes have special treatment" to me. You could hide it from prospective employees, but not knowing about it beforehand and then finding out from one of my teammates that the manager revoked their privileges (who presumably would have a chip on their shoulder about it and present the info with their own biases) would make me concerned that there was a bait-and-switch and now I'm stuck on a toxic team.
Yeah, I understand, but on the other hand you can't reward everyone with the same thing for different outcomes. This is exactly what is happening with pay: some people earn more, some less. People complain about that too. Do you think that is toxic as well?
With people being people, and being a manager meaning there is no outcome where everyone is happy, this is why I am not going to be a manager. I just wanted an honest opinion from the OP about how to solve it, or whether it is even solvable.
I remember my first semester university writing class, when on the first day the teacher told us we had learned to pad our writing in high school, and now we were going to learn how to be short and concise because every assignment would be limited to one page.
I had a "Violence in the Political System" professor who only assigned executive summary research assignments. No more than one page.
His explanation: I don't want to read more than that, and you should be able to fit all the most important details in one page.
Great lesson.
Well, in many layers of overhead in companies people operate at the level of high schoolers, so it is no surprise unfortunately, that the output comes across like that too.
It's actually insane that this sort of thing is tolerated. It's a culture thing and frankly just rude. My org is pretty AI-pilled and this type of behavior will just not fly. I need to be assured I'm talking to a human who is using their brain.
If I paste something from an AI into chat, I always identify it as such by saying something like "my claude instance says this:". I also don't blindly copy paste from it, I always read it first and usually edit it for brevity or tone. Feel like this should be the absolute minimum for sending AI content to a person.
Even that is pretty useless because we have no idea what context "your Claude instance" has. All you're doing is dressing up some bullshit to look authoritative.
When I started my PhD I was already really good at typesetting with LaTeX. I started to bring in fully typeset works in progress for my supervisor to read through. These proofs often had fatal flaws. He asked me to stop typesetting until after the work had been verified because it looked too convincingly correct due to being typeset.
That was about 15 years ago but I've never forgotten it. Drafts should look like drafts. Scrappy work and proofs of concept should look as such. Stop fucking with people by making your bullshit, scrappy ideas look legit. Progress is a cooperative effort. It's not about trying to make people say yes.
Can confirm. I saw some fresh-out-of-college colleagues do this in text docs. All nice markup, but the text content was very drafty. I always sent them back, telling them to keep the format concept-y while the text is still being tuned.
\usepackage{comicsans}
[dead]
I see it as rude as well. The literal interpretation is: "your time is worth absolutely nothing to me."
There’s people who use AI to solve problems, and then there’s people who have completely offloaded all of their thinking to LLMs. I have a manager who when asked a question won’t think even for a moment about it and will just paste paragraphs of AI generated text back.
> Reminded me of when I had to be extra wordy to meet the 1000-word minimum for my high school essays.
A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.
It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.
Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?
Another commenter above this proposed a pretty compelling theory for the source of this style: SEO-inflated prose online. If the models were trained on the internet, "higher quality" content needed to be indicated to them during RL somehow. Search engine ranking is an easy-to-obtain metric that's kind of like "quality" if you squint, turn around, and lobotomize yourself. So the AIs have a high likelihood of producing the kinds of content that is rewarded by Google SEO.
That's circular though. Why does that content get ranked highly? Because it gets a lot of backlinks, long clicks, etc. So people seem to like it.
> Why does that content get ranked highly?
Search engines only show a snippet of the content and that always looks convincing. It's the whole content that is off and, unfortunately, a few seconds/minutes can pass before you realize it (If you ever do).
Well, and Google's proxy read of "quality" might have flawed assumptions. A concise page where you get what you need and leave quickly might read as "high bounce rate".
Bingo, but I also think it is just the nature of the technology. It is going to be wordy but not usefully so.
Another hint is when the structure and formality of the response doesn’t match the medium. Like when someone sends you a whole article back in DMs along with headings for the sections.
Even though real humans write like that when writing documents, they never did that in informal messaging.
Since we're all so trusting of AI, maybe we can use AI to score how "excessively wordy" communications are, and pressure people to stop.
In my experience I'm pasting a lot more into AI to get the high level summary though.
And they are generating the longer version with AI, that you are then using AI to summarize.
This is not adding value for anyone except people whose function is to look busy, and people trying to avoid their busy work.
Yes, I don't find AI-generated documents useful; they just add a ton of fluff. But my point was that at least it's removable fluff.
Put that way it's basically competitive evolutionary pressure to exhaust the context window of the other LLM.
That’s the funny thing: the only way to battle it is with more AI.
In the future everyone will have a bot and our bots will just handle all interactions
This is happening at my place as well. I am a senior leader, but I find it hard to push back on this. If something looks plausible and everyone has reacted with a thumbs up (but probably only skimmed the document), who is going to be the first to say “what is this shit?”
The length itself is not an indicator per se, but you can sense when it is not honest. If others do not have a sense for it, you just seem like you're complaining about something new.
Whenever I see a document with horizontal rules between headers and the blues and purples that Claude Cowork adds to .docx files, I sigh.
Whenever I see AI-generated content put forward for my attention, I extract myself from the situation with the minimum possible time expenditure from my side.
It's some sort of a leverage: "I spend 5 minutes prompting, so that you could spend 30 minutes reviewing". Not gonna happen LLM buddies.
If you were too lazy to write it, I'm too lazy to read it.
It's like an amplification attack.
>>The "elongation" of workplace artifacts resonated with me on such deep level.
The bulk of pretty much everything is fluff. Not just workplace artifacts.
In many ways this is the root of all complexity.
“Anything more than the truth would be too much.”
- Robert Frost