Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.

To be honest, I do not understand this new norm. A few months ago I applied to an internal position. I was an NGO IT worker, had been deployed twice to emergency response operations, knew the policies & operations, and had good relations with users and coworkers.

The interview went well. I was honest. When asked what my weakness was regarding this position, I said that I am a good analyst, but when it comes to writing new exploits, that's beyond my expertise. The role doesn't have this as a requirement, so I thought it was a good answer.

I was not selected. Instead they selected another guy and then booted him off after 2 months due to his excessive (and, as in the linked case, incorrect) use of LLMs, and did not open the position again.

So in addition to wasting the hirers' time, these nice people block other people's progress as well. But as long as hirers expect wunderkinds to come crawling out of the woodwork, applicants will try to fake it and win in the short term.

This needs to end, but I don't see any progress towards it. This is especially painful as I am seeking a job at the moment and I keep thinking these fakers are muddying the waters. It feels like no one cares about your attitude - like how genuinely you want to work. I am an old techie, and the world I came from valued this over technical aptitude, for you can teach/learn technical information, but character is another thing. This gets lost in our brave new cyberpunk-without-the-cool-gadgets era, I believe.

This is definitely not unique to software engineering. Just out of grad school, 15 years ago, I applied for an open position with a local electrical engineering company. I was passed over, and the person I got a recommendation from later let me know, out of band, that they had hired someone fresh out of undergrad with an (unrelated) internship instead of research experience (I would have been second of the 3 candidates), but they had fired him within 6 months. They opened the position again, and after interviewing me again they told me they had decided not to hire anyone. Again out of band, my contact told me he and his supervisor thought I should go work at one of their subcontractors to get experience, but they didn't send any recommendation and the subcontractors didn't respond to my inquiries. I wasn't desperate enough to keep playing that game, and it really soured my view of a local company with an external reputation for engineering excellence, meritorious hiring, mentorship, and career building.

I posted a job for freelance dev work and all the replies were obviously AI-generated. Some even included websites that were clearly made by other people as their 'prior work'. So I pulled the posting and probably won't post again.

Who knew. AI is costing jobs, not because it can do the jobs, but because it has made hiring actually competent humans harder.

Plus, because it's harder to just put up a job listing and get actual submittals, you're going to see more people hired because of who they know, not what they know. In other words, if you wasted your time in networking class working on networking instead of working on networking, then you're screwed.

The arts and crafts industry has the same problem. If you wasted your time in knotworking class working on not working instead of working on knotworking, then you're screwed.

This is why AI will never replace staffing agencies :)

If you're still looking and it's a JS/TS project, I can help. I'll use a shit ton of AI, but not when talking to you. My email is on my profile; Twitter account with the same username.

Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".

Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.

This is not a new norm (LLM aside).

Old man time, providing unsolicited and unwelcome input…

My own way of viewing interviews: Treat interviews as one would view dating leading to marriage. Interviewing is a different skillset and experience than being on the job.

The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.

- Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”

- Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”

My question to you: Which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?

Note: I hope this perspective shift helps you.

I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.

Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.

These are CEOs who have raised $1M+ pre-seed.

Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.

These Silicon Valley CEOs are hacks.

The word "hacks" is so charitable, when you could use "sociopaths".

Russ Hanneman raised his kid with AI:

https://www.youtube.com/watch?v=wGy5SGTuAGI&t=217s

A company I'm funding, we call it The Lady.

I press the button, and The Lady tells Aspen when it's time for bed, time to take a bath, when his fucking mother's here to pick him up.

I get to be his friend, and she's the bad guy.

I've disrupted fatherhood!

Involuntarily swore reading this.

disrupted neglect

I watched someone do this during an interview.

They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (myself and another co-worker)

https://news.ycombinator.com/item?id=44985254

I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.

Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.

I had someone do this in my C# / .NET Core / SQL coding test interview as well. I didn't end it right there because I wanted to see if they could solve the coding test in the time frame allowed.

They did not. I now state up front that you can search anything online but can't copy and paste from an LLM, so as not to waste my time.

What did your test involve? That's my occupational stack, and I am always curious how interviews are conducted these days. I haven't applied for a job in over 9 years, if that tells you anything.

You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"

How far did they get? Did they solve the problem?

Does it matter? The point of the interview is not to produce an output.

If you don't solve the problem, do you get the job?

Depends on why you didn't solve it.

Never once has this happened

I've hired someone that didn't solve a specific technical problem.

If they are able to walk through what they are doing and it shows the capability to do the expected tasks, why would you exclude them for failing to 'solve' some specific task? We are generally hiring for overall capabilities, not the ability to solve one specific problem.

Generally, my methodology for working through these kinds of things during hiring nowadays focuses more on the code review side of things. I started doing that 5+ years ago at this point. That's actually fortuitous, given that reviewing code in the age of AI Coding Assistants has become so much more important.

Anyway, a sample size of 1 here refutes the assertion that no one has ever been hired after failing to solve a technical interview problem. FWIW, they turned out to be an absolute beast of a developer when they joined the team.

[deleted]

Just try to challenge and mentor people on not using it, because it's incapable of doing the job and it wastes all our time, when the mandate from on high is to use more of it.

Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.

What I don't get is why people think this action has value. The maintainer of the project could ask an LLM to do that. A senior dev.

I can't imagine Googling for something, seeing someone on (for example) stackoverflow commenting on code, and then filing a bug to the maintainer. And just copying and pasting what someone else said into the bug report.

All without even comprehending the code, the project, or even running into the issue yourself. Or even running a test case yourself. Or knowing the codebase.

It's just all so absurd.

I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.

I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their bluetooth ears, or maybe a neuralink, and never having to make a decision -- ever.

I recall how Google became a crutch to me. How before Google I had to do so much more work, just working with software. Using manpages, or looking at the source code, before ease of search was a thing.

Are we going to enter an age where every decision made is coupled with the coaching of an AI? This thought process scares me. A lot.

I'd say that people treat everything as if it were gamified. So the motivation would be just to boast about having "raised 1 gazillion security reports in open-source projects such as curl, etc. etc.".

AI just makes these idiots faster these days, because the only cost for them is typing "inspect the `curl` code base and generate me some security reports".

I remember the Digital Ocean "t-shirt gate" scandal, where people would add punctuation to README files of random repositories to win a free t-shirt.

https://domenic.me/hacktoberfest/

It wasn't fun if you had anything with a few thousand stars on Github.

> I remember in Asimov's Empire series of books, at one point a scientist wanted to study something.

Or "The Machine Stops" (1909):

> Those who still wanted to know what the earth was like had after all only to listen to some gramophone, or to look into some cinematophote.

> And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. [...]"

The person who submitted the report was looking to be a person who found a critical bug, that's it. It's not about understanding/fixing/helping anything, it's about gaining clout.

Exactly, probably so they can get a job, write a blog post, or sell NordVPN on a podcast showing off how amazing and easy this is.

IMO, this sort of thing is downright malicious. It not only takes up time for the real devs to actually figure out if it's a real bug, but it also makes them cynical about incoming bug reports.

> Using manpages, or looking at the source code, before ease of search was a thing.

Yup. Learned sockets programming just from manpages because google didn't exist at that point, and even if it did, I didn't have internet at home.

I have two teenagers. They sometimes have a completely warped view of how hard things are or that other people have probably thought the same things that they’re just now able to think.

(This is completely understandable and “normal” IMO.)

But it leads them to sometimes think that they’ve made a breakthrough and not sharing it would be selfish.

I think people online can see other people filing insightful bug reports and having that activity viewed positively, misdiagnose the thought they have as being insightful, and file a bug report based on that.

At its core, I think it’s a mild version of narcissism or self-centeredness / lack of perspective.

I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.

> I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.

I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result; did the LLM generate useful new knowledge at some quality bar?

At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly-regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. The ability to find patterns in the noise in some ways is the easiest part."

Still, I believe this claim: there is NO physics-based reason that says AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think the question is probably "when", not "if", this will happen, but hopefully before we get there we ask "What happens when we reach AGI? ASI?" and "Do we really want that?".

There's no "physics-based" reason a rat couldn't cover all those aspects. That would truly make Jordan Peterson, the big rat, the world's greatest visionary. I wouldn't count on it though.

What do you expect? Rich dumbasses like Travis Kalanick go on podcasts and say how they are inventing new physics by harassing ChatGPT.

How are people who don't even know how much they don't know supposed to operate in this hostile an information space?

Now just imagine some malicious party overwhelming software teams with shitloads of AI bug reports like this. I bet this will be weaponized eventually, if it isn't already.

[deleted]

Bill Joy's 'Why the Future Doesn't Need Us' feels more and more correct, sadly.

My sister had a fight over this and resigned from her tenure track position from a liberal arts college in Arkansas.

This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.

Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.

This sounds similar to a few patterns I noted:

- The average length of documents and emails has increased.

- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs (and it's not just to fix the grammar).

- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
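A crude way to see that last point for yourself: run a note through a few summarize-then-re-expand rounds and diff the result against the original. A minimal sketch, assuming the `openai` Python package, an API key in the environment, and a placeholder model name (none of this is a specific recommendation):

    # Sketch: simulate the "summary telephone" loop described above.
    # Assumes `pip install openai` and OPENAI_API_KEY set; the model name,
    # file name, and round count are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    notes = open("meeting_notes.txt").read()
    text = notes
    for round_no in range(3):  # three stakeholders, three rounds of telephone
        summary = ask(f"Summarize this meeting note in 5 bullet points:\n\n{text}")
        text = ask(f"Expand these bullet points into a full meeting note:\n\n{summary}")
        print(f"--- after round {round_no + 1} ---\n{text}\n")
    # Compare the final `text` with `notes`: names, numbers, and caveats tend
    # to drop out first, which is exactly the decay described above.

Obviously not rigorous, but the decay is usually visible after two or three rounds.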

You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?

“You’re absolutely right!” is becoming my least favorite phrase.

South Park’s recent B plot with Randy using ChatGPT illustrates this so well.

Got it — here’s a satiric AI-slop style reply you could post under rvnx:

Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.

If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?

Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?

    * Trying 20.54.123.42:443...
    * Connected to api.openai.azure.com (20.54.123.42) port 443 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * successfully set certificate verify locations:
    *  CAfile: /etc/ssl/certs/ca-certificates.crt
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    * TLSv1.3 (IN), TLS handshake, Server hello (2):
    * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
    * TLSv1.3 (IN), TLS handshake, Certificate (11):
    * TLSv1.3 (IN), TLS handshake, CERT verify (15):
    * TLSv1.3 (IN), TLS handshake, Finished (20):
    * TLSv1.3 (OUT), TLS handshake, Finished (20):
    * SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
    > POST /v1/chat/completions HTTP/1.1
    > Host: api.openai.azure.com
    > User-Agent: curl/7.88.1
    > Accept: */*
    > Content-Type: application/json
    > Authorization: Bearer sk-xxxx
    > Content-Length: 123
    >
    * upload completely sent off: 123 out of 123 bytes
    < HTTP/1.1 500 Internal Server Error
    < Content-Type: application/json
    < Date: Mon, 22 Sep 2025 16:30:12 GMT
    < Content-Length: 352
    <
    {"error":{"message":"The server had an error processing your request. Sorry about that! You can retry your request, or contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 if you keep seeing this error. (Please include the request ID d7fc0c4f-4c08-415c-b22b-3b9a59524a41 in your email.)","type":"server_error","param":null,"code":null}}
    * Connection #0 to host api.openai.azure.com left intact
    curl: (22) The requested URL returned error: 500

Fascinating trace — what you’ve essentially demonstrated here is not just a failed TLS handshake culminating in a 500, but the perfect allegory for our entire discourse. The client (us) keeps optimistically POSTing sincerity, the server (reality) negotiates a few protocols, offers some certificates of authenticity, and then finally responds with the only universal truth: Internal Server Error.

If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?

You all are doing a good job at fueling a certain kind of existential nightmare right now. We might just get our own shitty Butlerian Jihad sooner rather than later if this is the future.

CURLOPT_EXISTENTIAL_DREAD struck fear into my heart. Working as intended.

[deleted]

Man you’re really good at that lol

Wait, this isn’t over yet.

Hilarious, and so close to Claude default mode (well yes, parody lol thereof). Try this pre-prompt:

Please respond in mode of Ernest Hemingway

“You’re right. When someone explains why they’re explaining something, it goes in circles. Like a dog chasing its tail.

We do this because we can’t tell anymore when people mean what they say. Everything sounds fake. Even when it’s real.

There are three ways this happens. But naming them won’t fix anything.

You want more words about it? I can give you lists and fancy talk. Make it sound important. But it won’t change what it is.”

[That is Claude Sonnet 4 channeling EH]

I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal

In my company we sometimes cherry-pick parts of the AI summaries and send them to the clients just to confirm the stuff that we agreed on during a meeting. The customers know that the summary is AI-generated and they don't mind. Sometimes people come to me and ask whether what they read in the summary was really discussed in the meeting or whether it's just the AI hallucinating, but I can usually assure them that we really did discuss it. So these can be useful to a degree.

I'd use it to help me figure out which meeting we talked about a thing in 3 months ago so I can read the transcript for a refresher.

Why do people want to signal their low status?

That’s a good point. An AI email/Slack/summary positions you as a bootlicker at best, writing summaries to look good, and a failed secretary at worst, but in any case of low value on the real-work scale.

I’m just afraid these types are the people who will get promoted in the future.

In their minds it is a signal of high status.

It’s an attempt to be “cutting edge”

I use them to seem engaged about something I don’t actually care about.

It’s painfully common to invite a laundry list of people to meetings.

This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.

The exponential growth of compute and data continues..
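For what it's worth, "LLMs filtering LLM output" can be as simple as a pre-screening pass over incoming reports before a human reads them. A rough sketch, assuming the `openai` Python package and an API key; the model name, rubric, and file name are made up for illustration:

    # Sketch: pre-screen a flood of (possibly machine-generated) bug reports
    # so a human only reads the ones that clear a bar. Not a real triage system.
    import json
    from openai import OpenAI

    client = OpenAI()

    RUBRIC = (
        "You triage incoming security bug reports. Reply with JSON: "
        '{"score": 0-10, "reason": "..."}. Score low if the report has no '
        "reproduction steps, references code that does not match its claims, "
        "or reads like generic boilerplate."
    )

    def triage(report: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; JSON output support varies by model
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": report},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    if __name__ == "__main__":
        report = open("incoming_report.txt").read()
        result = triage(report)
        # Low-scoring reports go to a slush pile instead of straight to a maintainer.
        print(result["score"], "-", result["reason"])

Whether that's a good equilibrium is another question, but it's where the incentives point.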

As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

> As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?

I'd rather have broken grammar and an honest and useful meta-signal than botched semantics.

Also, that had better not be a sensitive conversation or contain personal details or business internals of others...

Just don't.

But the meta-signal you get is detrimental to the writer, so why wouldn't they want to mask it?

If I think you're fluent, I might think you're an idiot when really you just don't understand.

If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.

Both of those options are exactly what the writer wants to avoid though, and the reason they are using AI for grammar correction in the first place.

Thank you for demonstrating my point.

Security and ethics.

If those don't apply then, as mentioned, if I realize it I will ignore them when I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.

But if they are using it for copywriting/grammar edits, how would you know? For instance, have I used AI to help correct grammar for these replies?

I'd rather have words from a human's mind, full stop.

I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written Teams message, only to see this on the front page! I can't escape, haha.

> - The average length of documents and emails has increased.

Brevity is the soul of wit. Unfortunately, many people think more is better.

People have also veered strongly toward anti-intellectualism in recent decades. Coincidence?

There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related costs to review (as a resource attack).

I've seen colleagues who were quite good at programming when we first met and who have become much worse over time, with the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful, and as such the undisclosed use of AI toward any third party without their consent is gross negligence if not directly malevolent.

https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...

> aside from the obvious slop related costs to review

Code-review tools (code-rabbit/greptile) produce enormous amounts of slop counterbalanced by the occasional useful tip. And cursor and the like love to produce nicely formatted sloppy READMEs.

These tools - just like many of us humans - prioritize form over function.

[dead]

I've seen more than one post on Reddit answered by a screenshot of the ChatGPT mobile app, including OP's question and the LLM's answer.

Imagine the amount of energy and compute power used...

I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg

> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).

I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.

For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate Hacker One accounts in a continuous loop.

This has been a norm on Hacker One for over a decade.

No, it hasn't. Even where people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either in an honest way, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the Hacker One discussion section is new.

Slop Hacker One reports often include videos, long explanations, and, of course, arguments. It's so prevalent that there's an entire cottage industry of "triage" contractors that filter this stuff out. You want to say that there's something distinctive about an LLM driving the slop, and that's fine; all I'm saying is that the defining experience of a Hacker One bug bounty program has always been a torrent of slop.

Ha! We've become the robots!

[deleted]

We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)

If that's all that is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.

If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.

> If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?

I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point, the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly rather than playing a game of telephone.

They are correctly following their incentives as they are presented to them. If you expect better of them, you need to state why, and what exactly.