That quote jumped out at me for a different reason... it's simply a falsehood. Claude Code is built on an LLM, which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. The claim that their tool "reasons about your code the way a human would" is ridiculous because it's plainly wrong--we are not, in fact, running LLMs in our heads.

If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.

That's a fair point, and agreed that human researchers certainly do more than just pattern match. I took it as sort of vision-y fluff rather than a literal claim, but I do appreciate you calling it out explicitly as wrong.

It's all pattern matching. Your brain fools you into believing otherwise. All other humans (well, not absolutely all) join in the delusion, confirming it as fact.

I suppose I should have been more specific--pattern matching in text. We humans do a lot more than process ASCII bytes (or whatever encoding you like) and look for semantically nearby ones, if only because we have sensors that harvest far more varied data than a 1D character stream. Security researchers may get an icky feeling when they notice something or other in a system they're analyzing, which eventually leads to something exploitable. Or they may beat their heads against a problem all day at work on a Friday, go to the bar afterwards, wake up with a terrible hangover Saturday morning, go out to brunch, and, while stepping off the bus on the way to the zoo after brunch, an epiphany strikes like a flash and the exploit unfurls before them unbidden like a red carpet. LLMs do precisely none of this. And then we can get into their deficiencies: incapable of metacognition, incapable of memory, incapable of reasoning (despite the marketing jargon), incapable of determining factual accuracy, incapable of estimating uncertainty, ...
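
To be concrete about what I mean by "semantically nearby": here's a toy sketch. The vectors are made up for illustration--real models learn embeddings with thousands of dimensions from text statistics, and none of this is anyone's actual product code.

    # Toy illustration of "semantically nearby" tokens via embedding similarity.
    # The vectors are invented for the example; real LLM embeddings are learned.
    import numpy as np

    emb = {
        "buffer":   np.array([0.9, 0.1, 0.3, 0.0]),
        "overflow": np.array([0.8, 0.2, 0.4, 0.1]),
        "banana":   np.array([0.0, 0.9, 0.1, 0.8]),
    }

    def cosine(a, b):
        # Cosine similarity: ~1.0 means same direction, ~0 means unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb["buffer"], emb["overflow"]))  # high: "nearby" in meaning
    print(cosine(emb["buffer"], emb["banana"]))    # low: not nearby

That nearest-in-meaning lookup is the whole game for an LLM; everything I listed above (the icky feeling, the bus-stop epiphany) lives outside it.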

I won't argue whether their "human-like" marketing is dumb, but I will argue that whatever LLMs are doing is plenty sufficient to find the vast majority of vulnerabilities. Don't tell my employer I said that, though.

That's awesome, and I'd love to see a whole bunch of data backing it up. If I were in a position to buy a product to do vuln scanning and somebody showed me convincing evidence that this machine does the job... you've got a deal. I can't imagine why they didn't do that, if indeed it works.