> "Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss."
Fascinating! Our team has been blending static code analysis and AI for a while now, and we think it's a clever approach for the security use case the Anthropic team is targeting here.
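To make "blending" concrete, here's a minimal sketch of the general idea (not our actual pipeline, and not Anthropic's product; the tool choice, prompt wording, and model id are just illustrative): a conventional static analyzer surfaces candidate findings, and an LLM triages each one for exploitability to cut down the false-positive noise.

```python
import json
import subprocess

import anthropic  # pip install anthropic

# Hypothetical sketch of blending static analysis with an LLM:
# Semgrep proposes candidate issues, a model triages each one.

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env


def scan(path: str) -> list[dict]:
    """Run Semgrep's registry rules and return its JSON findings."""
    out = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["results"]


def triage(finding: dict) -> str:
    """Ask the model whether a single finding looks exploitable."""
    prompt = (
        "Assess this static-analysis finding for exploitability.\n"
        f"Rule: {finding['check_id']}\n"
        f"Message: {finding['extra']['message']}\n"
        f"Code:\n{finding['extra']['lines']}\n"
        "Reply TRUE_POSITIVE or FALSE_POSITIVE with a one-line reason."
    )
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # any current model id works
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


if __name__ == "__main__":
    for f in scan("."):
        print(f["check_id"], "->", triage(f))
```

The division of labor is the point: the analyzer is cheap and exhaustive but noisy, while the model reads the surrounding code in context, which is where a lot of the triage time goes for a human reviewer.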
That quote jumped out at me for a different reason... it's simply a falsehood. Claude Code is built on an LLM, which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. The claim that their tool "reasons about your code the way a human would" is ridiculous because it's plainly wrong: we are not, in fact, running LLMs in our heads.
If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.
That's a fair point, and agreed that human researchers certainly do more than just pattern match. I read it as vision-y marketing fluff rather than a literal claim, but I do appreciate you calling it out explicitly as wrong.
It's all pattern matching. Your brain fools you into believing otherwise. All other humans (well, not absolutely all) join in the delusion, confirming it as fact.
I suppose I should have been more specific: pattern matching in text. We humans do a lot more than process ASCII bytes (or whatever encoding you like) and look for semantically nearby ones, if only because we have sensors that harvest far more varied data than a 1D character stream. Security researchers may get an icky feeling when they notice something or other in a system they're analyzing, and that feeling eventually leads to something exploitable. Or they may beat their head against a problem all day at work on a Friday, go to the bar afterwards, wake up with a terrible hangover Saturday morning, go out to brunch, and then, stepping off the bus on the way to the zoo, have an epiphany strike like a flash, the exploit unfurling before them unbidden like a red carpet. LLMs do precisely none of this.