OK, I have no idea who you are, and this isn't personal; I'm responding to a comment, not a person --- but this is an argument that posits that one of the big problems with LLM software is "SOC2 audits". Since SOC2 audits are basically not a meaningful thing, I'm left wondering whether the rest of your argument is similarly poorly supported.

It feels like a dunk to write that. But I genuinely do think there's so much motivated reasoning on both sides of this issue, and one signal of that is when people tip their hands like this.

No offense taken.

I was going to argue that companies got to choose their own auditors, so of course there were some bad ones out there. But looking at the market, it seems like (1) the race to the bottom has gotten ridiculous, and (2) the insurance companies do not currently trust the auditors in any meaningful way. So, yeah, point to you.

Once upon a time, I went through SOC2 audits where the auditors asked lots of questions about Vault and really tried to understand how credentials got handled. Sure, that was exceptional even at the time.

But that still leaves a whole pile of other audits and regulatory frameworks I need to comply with. Probably most of these frameworks will eventually accept "The code was written by an LLM and reviewed by an actual programmer." I am less certain that you'll be able to get away with vibe coding regulated systems any time soon.

SOC2 has never been about software resilience. You can create a set of attestations that will require you to present evidence to your auditors (who are ~accountants and will not know what the dotted quads of an IP address mean) about software quality, but there is no reason to do that and most organizations don't. SOC2 cares a great deal more about access management (in the "plotted on a spreadsheet" sense) than it does about vulnerabilities.

My thing here is: you want to summon some kind of deus ex machina reason why the unpredictability (say) of agent-generated software will cause it to fail in the real world, but the concrete one you came up with fails to make that argument, pretty abruptly. Which makes me think the argument is less about the world as it is and more about the world as you'd hope it would be, if that makes sense.

Since when are SOC audits not a meaningful thing?

If SOC audits are driving your development process, you are doing it backwards. And _certainly_ a time is coming when just using the LLM will be SOC compliant.

I’d think any company that's big enough, or working in certain markets, to have a Compliance Officer cares about this; regulations are a legitimate business risk, and software integration contracts have security control compliance requirements which very much impact the SDLC.

Would you have the same reaction to requiring approval for a production deployment? That’s driving the development process too.

---

Also, jfc, I need to cool it with the buzzwords; sorry, I just got home from "talk like this all day" $job.

SOC2 is generally regarded as a joke and has in fact almost nothing to do with software resilience even on its own terms.

A joke or not, a lot of organizations take SOC compliance and auditing seriously. Responding to someone requiring it with “who cares, the accountants doing the audits don't know anything anyway” is unlikely to go well.

I'm intimately familiar with SOC2 and I'm telling you it has practically nothing to do with software security and to the extent it does, the story is improved starkly and mechanically by agents. That's an outcome of how superficial SOC2 is, not a statement about how good agent code is.

Of course, the reality is that competent orgs generally exclude virtually all their software from their audit scope, and it would be a mark of incompetence to loop tooling-grade or line-of-business backoffice code into it. But even if you were crazy enough to do that, agents would improve your outcome.

Anybody claiming that SOC2 is a reason agent-based code will falter is talking about the world as they want it to be, not as it is.

Your word for “competent” seems to be my word for “irresponsible”. A failure in that “line-of-business backoffice code” is exactly the sort of thing that'd cause irreparable damage in terms of regulatory compliance (and, you know, the tangible harms those regulations are meant to prevent). An LLM hallucination introducing bugs that make ERP transactions spontaneously disappear or allow users to bypass permissions checks on sensitive documents is the sort of thing that's catastrophic for any business that's not actually just a money laundering front (and hell, even then). Maybe you trust agentic AI to make fewer mistakes than humans, but I sure don't.

Like, I'm trying to avoid hyperbole here, but you're advocating for a wild-west sort of attitude that can, will, and has gotten people severely defrauded or outright injured/killed. And I know you know better than this because you've written at length about what it took to achieve SOC compliance at your current employer.

I believe that if you read what I wrote about that you'll see it's consistent with what I'm saying here.