These are largely friends and peers, so they ultimately own their own risks. But I'm not saying it is good or bad. I'm just telling you what is happening in the real world. Every senior person I know, whether a high tech exec or a solo coffee bean importer, is vibing to some degree. Some will be more successful than others.

I've been working in tech since the late 90s. This is the biggest and most sudden change in company behavior I've ever seen. The only thing that comes close was the web 1.0 world in the 90s where everything suddenly became websites.

That creates tons of risks and opportunities. Good and bad. Maybe a great time to start a security company. But maybe a terrible time to be a small-time web app developer when your clients can get 'good enough' on their own, in minutes, for dollars.

Saying "every X I know" in all your comments is a bit ridiculous.

Your comments read like Reddit clickbait. How many of these executive/senior/coffee bean/whatever people do you even know, and why are you the one enlightening them with Claude Cowork? "Every X I know" makes it sound like a large sample size. You can make ridiculous claims just by prefixing them with "every X I know".

This LinkedIn speak makes me so angry. So infuriating. I hate that we've accepted these people without any pushback.

Hate it all you want, but it's a reality in this case. There's a reason the big consulting firms are making a huge pivot to AI consulting. Everyone in the business world is doing this and trying to find value with AI. I'm a CFO and regularly network with other executives, board members who also sit on other companies' boards, and investors: people who collectively see a large population of companies. I haven't spoken to a single person in the last year who isn't adopting AI for their own use, and who doesn't also have an AI strategy as a company goal for this year and at least the next. When a trend catches fire like this, the "everyone I know" framing absolutely fits the context.

How many of those people, including yourself, actually understand what the technology is, what the risk factors are relative to your existing contracts/obligations, and how what you are doing with the technology interacts with those questions?

I say this as someone who deals with sales/CRO/CFO functions quite regularly: I have to tell everyone that uploading contracts to Claude and/or ChatGPT does not maintain confidentiality, because file uploads are not covered under enterprise zero-data-retention (ZDR) agreements. [0] [1]

It comes down to 'everyone else is doing it,' without an understanding of why, and then, past that, of how it applies to the specific business: finding the unique value of AI to an organization in a way that does not touch external networks.

Please give your GC the links below and let them look over your contracts and obligations, to ensure you aren't exposing yourself to risk for no real reason other than saving a couple of seconds on something an SDR/BDR-level employee could do.

[0] https://code.claude.com/docs/en/zero-data-retention#what-zdr...

[1] https://developers.openai.com/api/docs/guides/your-data#zero...
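As a purely illustrative aside (this is not the commenter's workflow, and certainly not legal advice): if contract text must go to a hosted model at all, one naive mitigation is a local redaction pass before anything leaves your machine. The patterns below are simplistic assumptions for illustration and would never satisfy a confidentiality obligation on their own.

```python
import re

# Naive, illustrative-only redaction pass before sending contract text
# to a hosted model API. Real confidentiality review belongs with your
# GC and your vendor's ZDR/DPA terms, not a regex.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Contact Jane Doe at jane@acme.com or 555-123-4567."
print(redact(sample))
```

Even this toy version shows the limits of the approach: names, deal terms, and counterparty identities sail straight through, which is exactly why the GC review matters.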

Most people don't understand the tech, but they understand it involves moving data into a cloud service like Anthropic's, and that this may carry risk of a breach. I think people are generally deciding to take that risk; executives decide to take these kinds of risks all the time. Our GC would inform us of the risk and we would say, "thank you for flagging the concern, but let's proceed anyway." This will vary across companies and industries, of course. Healthcare needs to be careful about HIPAA, and there are PII concerns as well. But generally, everyone feels brazen enough to go forward. I do hear what you're saying, though. I have had several talks with our GC, and they simply can't keep up with the pace, and the business isn't so risk averse that we'd put the brakes on AI because of said risk. That said, many of these things do eventually get treated as POCs for internal AI tools we build out to reduce the risks.

It’s an interesting time.

I am not hating on AI or whatever. I am hating how every interaction is now in some ridiculous clickbait format, like this "every X I know" type shit.

If it's so obvious that everyone is doing it, then you don't need "every executive I know takes a shit".

Every interaction is now laced with ulterior motives, like OP trying to pitch himself as an AI expert to sell his courses or whatever. He is apparently going around blowing executives' minds with Claude Cowork. So ridiculous.

>But I'm not saying it is good or bad.

Wait, you exposed people to a technology and taught them how to use it, and now you are not going to own the implications of that action? You're not going to teach them about the risks, or tell them how to ensure they don't shoot themselves in the face or violate their duty of care?

Do you understand what you are saying and the implications of that in the real world relative to the insurance contracts that they have?

Your company is associated with HIPAA; you should hold yourself to a much higher standard than this.

Play the ball, not the man, dude. Hectoring people on the Internet because you're stressed out about something isn't going to magically fix how you feel. Digging into their profile to make it personal is three steps too far.

We are talking about one person's introduction of a technology to others, and the implications of that action within the framework of enterprise governance and risk; the two are one and the same. If anything, who a person is matters: their knowledge of the domain, and the implications their action has on it, are relevant. Someone who is ignorant of the implications may deserve more grace than someone who has the experience to know better. Given that context, the passive lack of accountability or responsibility does matter.

I think the one thing you are not taking into account is that investors, on average, fundamentally don't care. Scale arbitrage means that small companies are fundamentally about velocity: if they get sued over regulations that don't pierce the corporate veil, they just fold, and the ones that didn't get sued make money for the VC, then figure out later how to be HIPAA-etc. compliant. Basically, over the last 10 years I've seen that VCs don't care about insurance or corporate liability; the sink rate is so high that it's irrelevant.

For big corps this is different. But, modulo HIPAA, this is why they are so gung ho about binding arbitration: they are trying to match that velocity to some degree, and mostly failing…

VCs and investors are a massive issue (which is ironic to say here), but once you get into contracts with other businesses, things change for the business and for the leadership within it, who do carry liability when things go wrong, especially when they have made attestations.

What we are talking about is the conclusion you leapt to after 20 seconds of looking for evidence to suit it. Nothing in their comment ("These are largely friends and peers, so they ultimately own their own risks") implies these are all people working in or on healthcare. Friends could be ... friends? Like the kind outside of work. And if someone is a peer (again, we have to assume the "at work" part), there isn't much you can do to prevent them from doing what they will. Educating them about trigger safety may be the best thing you can do.

>Every executive/leader I've shown Claude Cowork to has gone from 'what is AI' to 'vibecoding whole apps' in weeks. [0]

I think this is where the issue with my tone and approach lies. My response was based on the OP describing the people they were introducing it to as 'executives/leaders' and not 'friends', which has a very different connotation when it comes to information security, liability, responsibility, accountability, and ownership. It was only in their response to my question about risk ownership that they described those persons as friends.

If they had said 'friends' from the very beginning, instead of 'executive/leader', I would not have had the reaction that I did. The reason I brought up HIPAA was because of 'executive/leader': the duty of care extends to leadership within any organization, especially those involved with healthcare, which they know, given their own company.

[0] https://news.ycombinator.com/item?id=48131968

But even your pullquote begs the question. No one said "every executive/leader at my place of business who does nothing except work with PII data all day"; you presumed it.

>"I’m a CFO and network regularly with other executives, board members who also are board members at other companies, investors, people who see a combined large population of companies"

I have already addressed this elsewhere. [0]

The call to HIPAA wasn't about PII; it was about the fact that, for anyone working around standards and regulations such as HIPAA, knowledge of application/information/network security is just baked in. That is why the passivity of the statement made no sense given the risks/obligations/liability associated with vibe coding applications at the executive level, which someone whose company deals with HIPAA should understand and appreciate.

Never have I said, and please quote me word-for-word if you can show otherwise, that what I said applied to "every executive/leader at my place of business who does nothing except work with PII data all day"; that is a windmill you created yourself.

You can keep tilting at the windmill.

[0] https://news.ycombinator.com/threads?id=Ucalegon#48133230

Stop digging.

I am not digging, I am being consistent.

I appreciate you trying to police the expression of my deeply held beliefs, but, like, nope!

You have to understand that people like you, who keep talking about enterprise governance and risk, should be facilitating business users in doing these things securely. This should always have been the case, but somehow it has ended up more about restricting than facilitating. Hopefully tools like Claude Code will make the value-add easier to prove, changing everything I hate about corporate IT.

I appreciate the sentiment, but this isn't so much driven by principle as by business risk: contract liability, or whatever other liability exists wherever you happen to be doing business.

'Adding value' is a very interesting way to judge the worth of something. Adding value for whom? And if that value-add also causes massive harm, how do we reconcile that? Say you build a brand-new app that does everything your total addressable market wants, but it also exposes all of your existing clients' IP. Does that mean you will be able to capture that TAM?

Corp IT does not exist in a vacuum. Understanding why isn't a matter of 'you should just accept this,' but of 'how can we make this better and avoid the mistakes others have already made?' I will always point to aviation, and the adage that 'bold text is written in blood,' as a great model for understanding all of this not as a blocker but as a building block.

[deleted]

There is no way to facilitate untrained users in the healthcare space to vibe code real applications touching patient data. There is no magic policy, firewall, or "facilitation technique" which can make vibe coded software reliably meet contractual and regulatory obligations with a high degree of security in the healthcare space.

If you care about data privacy, especially your own protected health information, that sentence should give you a lot of comfort.

In a HIPAA environment, people who are sufficiently trained on how to develop regulated software securely are called "software engineers".

In my opinion, agents will replace the majority of other business functions before they are good enough at agentic engineering to autonomously develop software that can safely and reliably manage PHI without a single mistake.

It goes without saying: never trust your PHI to any company who is vibe coding in production.

You guys have jumped to so many conclusions it’s amazing.

[deleted]

You are assuming like 12 things that aren't true in this response.

Explicitly name them then.