Hate it all you want, but it's the reality in this case. There's a reason the big consulting firms are making a huge pivot to AI consulting: everyone in the business world is trying to find value with AI. I'm a CFO and network regularly with other executives, board members (many of whom also sit on boards at other companies), and investors, people who collectively see a large population of companies. I have not spoken to a single person in the last year who isn't both adopting AI for their own use and carrying an AI strategy as a company goal for this year and at least into next. When a trend catches fire like this, the "everyone I know" framing absolutely fits the context.
How many of those people, yourself included, actually understand what the technology is, what the risk factors are relative to your existing contracts/obligations, and how what you are doing with the technology interacts with those questions?
I say this as someone who deals with sales/CRO/CFO functions quite regularly: I have to tell everyone that uploading contracts to Claude and/or ChatGPT does not preserve confidentiality, because file uploads are not covered under the enterprise zero-data-retention (ZDR) agreements. [0] [1]
It comes down to 'everyone else is doing it' without an understanding of why, and, past that, of how any of it applies to the specific business, which is where you find the unique value of AI to an organization without touching external networks.
Please give your GC the links below and let them look over your contracts and obligations, to make sure you aren't taking on risk for no real reason other than saving a couple of seconds on something an SDR/BDR-level employee could do.
[0] https://code.claude.com/docs/en/zero-data-retention#what-zdr...
[1] https://developers.openai.com/api/docs/guides/your-data#zero...
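For teams that decide to send contract text to a hosted model anyway, one partial mitigation is redacting obvious identifiers before anything leaves your network. A minimal sketch in plain Python (the patterns and the sample party name are illustrative assumptions; real redaction needs legal and compliance review, not a handful of regexes):

```python
import re

# Placeholder patterns for common identifiers. Real-world redaction
# needs far more coverage (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MONEY": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
}

def redact(text: str, party_names: list[str]) -> str:
    """Replace known party names, then pattern-matched identifiers, with tags."""
    for name in party_names:
        text = re.sub(re.escape(name), "[PARTY]", text, flags=re.IGNORECASE)
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

sample = "Acme Corp agrees to pay $1,250,000.00; contact jane@acme.com."
print(redact(sample, ["Acme Corp"]))
# -> [PARTY] agrees to pay [MONEY]; contact [EMAIL].
```

This only reduces exposure; it does nothing about the retention question itself, which is why the GC review above still matters.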
Most people don't understand the tech, but they do understand it involves moving data into a cloud service like Anthropic's, with risk of a breach attached. I think people are generally deciding to accept that risk; executives take these kinds of risks all the time. Our GC would inform us of the risk and we would say, "thank you for flagging the concern, but let's proceed anyway." This will vary by company and industry, of course. Healthcare needs to be careful about HIPAA, and there are PII concerns as well. But generally, everyone feels brazen enough to go forward.

I do hear what you're saying, though. I've had several talks with our GC; they simply can't keep up with the pace, and the business isn't so risk averse that we'd put the brakes on AI over it. That said, many of these uses do eventually get treated as POCs for building out internal AI tools that reduce the risks.
It’s an interesting time.
I am not hating on AI or whatever. I am hating how every interaction now comes in some ridiculous clickbait format, like "every X I know" type shit.
If it's so obvious that everyone is doing it, then you don't need "every executive I know takes a shit".
Every interaction is now laced with ulterior motives, like the OP trying to pitch himself as an AI expert to sell his courses or whatever. He is apparently going around blowing executives' minds with Claude Cowork. So ridiculous.