> that makes claude code or codex accessible to the average user
That's what they aim Claude Cowork at. Every executive/leader I've shown Claude Cowork to has gone from 'what is AI' to 'vibecoding whole apps' in weeks. Then when Claude is down for an hour, they get visibly angry and don't remember how to do anything pre-Claude :)
I understand the impulse to provide a UI to manage codebases, etc. But my observation is that these people just ask Claude to do whatever it is they need done. Codebase needs managing? They just ask Claude to do it. No idea how to deploy an app? They just ask Claude to do it.
Any app built on top of this stack to 'make it easier' is competing with 'I don't care what's happening, just ask Claude to do it'.
I have seen people just generate large docs with Claude Cowork that they themselves have not scrutinized and couldn't say why or how they're useful. It's just kind of impressive in its volume and polished formatting. And then they dump it in your lap as being helpful
This. The amount of long-winded, unedited docs people think it’s OK to dump on me now is unbelievable.
Organizational self-imposed DoS attacks.
It's ok, you can use AI to summarize the key points. No need to read anymore! /s
I wish. I have to fix all the bs before things get sent out. It’s so exhausting.
> And then they dump it in your lap as being helpful
I've been guilty of this and gotten pushback from my manager: "this feels like homework, cut these options down to 100 words each, max".
Curation and refinement are even more important when you can have genAI generate reams of text.
Seeking outside signals is even more important: talking to customers, looking at real usage data, and more. It's too easy to believe what Claude tells you, even if you say "please argue against this idea", which you always should.
We often see this bizarre workflow where notes, like engineering notes, are converted into long prose using AI, and then the long prose is converted back into short bullet points on the other end for summarization!
It's all fun and games until some high level executive realizes everyone is using it and still demanding the same paycheck.
So what? They produce more for the same money. What's not to like?
> They produce more for the same money.
The word "more" there is doing a lot of work.
What is the "more"? Is it:
- more documents and text or more understanding
- more code or more valuable features
- more things to throw against the wall or more considered experiments
It's way easier to do the first things instead of the second.
I know, I know. It was more a tongue-in-cheek comment on something I assume was irony as well. To be fair to GP, management doesn't get any of this...
Adding is always easier than reducing to the essence.
I find it interesting how much herding agents has in common with being a team lead. There's a constant struggle between too-detailed and too-loose instructions. One difference, though, is that the team learns from you, while with agents it's only you who adapts. I say that because I don't count instructions or anything in the context window as adapting.
Yep, I've received a few powerpoints like that.
I'm using Claude to write large files too, but it's a very iterative process and involves a lot of reading and correcting.
I'm beginning to see this in my industry (consulting). I was at a client site last week, in a room with some heavy hitters from both my side and the client side, but in a casual setting (lunch). Everyone was discussing how they sometimes "cheat" by using genAI to put together decks when one of those out-of-the-blue one-sentence questions that take four hours to answer comes down from the C-suite. They all said they heavily edit the output, but at least it gives them a place to start. I have my doubts, though; I wonder how many times they just take it as gospel and forward the deck on.
To be fair, I've been guilty of this with code. Ask Claude to generate a Python script that takes X as input and produces Y as output, run it, pipe it to more, the output looks OK but I don't check everything, write it to a file, send it on.
We've really reached the point where one person uses AI to create an impressive report based on a few prompts with some keywords, and the receiver uses another AI to summarize the report to a short TL;DR that's almost identical to the input prompts.
This. Creating order from chaos (reducing entropy) is difficult and requires real intelligence. Inflating a small prompt into a wall of text and creating a bunch of entropy in the process is not as useful as it appears.
"Simplicity is the ultimate sophistication". I'm more impressed by a pithy sentence than 100 pages of statistical fluff.
This reminds me of the game of telephone. Eventually, the message gets morphed into something different from what was originally said. Is this really what we want?
I'm a victim of this. AI-generated gibberish that obviously wasn't read through before being sent leaves a very bad taste.
> Then when Claude is down for an hour, they get visibly angry and don't remember how to do anything pre-Claude :)
The drug is scary when everyone is depending on it. I wonder what the future will be like.
The future is perpetually dealing with the fallout from all the vibe coding as the pool of people who'd have a shot at fixing it gets smaller and smaller. Shitty will be the new normal.
I feel like it will be like going back to the 80s, when PCs became the norm and most programmers and hobbyists could code without the need of a University or a Corporation. Thousands of shareware apps you had to navigate, everyone trying to solve the same problems from different angles...
I do agree quality will be missed, and shadow IT will again be a big issue, like it was at the end of the 80s and early 90s.
I imagine a much darker future, where almost every enterprise system known for stability is now unstable.
Planes falling out of the sky, trains crashing into each other, pacemakers downloading updates and freezing.
Coding on 8 and 16 bit home computers still required some skills that most vibe coders certainly lack.
> most programmers and hobbyists could code without the need of a University or a Corporation.
I don't think so. Back then, the pool of people doing such a thing basically self-selected for intelligent, motivated types who were capable of learning on their own. The new "programmers" "programming" via Claude Code are going to be very different from those hobbyists you're talking about.
This is a comically self-absorbed perspective.
Why are people making things with Claude Code if not because they’re motivated?
I think the point is that you had to be deeply curious and more of a "hacker" or "computer nerd" type to be able to figure things out.
But I think the same applies to not just AI but various tools that have abstracted away the complexity of things over the years.
For example, I would imagine the average person deploying some sort of web app or API today knows far less about networking and infrastructure than someone doing it 10 to 20 years ago.
Yes, exactly. I'm reminded of the articles detailing how Gen Z has fewer computer skills than previous generations because computing has become so abstracted -- turn on iPhone, tap button. "What's a directory?" -- files just kind of exist on these devices without any real notion of _where_, as far as the user knows. Stuff like that.
Compare that to say 30ish years ago. If you wanted to do something as simple as play a computer game you had to know how to navigate a command line, know about device drivers, make a boot disk, etc. Users were a whole lot closer to the realities of what makes computing work. And no internet, at least as we know it now. You really had to have a certain mindset to be a developer.
It's a far cry from "hey Claude make an app."
Knowing that genuine, unincentivized creativity is exceedingly rare (especially in the West), you can assume that the answer looks something like a carrot or a stick.
Because it's "easy"?
Because it's "easy" (until they hit a wall)
Once they hit a wall, that is where you find out whether they are motivated or not
> Once they hit a wall, that is where you find out whether they are motivated or not
Yep. That has to happen first.
Eventually there will be an incident with bad software at a hospital or bank that leaves some people dead or broke.
Then regulators will take things seriously.
This is exactly what Uncle Bob predicted in his talk "The Future Of Programming" [0] 10 years ago, way before LLMs.
[0] https://www.youtube.com/watch?v=ecIWPzGEbFc
Which is why the medical device software industry is so heavily regulated after the Therac-25 incident. Oh, wait, it's not.
https://en.wikipedia.org/wiki/Therac-25
What regulators?
> as the pool of people who'd have a shot at fixing it gets smaller and smaller
Sounds like job prospects to me.
> Shitty will be the new normal.
I’ve heard the same from the best devs I’ve known (and some who only thought themselves the best) long before LLMs were ever a thing.
I’m sure others heard the same when JavaScript and Python became near ubiquitous. When PHP emerged. When C supplanted Fortran and COBOL. When these two took over from Assembly. When punch cards went the way of the dodo.
There’s always someone for whom shitty is becoming the new normal. If that makes it a rule, what do we make of that rule?
There are different magnitudes of shitty.
Also, we went from compilers with an IDE that had a debugger, a profiler, and built-in help, that fit on a 3.5" disk and loaded on machines with 640 KiB of RAM (Turbo Pascal), to chat apps and password managers that are hundreds of megabytes and regularly gobble up more than a gigabyte of memory because they ship with their own browser.
Something is lost along the way.
> I’m sure others heard the same when JavaScript and Python became near ubiquitous. When PHP emerged.
You heard right! Most JavaScript and PHP in the world _is_ profoundly shitty. It's taken 20 years of intense research to make JavaScript compilers that are almost good enough to mostly optimize away the design foibles of the language.
To be fair, with how powerful our computers are, it's a pity that Electron apps like Bitwarden and Spotify are so slow and consume so many resources. I do miss the time when a lot of apps were snappy.
In the future I'll be able to argue that, because of LLMs, there's no reason to pick JavaScript, Java, Python, etc. for the sake of the available workforce anymore. Then a technology can be chosen only when it is itself fit for the job.
As you say - "good enough" is always the normal.
Maybe it’s a process. Many of the transitions you mentioned did bring shitty apps (not all of them, the ones replacing tech for tech were mostly ok, the ones democratizing dev did come with a quality drop), but eventually Darwinism will take effect and trim the long tail.
Coding per se is not hard. Proper engineering is. I do hope this change brings a change in focus (people train in algorithms, efficiency, solid development patterns) but I am afraid it won’t be the case.
"With a punchcard at least, I can verify what the input is! Unlike those new 'transistors' that are so unreliable!"
What do you think a transistor is?
I'm working on a possibly-quixotic tool to mitigate the "cognitive debt" from AI-assisted development. Not everybody agrees that this is a problem. Maybe some teams that are only writing specs and reviewing plans still understand their products adequately. If you have an opinion either way, I'd appreciate hearing from you.
I think there are some pretty good ways to understand it now.
When the electricity goes out, (most) people get similarly upset. No electricity means no internet, and all of a sudden everything that people had planned to do can’t be done until the power returns.
Same as anything else. It’ll go down sometimes, people will take a break and chat, then it will come back up.
Like Slack or GitHub or AWS or whatever. It’s almost always a net positive to wait vs do it yourself.
I'm more scared at everyone outsourcing their thinking to a private, for-profit company.
What could possibly go wrong.
Thinking, yes, but also secrets, access and effective control of important services in every country and company worldwide, centralized in the US (or anywhere else) where the NSA can take the driver's seat at any time. "AI" is the ultimate sleeper agent.
I have been saying things to this effect for a few years now, and have literally been laughed at. I feel like that guy who suggested that doctors should wash their hands before operating on patients -- they laughed at him too, before they put him in an asylum. What's going to happen is that everyone who realizes these policies are a mistake is going to quietly retcon their own role in that mistake, while scapegoating everyone they don't like.
Also, I'd bet money that the derived data from the meeting summarizers is being sold to hedge funds to give them a bit of an edge.
> Also, I'd bet money that the derived data from the meeting summarizers is being sold to hedge funds to give them a bit of an edge.
And if it isn't already, you can bet that they're probably about to start.
All those "difficult to program but easy-if-time-consuming-for-human" tasks, will 1000% be farmed out to models at unprecedented scales.
Yeah. I mean, I think (as someone similar to you) the truth is not rewarded because we are in an age where deception is the norm. Or maybe that's how it always was for humans, and we were simply too naive and gullible to notice before?
The incentives reward this kind of behavior. I wonder, then, how to operate in a world that is low on moral values and ethics. Does it mean I have to do the same to have a fair shot? I'd like to think not.
I think the scenario was more: if really everyone depends on Claude, then better hope nothing critical (medical software, aviation, traffic control, ...) breaks while Claude is offline.
The good thing is we've learned this already from cloud. When one AWS region is degraded we all failover to other regions, and then other cloud providers, right? ...right?
At least some of the projects in these industries now specify strict no-AI-use policies in contracts. I participate in a few of these, and it’s becoming a bit of a pain, because all dev tool vendors insist on adding AI features, and if there’s no way to turn them off completely we have to migrate away.
However, the temptation of productivity gains is strong, and few of the customers look into relaxing these rules.
What about when you work at Anthropic?
> The drug is scary when everyone is depending on it. I wonder what the future will be like.
I can't wait for a Hollywood blockbuster that'll pretty much be science non-fiction.
> wonder what the future will be like
Probably "don't do anything to upset AI companies or you will effectively become a handicapped person"
Not that different from life in China: "don't do anything to upset Tencent and AliPay or you will become an outcast"
Or life in the US if you're a content creator: "don't do anything to upset Meta or Youtube or you will not be able to pay your rent"
The future: ToS basically becomes law, and you will be stripped of your own second brain if you violate it or say anything they deem "sensitive"
Our future: https://www.youtube.com/watch?v=rNo5fs1iDrs
Full of security holes
Seems far less scary to me than, say, building an electrical grid in a cold climate, where if it fails for a few days people start to die. Oh wait...
Why would they die in a cold climate? I would expect them to die in a hot climate (no AC: heat stroke; no refrigerator: food poisoning), not in the cold, where they would have wood or gas heating.
Electricity is very predictable and not under control of one or two nations.
Which is more likely when they start vibe-coding grid managers.
It's the same, on steroids.
The same was said about electricity.
Imagine what happens if computers stop working* and you have to go back to pen and paper for a few days.
* ransomware attack, fire in the server room, database HDD crash, car accident takes out the internet connection, ...
>Every executive/leader I've shown Claude Cowork to has gone from 'what is AI' to 'vibecoding whole apps' in weeks.
Do you, and those executives, own the risks associated with that practice? Are those risks actually indemnified?
It's neat that 'anyone can do anything', but if they don't actually know what the risk to the business or third parties is, why is this a good thing, especially in the enterprise, where there are actors who are explicitly looking for this type of environment to exploit?
These are largely friends and peers, so they ultimately own their own risks. But I'm not saying it is good or bad. I'm just telling you what is happening in the real world. Every senior person I know, whether a high tech exec or a solo coffee bean importer, is vibing to some degree. Some will be more successful than others.
I've been working in tech since the late 90s. This is the biggest and most sudden change in company behavior I've ever seen. The only thing that comes close was the web 1.0 world in the 90s where everything suddenly became websites.
That creates tons of risks and opportunities. Good and bad. Maybe a great time to start a security company. But maybe a terrible time to be a small time web app developer when your clients can get 'good enough' in minutes for dollars on their own.
saying "every X i know" in all your comments is a bit ridiculous.
Your comments read like Reddit clickbait. How many of these executive/senior/coffee-bean/whatever people do you even know, and why are you the one enlightening them with Claude Cowork? "Every X I know" sounds like a large sample size. You can make ridiculous claims by prefixing them with "every X I know".
I feel so angry at this LinkedIn speak. So infuriating. I hate that we've accepted these people without any pushback.
Hate it all you want, but it’s a reality in this case. There’s a reason big consulting firms are making a huge pivot to AI consulting. Everyone in the business world is doing this and trying to find value with AI. I’m a CFO and network regularly with other executives, board members who also are board members at other companies, investors, people who see a combined large population of companies, and I’ve not spoken to a single person in the last year who isn’t adopting AI for their own use and who doesn’t also have an AI strategy as a company goal for this year and at least into next. When a trend catches fire like this, the “everyone I know” framing absolutely fits that context.
How many of those people, including yourself, actually understand what the technology is, what the risk factors are relative to your existing contracts/obligations, and how what you are doing with the technology interacts with the aforementioned questions?
I say this as someone who deals with sales/CRO/CFO functions quite regularly. I have to tell everyone that uploading contracts to Claude and/or ChatGPT does not preserve confidentiality, because files are not covered under enterprise ZDRs. [0] [1]
It comes down to 'everyone else is doing it' without an understanding of why, and, past that, of how that applies to the specific business: finding the unique value of AI to an organization in ways that do not touch external networks.
Please give your GC the links below and let them look over your contracts and obligations, to ensure you aren't exposing yourself to risk for no real reason other than saving a couple of seconds on something that an SDR/BDR-level employee could do.
[0] https://code.claude.com/docs/en/zero-data-retention#what-zdr...
[1] https://developers.openai.com/api/docs/guides/your-data#zero...
Most people don’t understand the tech, but they understand it involves moving data into a cloud service like Anthropic and may carry risk of breach. I think people are generally deciding to take that risk. Executives decide to take these kinds of risks all the time. Our GC would inform us of the risk and we would say “thank you for flagging the concern, but let’s proceed anyway.” This is going to vary across companies and industries, of course. Healthcare needs to be careful of HIPAA, and there are PII concerns as well. But generally, everyone feels brazen enough to go forward. I do hear what you’re saying, though; I have had several talks with our GC and they simply can’t keep up with the pace, and the business isn’t so risk-averse that we’d put the brakes on AI due to said risk. That said, we do have many things that eventually get treated as a POC, to later build out an internal AI tool and reduce the risks.
It’s an interesting time.
I am not hating on AI or whatever. I am hating how every interaction now comes in some ridiculous clickbait format, like this "every X I know" type shit.
If it's so obvious that everyone is doing it, then you don't need "every executive I know takes a shit".
Every interaction is now laced with ulterior motives, like OP trying to pitch himself as an AI expert to sell his courses or whatever. He is apparently going around blowing executives' minds with Claude Cowork. So ridiculous.
>But I'm not saying it is good or bad.
Wait, you exposed people to a technology and taught them how to use it, but you are not going to own the implications of that action, without having taught them about the risks or told them how to ensure they don't shoot themselves in the face or violate their duty of care?
Do you understand what you are saying and the implications of that in the real world relative to the insurance contracts that they have?
Your company is associated with HIPAA, you should have a much higher standard than this.
Play the ball, not the man, dude. Hectoring people on the Internet because you're stressed out about something isn't going to magically fix how you feel. Digging into their profile to make it personal is three steps too far.
We are talking about one person's introduction of a technology to other persons, and the implications of that action within the framework of enterprise governance and risk; it is one and the same. If anything, who a person is and their knowledge of the domain and its implications are relevant: someone who is ignorant of the implications may get more grace than someone who has the experience to know better. The passive lack of accountability or responsibility relative to that does matter given the context.
I think the one thing you are not taking into account is that the investors, on average, fundamentally don't care. Scale arbitrage means that small companies are fundamentally about velocity - and if they get sued under regulations that do not pierce the corporate veil, they just fold. And the ones that did not get sued make money for the VC, and figure out later how to be HIPAA etc. compliant. Basically, what I've been seeing over the last 10 years is that VCs do not care about insurance or corporate liability - the sink rate is so high it is irrelevant.
For big corps this is different. But modulo HIPAA, this is why they are so gung-ho about binding arbitration - they are trying to match velocity to some degree, and mostly failing…
VCs and investors are a massive issue, which is ironic saying that here, but once you get into contracts with other businesses, it changes things for the business and the leadership within who do carry liability when things go wrong, especially when they have made attestations.
What we are talking about is the conclusion you leapt to from 20 seconds of looking for evidence to suit a conclusion. Nothing in their comment "These are largely friends and peers, so they ultimately own their own risks" insists these are all people working in or on healthcare. Friends could be ... friends? Like the kind outside of work. And if someone is a peer (again, we have to assume the "at work" part), there isn't much you can do to prevent them from doing what they will. Educating them about trigger safety may be the best thing you can do.
>Every executive/leader I've shown Claude Cowork to has gone from 'what is AI' to 'vibecoding whole apps' in weeks. [0]
I think this is where we have the issue with my tone and approach in my comments. My response was based on the OP stating that the people they were introducing it to were 'executives/leaders' and not 'friends', which has a very different connotation when it comes to information security, liability, responsibility, accountability, and ownership. It was only in their response to my question about risk ownership that they described the persons as friends.
If they had said 'friends' from the very beginning, instead of 'executive/leader', I would not have had the reaction that I did. The reason I brought up HIPAA was because of 'executive/leader', since the duty of care extends to leadership within any organization, especially those involved with healthcare, which they know based on their company.
[0] https://news.ycombinator.com/item?id=48131968
But even your pullquote insists on begging the question. No one said "Every executive/ leader at my place of business who does nothing except work with PII data all day", you presumed it.
>"I’m a CFO and network regularly with other executives, board members who also are board members at other companies, investors, people who see a combined large population of companies"
I have already addressed this elsewhere. [0]
The call to HIPAA wasn't about PII; it was about how knowledge of standards and regulations such as HIPAA, when it comes to application/information/network security, is just baked in. Which is why the passivity of the statement made no sense given the risks/obligations/liability associated with vibe coding applications at the executive level, which someone whose company deals with HIPAA should understand and appreciate.
Never have I said, and please quote me word-for-word otherwise, that what I said applied to "every executive/ leader at my place of business who does nothing except work with PII data all day"; that is a windmill you created yourself.
You can keep tilting at the windmill.
[0] https://news.ycombinator.com/threads?id=Ucalegon#48133230
Stop digging.
I am not digging, I am being consistent.
But I appreciate you trying to police the expression of my deeply held beliefs. Like, nope!
You have to understand that people like you, who keep talking about enterprise governance and risk, should facilitate business users doing these things securely. This should have always been the case, but somehow it has ended up more about restricting than facilitating. Hopefully tools like Claude Code will prove the value-add more easily, changing everything I hate about corporate IT.
I appreciate the feeling but this isn't so much driven by principle but by business risk through contract liability or other liability that exists within whatever place you happen to be doing business.
'Adding value' is a very interesting statement and way to judge the worth of something. Adding value to whom? And if that value-add also causes massive harm, how do we reconcile that? Say you build a brand new app which does all of the things your total addressable market wants, but it also exposes all of the IP of your existing clients; does that mean you will be able to achieve that TAM?
Corp IT does not exist in a vacuum. Understanding the why of that isn't a 'you should just accept this' but more 'how can we make this better and avoid mistakes already made by others'. I will always point to aviation and 'bold text is written in blood' as a great model to understand all of this not as a blocker but, instead, as a building block.
There is no way to facilitate untrained users in the healthcare space to vibe code real applications touching patient data. There is no magic policy, firewall, or "facilitation technique" which can make vibe coded software reliably meet contractual and regulatory obligations with a high degree of security in the healthcare space.
If you care about data privacy, especially your own protected health information, that sentence should give you a lot of comfort.
In a HIPAA environment, people who are sufficiently trained on how to develop regulated software securely are called "software engineers".
In my opinion, agents will replace the majority of the rest of businesses before they are good enough at agentic engineering to autonomously develop software that can safely and reliably manage PHI without a single mistake.
It goes without saying: never trust your PHI to any company who is vibe coding in production.
You guys have jumped to so many conclusions it’s amazing.
You are assuming like 12 things that aren't true in this response.
Explicitly name them then.
What kind of risk do you see?
Depends on what types of apps are being built, what data they touch, and what those apps are exposed to from a network perspective. Ie; all of the fundamentals of information/network security. Generally speaking, most executives do not have an information/network security background but do have privileged access to extremely valuable information, even if an attacker just has access to their email.
> most executives do not have an information/network security background but do have privileged access to extremely valuable information, even if an attacker just has access to their email.
In a properly structured organization (and there are many, whether required by regulations or following best practices), senior executives tend to have need/role-based access to information, just like everyone else in the organization. So they may have access to strategic business information, but not patient records or payroll. They may have access to planning data, but not the financial records of individuals or clients. Etc. etc.
Smaller or newer orgs may not have this compartmentalization, but in general I think the principle holds true for orgs over a certain number of folks in size.
I do not disagree with anything you said.
Generally, the 'privileged' information in an executive's inbox is business information or trust relationships, not a specific user's PII/PHI. I was being terrible at trying to impart that even the most benign-seeming access may have major consequences, even if it is not a total compromise of everything, given the massive scope of 'what could happen' with executives vibe coding applications: something managing their inbox past their EA, or something similarly trivial-seeming.
Right but your Head of HR may have access to the drive with employee PII in it, or your CTO may be able to view your IT team's password manager.
These are 'proper' (sometimes) access controls, but can still be abused. Not from email...but you get the idea.
What risks? You don't even know what they are building, and you start the FUD train.
I found the Microsoft guy!
What does this even mean?
Just going on and on about compliance when you have no idea about the details. It’s a classic example of how IT fails most large orgs.
Compliance isn't required due to a vendor.
Compliance is required by legal obligations under local regulations and by obligations defined through contracts with 3rd parties.
Saying 'found the Microsoft person' expresses a lack of understanding of the domain.
You kind of just proved my point. Sorry, I should not have been joking, but I don't think you have a grasp of what's going on around you.
This is how IT acts in many enterprise orgs. There is absolutely a need for compliance and governance, but unfortunately the people in these roles are typically not technically minded and have little incentive to innovate, so you get these folks really only arguing for their jobs.
Cool story bro.
Do you think the MSFT salesperson, or anyone else with a financial incentive in your innovating, doesn't want you to innovate? They want you on Azure and O365 regardless; they don't care.
Hell, Microsoft will give you $150k [0] of credits to do so.
But keep talking as if you have some magical, unique, special insight that escapes contracts and the law, compared to the people who, sadly, have to deal with reality.
[0] https://www.microsoft.com/en-us/startups
What is your deal about contract law? It’s not some mystical thing. You can get red lines with Anthropic, you can get a DPA with Anthropic. You keep going on and on about governance and contract law on a thread about how Claude Code is pretty useful for nontechnical people.
Risk is always nonzero but you can already today get pretty comfortable with most of these orgs with some customization in the contracts.
Does Anthropic's DPA provide indemnity for code that's produced by the product and any damages associated with security vulnerabilities in that code?
We are talking about vibe-coded applications by executives and the risks associated with that; nothing within a DPA covers it. Please, be my guest: link an Anthropic DPA which includes indemnity for damages associated with the code produced.
Again, you keep showing your lack of understanding of the domain in some really fundamental ways, which shows that you haven't negotiated B2B contracts nor held a position of responsibility where you carry liability.
But keep responding, because this feels more like therapy for you and your feelings about people like me than about the realities of the exposure that comes from executives vibe coding applications.
I concede that I started the thread with a joke, but wow, you really are upset. Let’s take a step back. Apologies again for the joke; it’s just that the entire discussion reads like the non-technical, non-legal advice you get from typical corporate IT.
Each entity and group has to consider the risks. I don’t think anything you’re pointing at is really useful for the discussion at hand, though. There is absolutely a use case for Claude Code/Cowork/Codex and related tools to be used by non-technical folks. There is also a lot of figuring out to do in each of these groups. Unfortunately, IT in most orgs, from what I have seen, has ignored the art of the possible for the last 3 years, and now that we have hit this inflection point they are scrambling to catch up; sadly the incentives are usually not aligned, so they are really only incentivized to not take any risks.
> I concede that I started the thread with a joke but wow you really are upset.
You went further than "a joke."
You continued making aggressive, non-substantive remarks that were out of line.[0]
#1 > you have no idea about the details.
#2 > I don't think you have a grasp of what's going on around you.
#3 > What is your deal about contract law? It’s not some mystical thing.
You wasted everyone's time.
[0] https://news.ycombinator.com/newsguidelines.html
If I am wasting your time, then stop replying with links to the rules. Like I keep saying, you guys are pointing out specific legal questions that only a business can answer and that are not constructive to the main thread. Lots of leaps to conclusions and finger-pointing, which anecdotally aligns with what I have seen in corporate IT.
There is a fundamental difference between non-technical users using Claude, or any other LLM, for whatever reason, and whatever they produce being put into production.
There are significant reasons why an organization would not want to use Cowork, because it does not fall under Anthropic's ZDR [0], which is a huge issue for... anyone dealing with anything sensitive.
What I think this comes down to is that you value velocity regardless of the costs. We will get to see how that plays out; there are going to be a lot of billable hours spent figuring it out.
But none of this means that you have any idea what you are talking about nor do you understand why individuals or organizations act the way that they do.
You are free to do it better. Please do.
[0] https://code.claude.com/docs/en/zero-data-retention#what-zdr...
Again, you’re raising a bunch of issues that don’t matter in this thread and can only be answered by the specific business groups that are trying to utilize tools like Claude Code. They are mostly worthy questions, but you are attacking them very specifically, and honestly I don’t think they’re relevant to a discussion where someone talked about showing people the art of the possible.
So we have moved the goalposts to this point.
I am sorry you feel this way; it does not change the facts of what's being discussed. It's just that you disagree and lacked the initial courage or intellectual capability to express that constructively, so you had to obfuscate by providing nothing of value to the discussion via low-value comments. I get that YOU don't think something, but just because YOU feel something doesn't make it valid, grounded in reason, or worth listening to.
Have a great rest of your day and weekend!
Others pointed it out better, but you jumped to a conclusion in 30 seconds, raising pointed legal and risk asks that don't apply to the thread. Just look at the other threads of conversation where you got massively downvoted. You can capitalize YOU all you want, but my point still stands. Y'all are jumping to oddly specific conclusions that don't matter in this thread. There is an absolutely interesting discussion around risk to be had, but you attacking someone's 30-second paragraph about their anecdote does not open the door.
I get that you lack the intellectual capability and capacity to make the point yourself, which is why you refer to others, without linking, rather than making the point on your own; it's OK. I also understand that your own internal bias and lack of actual ownership/responsibility/liability, which might be tied to the intellectual deficiencies noted up top, keep you from understanding the danger of executives/leaders shipping applications given their access to information.
But you are totally free to build a company where there is no oppressive corporate IT, where there is always an incentive to innovate and grow, you can build that future.
The reason why that will not happen might be contained within the first ten words of the first sentence of my first paragraph, but you can prove me wrong. Let me be your motivation! Your dream should be your reality!
This guy's acting in bad faith. Sorry you got swept into this.
I know. I don't expect them to come up with anything, but it's fun to see how far they will backtrack/move the goalposts and how much they will tie themselves into knots to try to justify their lack of integrity.
Says the “Cool story bro” guy.
My point has been consistent. You jumped to specific conclusions from a 30-second post that adds little to the parent discussion.
> You can get red lines with Anthropic, you can get a DPA with Anthropic.
IMHO,
1. Dismissing attorney-client privilege is reckless
2. and the vast majority of users aren't aware of what "customization in the contracts" is needed to enable autonomous agents or if it's already contractually allowed.
This is still a fair question:
> Do you, and those executives, own the risks associated with that practice? Are those risks actually indemnified?
I think you guys are hitting on very specific issues that would only be constructive in the context of the business group using these tools. There is a discussion to be had, but I don’t really see the point in this thread. I see some folks from more of an IT background pointing fingers instead of engaging with the discussion at hand. Absolutely, groups need to work with their legal representation to figure out an acceptable level of risk. Everything has non-zero risk. But again, none of these specific points really hits on anything for this thread.
> I understand the impulse to provide a UI to manage codebases, etc. […] 'I don't care what's happening, just ask Claude to do it'.
Reading the first part, I was going to say they don’t even care about whether or not there’s a codebase. It doesn’t matter; it could be all gremlins and hamsters in wheels for all they care, and for all they should care. All that matters is the functionality, the value it gives them.
We’re even getting disposable code now. Entire single-use ephemeral web apps, built on the go to enable, visualise, or simplify a specific thing, then thrown away.
Will it all lead to some trouble? Definitely. So did computers, and so did the internet.
Weird times. Fun times.
When I quit my day job and started Rails freelancing a big chunk of my work was from companies with "that tech guy" who had built a database in Microsoft Access that was vital to the department's operations. And then either left the company - or the app had started to fall apart under its own weight.
I would get called in to rewrite it, using a proper database, documented rules and ensure it stayed scalable - and everyone would be happy.
These Access "apps" were abominations from a technical point of view - but they got the job done without having to spend a load of money on off-the-shelf or bespoke software. And the "tech guy" made a valuable contribution to the company. It's only at a certain point that Access started to struggle.
I foresee the exact same thing happening in the near future - except we won't be building the replacement apps ourselves - we'll just know how to give the coding agents well-specified prompts and tell them when they're making a mistake.
I’m at exactly that point where it sounds like you were. I’ve done 3 Access to Rails conversions and I’m hunting for the next one. The one I’m on at the moment is supporting 5 branches over 2 countries and 2 independent machine shops. Even if I can understand what Access is doing under the hood there is no one left to ask why. And I have so many questions. Sit with the users, spec the feature, ground it in whatever data I can find. I don’t think that ever changes for SMEs that take this path (Access or Vibeccess) and need re-writes. I’m also very happy to do them. They are IMO giving me more valuable usage data than any design process ever could.
What is different on this one vs the others is I have Claude to help me data dive and write the boring CRUD parts. I am able to spend so much more time with users testing and getting feedback and just thinking deeply about how to structure things. The quality of what I’m building now has never been higher and I think it’s just because I have more time to spend with it.
My experience with AI has been almost wholly positive, and I wonder if Rails is part of the reason. The patterns and structure are so well established that the agent one-shots most things, and I spend most of my time wrangling view code based on my preferences.
But at least you could basically follow their logic.
I think what a lot of us are concerned about is that the vibe-coded stuff bloats fast. It's so verbose and all over the place, that picking that thing apart will be a huge job, and relying on an AI to pick apart work that an AI already failed to maintain seem like wishful thinking.
It's literally "The AI is failing! Don't worry I'll just use AI to fix the AI!".
Yes, as long as context sizes increase and LLMs improve, at least there's a way out through using AI. But once the progress stops...
Huh? Even if progress somehow stopped, current models are already good enough to help -- and the quality of a given vibe-coded throwaway codebase will be higher the more recently it was created.
The worst I would ever get was "here's our Access database - can you rewrite it". That was utterly useless to me.
What I needed to do was sit with a user (not a manager/the person buying my services) and ask them to show me the different things they did with the software. Then I could write a spec for the actual _feature_ and would only need to look at the existing codebase if they needed data transferring across[1]. I don't see why our new LLM-based future would be any different
[1] Of course this meant I would leave out edge-cases and/or weird quirks of the system - often this was actually a bonus as they were either no longer relevant or worked that way because that was the only way they knew how to do it
> Then when Claude is down for an hour, they get visibly angry
Withdrawal symptoms. We've all been there.
Yeah I'm realizing now how many of you guys work in industries with no data security/protection requirements
Exactly. The tools aren't the rate limiting factor for me. I can automate an entire department right now with Claude but I can't because of regulations and audits. Basically, turning an error prone manual process into a probabilistic process that Claude would do far more accurately in the end than what we do now. The process wouldn't be "repeatable" though by the letter of the regulation so would open the company up to automated regulatory violations and existential fines. The technical issues for me are trivial but the regulations are insurmountable. The bubble is in the TAM. My work is exactly who Claude for Small Business would be aiming at but we can't do anything with these tools because of regulation. That is a huge % of the economy.
For me the much bigger problem is the data (and God knows what else) going to a third party. But yeah the non-repeatability doesn't pass the DoD audits either.
Makes me wonder, though, how likely it is that your field/industry/discipline/company/business gets replaced by some small player who takes the risk, doesn't get caught or deterred early enough, and then either becomes large enough to sway the industry regulation or pays off or otherwise continues to deter enforcement.
Isn't that the Uber model? Isn't that likely where the future goes with this new, uncertain technology that will surely create new, unthought-of verticals?
The procurement and certification processes we go through are basically specifically designed to keep a scrappy startup with the next new idea from ever winning a contract without significant institutional buy-in, for reasons that will probably become clear as other sectors deal with the fallout of the past two fiscal quarters in maintenance costs
There are requirements; they just don't get enforced enough to matter.
> Any app built on top of this stack to 'make it easier' is competing with 'I don't care what's happening, just ask Claude to do it'.
To put it another way, the customers of these frontier models are implicitly being competed against by the model itself.
Haha I can't even trust developers who know the dangers of what they're doing to vibe code responsibly
Executives in what industry out of curiosity?