Are there available numbers to support this? Software engineering in the U.S. is well-compensated. $200/mo is a small amount to pay if it makes a big difference in productivity.
Which raises the question: if the productivity gains are realized by the employer, shouldn't the employer be paying for this subscription?
My day job is in talks to do that. I'm partly responsible for that decision, and I'm using my personal $200/mo plan to test the idea.
My assessment so far is that it is well worth it, but only if you're invested in using the tool correctly. It can cause as much harm as it can boost productivity, and I'm quite fearful of how we'll handle this at the day job.
I also think it's worth saying that, imo, this is a very different fear from the one driving "butts in seats" arguments. I.e. I'm not worried that $Company won't get its value out of the engineer because the bot is doing the work for them. I'm concerned that the engineer will use the tool poorly and create more work for reviewers who have to wade through high-LOC changes.
Reviews are difficult and "AI" provides a quick path to slop. I've found my $200 well worth it, but the #1 difficulty I've had is not getting features to work, it's getting the output to be scalable, maintainable code.
Sidenote: one of the things I've found most productive is deterministic tooling wrapping the LLM. E.g. a robust linter like Rust's Clippy, set to run automatically after Claude Code edits (via hooks), helps bend the LLM away from many bad patterns. It's far from perfect of course, but it's the thing I think we need most at the moment: determinism around the spaghetti-chaos-monkeys.
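For concreteness, here's a minimal sketch of the kind of post-edit check I mean, assuming a Rust project. This is illustrative, not the exact hook wiring; the script name and the exit-code convention are assumptions.

```python
#!/usr/bin/env python3
"""Illustrative post-edit check: run Clippy after the agent touches Rust code.

Not the actual Claude Code hook config; just the deterministic check the hook
would invoke. Paths and the non-zero-exit convention are assumptions.
"""
import subprocess
import sys


def main() -> int:
    # Lint the whole workspace and treat every warning as an error, so sloppy
    # patterns surface immediately instead of piling up for human review.
    result = subprocess.run(
        ["cargo", "clippy", "--all-targets", "--", "-D", "warnings"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Echo the lint output so the agent (or a human) sees exactly what to fix.
        sys.stderr.write(result.stdout + result.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```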
Yes, but that doesn't mean they aren't finding real value
The challenge with the bubble/not bubble framing is the question of long term value.
If the labs stopped spending money today, they would recoup their costs. Quickly.
There are possible risks (could prices go to zero because of a loss leader?), but I think Anthropic and OpenAI are both sufficiently differentiated that they would be profitable/extremely successful companies by all accounts if they stopped spending today.
So the question is: at what point does any of this stop being true?
> I think Anthropic and OpenAI are both sufficiently differentiated that they would be profitable/extremely successful companies by all accounts if they stopped spending today.
Maybe. But that would probably be temporary. The market is sufficiently dynamic that any advantage they have right now probably isn't stable or defensible longer term. Hence the need to keep spending. But what do I know? I'm not a VC.
A very productive minority.
Are there studies to show those paying $200/month to openai/claude are more productive?
Anecdotally, I can take on and complete the side projects I've always wanted to do but didn't, due to the amount of yak shaving involved or unfamiliarity with parts of the stack. It's the difference between "hey wouldn't it be cool to have a Monte Carlo simulator for retirement planning with multidimensional search for the safe withdrawal rate depending on savings rate, age of retirement, and other assumptions" and doing it in an afternoon with some prompts.
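For the curious, here's a toy sketch of the core of that kind of simulator: a one-dimensional version of the search, with invented return assumptions (5% real mean, 12% stdev) purely for illustration; the real project sweeps more dimensions.

```python
import random


def simulate_success_rate(withdrawal_rate: float, years: int = 40,
                          trials: int = 2000, start_balance: float = 1_000_000,
                          mean_return: float = 0.05, stdev: float = 0.12) -> float:
    """Fraction of Monte Carlo trials where the portfolio survives `years`."""
    annual_spend = start_balance * withdrawal_rate
    survived = 0
    for _ in range(trials):
        balance = start_balance
        for _ in range(years):
            # Apply a random annual return, then withdraw living expenses.
            balance = balance * (1 + random.gauss(mean_return, stdev)) - annual_spend
            if balance <= 0:
                break
        else:
            survived += 1
    return survived / trials


def safe_withdrawal_rate(target_success: float = 0.95) -> float:
    """Binary-search the highest withdrawal rate meeting the target success rate."""
    lo, hi = 0.0, 0.10
    for _ in range(20):
        mid = (lo + hi) / 2
        if simulate_success_rate(mid) >= target_success:
            lo = mid  # still safe enough, try withdrawing more
        else:
            hi = mid  # too risky, back off
    return lo


if __name__ == "__main__":
    print(f"Estimated SWR: {safe_withdrawal_rate():.2%}")
```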
Out of curiosity, how complex are these side projects? My experience is that Claude Code can absolutely nail simple apps. But as the complexity increases, it seems to lose the ability to work through things unless I constantly burn tokens reminding it of the patterns it needs to follow. At the very least, that diminishes the enjoyment of it.
Simple apps are the majority of use cases though - to me this feels like what programming/using a computer should have been all along: if I want to do something I'm curious about, I just try it with Claude, whereas in the past I'd mostly be too lazy/tired to program after hours in my free time (even though my programming ability exceeds Claude's).
I work at an Amazon subsidiary, so I kinda have unlimited GPU budgets. I agree with the sibling comments: I'm working on 5 side projects I've wanted to do as a framework lead for 7 years. I do them in my meetings. None of them take production traffic from customers; they're all nice-to-haves for developers. These tools have dropped the cost of building them massively. It remains to be seen whether they'll do the same for maintaining them, or for spinning back up on them later. But given that AI built several of them in a few hours, I'm less worried about that cost than I was a year ago (when I wasn't building them at all).
It's subjective, but the high monthly fee would suggest so. At the very least, they're getting an experience that those without are not.
Have we seen any examples of any of these companies turning a profit yet even at $200+/mo? My understanding is that most, if not all, are still deeply in the red. Please feel free to correct me (not sarcastic - being genuine).
If that is the case, at some point the music is going to stop and they will either perish or have to crank up their subscription costs.
I am absolutely benefitting from them subsidizing my usage to give me Claude Code at $200/month. However, even if they 10x the price, it's still going to be worth it for me personally.
I'm curious, how are you accounting this? Does the productivity improvement from Claude's product let you get your work done faster, which buys you more free time? Does it earn you additional income, presumably to the tune of somewhere north of $2k/month?
I totally get that, but that's not really what I asked/was driving at. Though I certainly question how many people are willing to spend $2k/mo on this. I think it's pretty hard for most folks to justify basically a mortgage payment for an AI tool.
My napkin math is that I can now accomplish 10x more in a day than I could even one year ago, which means I don't need to hire nearly as many engineers, and I still come out ahead.
I use Claude Code exclusively for the initial version of all new features, then I review and iterate. With the Max plan I can have many of these loops going concurrently in git worktrees. I even built a little script to make the workflow better: http://github.com/jarredkenny/cf
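The basic pattern, roughly, is one worktree and branch per in-flight feature so each agent session runs in isolation. Here's an illustrative sketch of that idea (not the linked script; the branch naming and directory layout are made up):

```python
#!/usr/bin/env python3
"""Illustrative sketch of a worktree-per-task workflow (not the linked `cf` script)."""
import subprocess
import sys
from pathlib import Path


def new_worktree(task: str, base_branch: str = "main") -> Path:
    # Each task gets its own branch and directory, e.g. ../worktrees/add-login,
    # so concurrent agent sessions never touch the main checkout.
    branch = f"feature/{task}"
    path = Path("../worktrees") / task
    path.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(path), base_branch],
        check=True,
    )
    return path


if __name__ == "__main__":
    wt = new_worktree(sys.argv[1])
    print(f"Worktree ready at {wt}; start a Claude Code session there.")
```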
> My napkin math is that I can now accomplish 10x more in a day than I could even one year ago, which means I don't need to hire nearly as many engineers, and I still come out ahead.
The only answer that matters is the one to the question "how much more are you making per month from your $200/mo spend?"
Again, I understand, and I don't doubt you're getting insane value out of it, but if they believed people would spend $2,000 a month for it they would be charging $2,000 a month, not 1/10th of that, which is undoubtedly not generating a profit.
As I said above, I don't think a single AI company is remotely in the black yet. They are driven by speculation and investment, and they need to figure out real quick how they're going to survive when that money dries up. People are not going to fork out $24k a year for these tools. I don't think they'll spend even $10k. People scoff at paying $70+ for internet, a thing we all use basically all the time.
I have found it rather odd that they have mostly targeted individual consumers. These all seem like enterprise solutions that need to charge large sums and target large companies, tbh. My guess is a lot of them think it will get cheaper and easier to provide the same level of service, and that they won't have to make such dramatic increases in their pricing. Time will tell, but I'm skeptical.
The point is that if a minority is prepared to pay $200 per month, then what is the majority prepared to pay? I also don't think this is such an extreme minority; I know multiple people in real life with these kinds of subscriptions.
>if a minority is prepared to pay $200 per month, then what is the majority prepared to pay?
Nothing. Most people will not pay for a chat bot unless it's forced on them by being crammed into software they already have to use.
It's a generic chat LLM product, but ChatGPT now has over 20 million paid subscribers. https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
Forget chat bots, most people will not pay for Software, period.
This is _especially_ true for developers in general, which is very ironic considering our livelihood depends on Software.
Yeah, cause we want to be in control of software, understandably. It's hard to charge for software users have full control of - except for donations. That's #1 reason for me to not use any gen AI at the moment - I'm keeping an eye on when (if) open-weight models become useful on consumer hardware though.
> Forget chat bots, most people will not pay for Software, period.
Apple says their App Store did $53B in "digital goods and services" in the US alone last year. That's not 100% software, but it's definitely more than 0%.