Since only the first one responds to any of Zitron's content that I've actually read, I'll respond only to that one:
It's not responsive at all to Zitron's point. Zitron's broader contention is that AI tools are not profitable because the cost of AI use is too high for users to justify spending money on the output, given the quality of that output. And furthermore, he argues that this basic fact is being obscured by lots of shell games around numbers to hide the basic cash flow issue. For example, focusing on cost in terms of cost per token rather than cost per task. And finally, there's an implicit assumption that the AI just isn't getting tremendously better, as might be exemplified by... burning twice as many tokens on the task in the hope the quality goes up.
And in that context, the response is "Aha, he admits that there is a knob to trade off cost and quality! Entire argument debunked!" The existence of a cost-quality tradeoff doesn't speak to whether or not that line will intersect the quality-value tradeoff. I grant that a lot turns on how good you think AI is and/or will shortly be, and Zitron is definitely a pessimist there.
Already in your first point you are mixing up two claims Ed also likes to mix up. The funny thing is that these claims are in direct conflict with each other. There is the question of whether people find AI worth paying for given what they get. You seem to think this is in some doubt, meanwhile there are tons of people paying for it, some even begging to be allowed to pay more in order to get more. The labs have revenue growing 20% per month. So I think that version of the point is absurd on its face. (And that's exactly why my point about the cost-quality tradeoff being real is relevant. At least we agree on the relationship between these points.)
Ed doesn’t really make that argument anymore. The more recent form of the point is: yes, clearly people are willing to pay for it, but only because the providers are burning VC money to sell it below cost. If it were sold at a profit, customers would no longer find it worth it. But that’s completely different from what you’re saying. And I also think it’s not true, for a few reasons: mostly that selling near cost is the simplest explanation for the similarity of prices between providers. And recently we have both Altman and Amodei saying their companies are selling inference at a profit.
Ed Zitron: I don’t think OpenAI will become profitable
The link you posted: I think it is very plausible that it will be hard for OpenAI to become profitable
Are you referring to the post where I listed 4 claims and marked one ridiculous, one wrong, one unlikely, and one plausible?
He is not wrong about everything. For example, after Sam Altman said in January that OpenAI would introduce a model picker, Zitron was able to predict in March that OpenAI would introduce a model picker. And he was right about that.
Yes. In this thread about the profitability (or lack thereof) of OpenAI’s business model, I pointed out the part where you appeared to agree with Ed Zitron about the profitability (or lack thereof) of OpenAI’s business model. It seems like all of those posts were pretty clearly motivated by wanting to poke holes in Zitron’s criticism, yet their lack of profitability (which is central to that criticism) is exactly where you declined to push back with any real argument.
Well, yes. He is a journalist, not an analyst. I don't think he is a prophet or anything, nor is he a tech person, so yeah, his details on operations are off. On the money/business side, though, he seems to be the only person willing to publicly poke holes in the whole AI thing.
I'm not an AI hater. I genuinely hope it takes over every single white-collar job that exists. I'm not being sarcastic or hyperbolic. Only then will we be able to re-discuss what society is in a more humane way.