That's not how this works. LLM serving at scale processes multiple requests in parallel for efficiency. Reduce the parallelism and you can process individual requests faster, but the overall number of tokens processed is lower.
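The latency/throughput tradeoff above can be sketched with a toy model. This is purely illustrative, with made-up numbers: it just assumes that one batched decode step costs a fixed overhead plus a small per-sequence cost, so the GPU amortizes the fixed cost across the batch.

```python
# Toy model of batched LLM decoding. All numbers are hypothetical,
# chosen only to illustrate the shape of the tradeoff.

def step_time_ms(batch_size, base=20.0, per_seq=1.0):
    """Assumed time for one decode step: fixed overhead + per-sequence cost."""
    return base + per_seq * batch_size

def per_request_tokens_per_sec(batch_size):
    """Tokens/sec experienced by a single request in the batch."""
    return 1000.0 / step_time_ms(batch_size)

def aggregate_tokens_per_sec(batch_size):
    """Total tokens/sec across all requests in the batch."""
    return batch_size * per_request_tokens_per_sec(batch_size)

for b in (1, 8, 64):
    print(f"batch={b:3d}  per-request={per_request_tokens_per_sec(b):6.1f} tok/s"
          f"  aggregate={aggregate_tokens_per_sec(b):7.1f} tok/s")
```

Under these assumptions a batch of 1 gives each request the fastest decode, while a batch of 64 gives each request fewer tokens/sec but far higher total throughput — which is why shrinking the batch to speed up individual requests costs aggregate capacity.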
They can now easily decrease the speed for the normal mode, and then users will have to pay more for fast mode.
Do you have any evidence that this is happening? Or is it just a hypothetical threat you're proposing?
These companies aren't operating in a vacuum. Most of their users could change providers quickly if they started degrading their service.
They have contracts with companies, and those companies won't be able to change quickly. By the time those contracts come up for renewal it will already be too late, with their code having become completely unreadable by humans. Individual devs can move quickly, but companies can't.
Are you at all familiar with the architecture of systems like theirs?
The reason people don't jump to your conclusion here (and why you get downvoted) is that for anyone familiar with how this is orchestrated on the backend it's obvious that they don't need to do artificial slowdowns.
I am familiar with the business model. This is a clear indication of what their future plan is.
Also, I was just pointing out the business issue, raising a point that hadn't been made here. I just want people to be more cautious.