>If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time.

As a person who works for a company with 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people want much faster than a company with 1k, 5k, or 25k people.

Bigger companies are constrained by internal processes, piles of existing stuff, the inability to hire at the scale they need, and a much larger required context. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company, along with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.

The point wasn't about the number of people; the point was that a company which employs that many people has enough money to convert into leverage against you.

Besides that, your whole argument hinges on large companies being inflexible, inefficient, and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course, at that point the only employees will be manual workers, because actual AI will be much better and cheaper at everything than every human, except those things where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done by AI instead of humans; the human will only sign off to satisfy legal requirements, just as today you technically enter a contract with a company representative who isn't even in the room when you talk to the negotiator.

Large companies are often inflexible and inefficient as a matter of deliberate strategy. I've found myself in scenarios where we have a complete software artifact that a smaller company would launch and find successful, but we can't launch it because we have to satisfy some expectation we've set, or do a complex integration with some important other system of ours.

A lesson from gamedev is that players will deliberately restrict themselves - sometimes to make the game more fun or challenging, sometimes to appeal to their own aesthetic principles.

If/when superhuman AI is achieved, those limitations will all go away. An owner will just give it money and control and tell it to optimize for more money or political power or whatever he wants.

That's a much scarier future than a paperclip maximizer, because it's much closer and doesn't require a complete takeover first; it'll just be business as usual, except somehow more sociopathic.