Knowledge should be free. Unfortunately, OpenAI and most other AI companies are for-profit, so they vacuum up the commons and produce for-profit tooling from it.
If you use the commons to create your model, perhaps you should be obligated to distribute the model for free (or I guess for the cost of distribution) too.
I don't pay OpenAI and I use their model via ChatGPT frequently.
By this logic one shouldn't be able to research for a newspaper article at a library.
And no doubt you understand that this is the current state, not a stable equilibrium.
They'll either go out of business or put their better models behind a paywall while offering only weaker models for free, despite both being trained on the same data.
Journalism and newspapers indeed should not be for-profit, and current for-profit news corporations are doing harm in the pursuit of profit.
> vacuum up the commons
A vacuum removes what it sucks in. The commons are still as available as they ever were, and the AI gives one more avenue of access.
> The commons are still as available as they ever were,
That is false. As a direct consequence of LLMs:
1. The web is increasingly closed to automated scraping, and to a lesser extent to people as well. Owners of websites like Reddit now have a stronger incentive to close off their APIs and sell access.
2. The web is being inundated with unverified LLM output, which poisons the well.
3. More profoundly, as we increasingly base our production on LLM outputs, keeping the human merely "in the loop" rather than in the driver's seat, and sometimes dropping the human from the loop entirely, we build a new commons that is less adapted to a changing world, less original, and of lower quality.
> for-profit
I presume you (as people do) have exploited the knowledge that society has made freely accessible, in principle and largely in practice, to build a profession that is now for-profit: you will charge others for the skills that freely available knowledge has given you.
The "profit" part is not the problem.