Why the AI hate? How is it different from sharing your knowledge with another individual or writing a book to share it?

> AI companies are not paying anyone for that piece of information

So? For the vast majority of human existence, paying for content was not a thing, just like paying for air isn't. The copyright model you are used to may just be too forced. Many countries have no moral qualms about "pirating" Windows and other software or games (which they couldn't afford to purchase anyway). There's no inherent morality or entitlement to an author receiving payment for everything they "create" (to wit, Bill Gates had to write an open letter to the Homebrew Computer Club to make the case for this, showing it was hardly the default and natural viewpoint). It's just a legal/social contract to achieve specific goals for society. Frankly, the wheels of copyright have been falling off since the dawn of the Internet, not LLMs.

It's different because the AI model will then automate the use of that knowledge, which for most people on this forum is how they make their livelihood. If OpenAI were making robots to replace plumbers, I wouldn't be surprised when plumbers said "we should really stop giving free advice and training to these robots." It's in the worker's best interest to avoid being undercut by an automated system that can only be built with the worker's free labor. And it's in the company's interest to take as much free labor output (e.g. knowledge) as possible to automate a process it can profit from.

> plumbers

I have received free advice that reduced my future need for such actual plumbers (and mechanics and others, for that matter).

> we should really stop giving free advice and training to these robots

People on this forum routinely give free advice and teach students, friends, potential competitors, and actual competitors. Robots? Many also advocate for immigration and outsourcing, presumably because they make the calculus that it is net beneficial in some scenarios. People on this forum contribute to an entire ecosystem of free software, on top of which two kids can and have built $100 billion companies that use all such technology freely and without cost. Let's ban it all?

Sure, I totally get it if you want to make an individual choice for yourself to keep a secret sauce, not share your code, or put stuff behind a paywall. That is not the tone and the message here. There is a deep animosity here, advocating that everyone shut down their pipes to AI as if it were some malevolent thing, similar to how Ted Kaczynski saw technology at large.

the AI isn't malevolent (... yet)

but the companies operating it certainly are

they have no concept of consent

they take anything and everything, regardless of copyright or license, with no compensation to the authors

and then use it to directly compete with those they ripped off

not to mention shoving their poor quality generated slop everywhere they can possibly manage, regardless of ethics, consent or potential consequences

children should not be supplied a sycophantic source of partial truths that has been instructed to pretend to be their friend

this is textbook malevolence

> but the companies operating it certainly are

Which ones in particular? Is your belief that all companies are inherently malevolent? If not, why don't you start one that is not? What's stopping you?

All the ones that illegally downloaded books, for one?

> Is your belief that all companies are inherently malevolent? If not, why don't you start one that is not?

Because the one I start will be beaten by the malevolent one if it wields a weapon as powerful as AI. All these arguments of "we shared stuff before, so what's the problem?" miss the point. The point is that this is about the concentration of power. The old sharing was about the distribution of power.

I don't think I need to give a list

> What's stopping you?

from doing what?

I don't want shitty AI slop; why would I start a company intent on generating it?

Companies valued at $300 billion or more are not "another individual," and people are not "sharing" their works with them. The companies are stealing them.

For the majority of interesting output, people have paid for it: art, music, software, journalism. But you know that already and are justifying the industry that pays your bills.

> valued at $300 billion

Irrelevant, really. Invoking this in the argument shows the basis is jealousy. They are clearly valued as such not because they collected all the data and stored it in some database; your local library is not worth $300 billion.

> For the majority of interesting output people have paid for art, music, software, journalism

Absolutely and demonstrably false. Music and art predate copyright by hundreds if not thousands of years.

> But you know that already and are justifying the industry that pays your bills.

Huh, ad hominem much? I find it rich that the whole premise of your argument was that "art, music, software, journalism" is entitled to payment, but suddenly it is a problem when "my industry" (somehow you assume I work in AI) gets paid?

Copyright only became necessary with mass reproduction. The Gutenberg Bible does not yet qualify. The Berne Convention started in 1886, when the problem had become more pressing.

And as I said, art was always paid for. In the case of monarchies, at least their advisers usually had good taste, unlike rich people today.

If you are talking about patronage and other forms of artist compensation, nothing about the economics of that is less robust today than ages ago. The NFT craze of yesteryear is proof. So is the success of OnlyFans. Taylor Swift collects a billion bucks touring the country. AI will not change that, at least not negatively. If anything, it will enrich the customer base and funnel more funds to them. The thing AI does change is internet-wide impression-based and per-copy monetization.

Copyright is not the same as paying for it

Copying something isn't stealing, though.

  Interference with copyright does not easily equate with theft, conversion, or fraud. The infringer trespasses into the copyright owner’s domain, but he does not assume physical control over the copyright nor wholly deprive its owner of its use. Although it is no less unlawful or wrongful for that reason, it is not a theft.
Justice Blackmun, Dowling v. United States, 1985

Absolutely. I am sceptical of AI in many ways, but primarily it is about the AI companies and my lack of trust in them. I find it unfortunate that all of the clearly brilliant engineers working at these companies are too preoccupied with chasing newer and better models in pursuit of the dream of AGI to stop and ask themselves: who are they working for? What happens if they eventually manage to create a model that can replace most or even all human computer work?

Why would anyone think that these companies will contribute to the good of humanity once they are even bigger and more powerful, when they seem to care so little now?

"I find it unfortunate that all of the clearly brilliant engineers working at these companies are to preoccupied with always chasing newer and better model trying to reach the dream of AGI do not stop and ask themselves: who are they working for?"

Have you seen the people who do OpenAI demos? It becomes pretty apparent upon inspection what is driving said people.

> For the vast majority of human existence, paying for content was not a thing

Books were bought and teachers were paid, so no, for most of human history information was not free.

These vigorously held and loudly proclaimed opinions don't matter.

Don't waste the mental energy. They're more interested in performative ignorance and argument than anything productive. It's somewhere between trying to engage Luddites during the Industrial Revolution and having a reasonable discussion with /pol/.

They'd rather cling to what they know than embrace change, or go for rhetorical zingers, and nothing will change that except a collision with reality.

Counterpoint: in my consulting role, I've directly seen well over a billion dollars in failed AI deployments in enterprise environments. They're good at solving narrow problems, but fall apart in problem spaces exceeding roughly thirty concurrent decision points. Just today I got involved in a client's data migration where the agent (Claude) processed test data instead of the intended data identified in the prompt. It went so far as to rename the test files to match the actual source data files and proceeded from there, signalling the all-clear as it did. It wasn't caught until the customer, in a workshop, said, and I quote, "This isn't our fucking data".

In many cases, using LLMs is a crypto-style fad bubble, and people are throwing everything at the wall to see what sticks. There are a ton of grifters and twits as well.

There's the AI industry, which you engaged with, which is more or less a flailing attempt to capitalize on the new technology, and which yields some results but has seen a staggering number of flops.

There's also the AI technology - progress in AI is on an exponential trend, tied to Moore's law, with trillions of dollars of impetus in play, nearly completely decoupled from the market in general. I think we'll see at least a decade of increasingly accelerating progress, with massive world models and language models built on current architectures, but from a technical point of view, I believe we're only a couple of breakthroughs away from a truly general architecture.

The worst case scenario for AI is having to wait on sensor technologies and scans of the human brain. At some point, we will have a good enough, explicable, and analyzed model of human neural function and connectomes to create AI models that operate in the same way that the brain processes information.

We're probably 20 years or less from that point - the reason I say this is that nearly all brain tissue is generalized: you don't have one type of mechanism for sight, another for thinking, another for feeling happy, another for remembering things - it all runs on the same basic substrate. Every time we map out a cubic millimeter of tissue, we make progress towards understanding the algorithms by which we experience and process the world.

On the software side, though, I suspect we're within a few years - one person with a profound insight will be able to make the leap between whatever it is that humans do and the way some AI model processes information, and put that insight into algorithmic form. There might be multiple insights along the path, but it is undeniable that progress is happening, and that the rate of progress is increasing day by day. AI might already be capable enough to make that last little leap without human intervention.

We're in brute-force territory, with massive ChatGPT and Grok models requiring billions of dollars of infrastructure and systems to build and run.

In 20 years, stuff like that will be achievable by an ambitious high school computer lab, or a rich nerd building things for kicks.

You can effectively put all of the text of the internet onto a dozen 2TB microSD cards. Throw in the pirate data sources, like scihub, pirated books, all the text out there, and maybe it'll take 20 of those microSD cards. $5k or less and you can store and own it all.
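For what it's worth, the arithmetic holds up as a back-of-envelope sketch (the per-card price below is my assumption, and prices vary):

  # Rough check of the storage claim above. Assumed: 2 TB per microSD
  # card, ~$250 per card (street prices vary widely).
  CARD_TB = 2
  CARD_PRICE_USD = 250

  web_text_tb = 12 * CARD_TB        # "a dozen" cards: 24 TB of web text
  everything_tb = 20 * CARD_TB      # ~20 cards with scihub, book dumps: 40 TB
  total_cost_usd = 20 * CARD_PRICE_USD

  print(web_text_tb, everything_tb, total_cost_usd)  # 24 40 5000

So the "$5k or less" figure works out as long as 2TB cards sit around that price point.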

A phone in 2045 will have compute, throughput, and storage comparable to a state-of-the-art GPU and server today, and we're likely to optimize in the direction of AI hardware between now and then.
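As a quick doubling-time sketch (the 2.5-year doubling period and today's phone NPU throughput are assumptions on my part, not established figures):

  # If phone compute doubles roughly every 2.5 years, 20 years of
  # doubling gives about a 256x improvement over today's hardware.
  years = 2045 - 2025
  doubling_period_years = 2.5
  factor = 2 ** (years / doubling_period_years)
  print(round(factor))  # 256

  # At an assumed ~40 int8 TOPS for a current flagship phone NPU,
  # 256x lands around 10,000 TOPS, in the ballpark of a server GPU today.

That makes the claim at least arithmetically plausible, if the doubling holds.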

The current AI startup bubble is going to collapse, no doubt, because LLMs aren't the right tool for many jobs, and frontier models keep eating the edge cases and niches people are trying to exploit. We'll see it settle into a handful of big labs, but those big labs will continue chugging along.

I'm not betting on stagnation, though, and truly believe we're going to see a handful of breakthroughs that will radically improve the capabilities of AI in the near term.

I agree with you. People like me are revisionists. Corporations and States are already rushing to build the most advanced AI, and advancement can be measured in months. We crossed the Rubicon many years ago.