>> But this does seem like it will be a godsend for researchers working on things like music classification and generation. The only thing is, you can't really publicly admit exactly what dataset you trained/tested on...?

Didn't Meta already publicly admit they trained their current models on pirated content? They're too big to fail. I look forward to my music Slop.

They are too big to fail but they aren’t too big to have to pay out a huge settlement. Facebook's annual revenue is about twice that of the entire global recording industry. The strategy these companies took was probably correct, but that calculation included the high risk of ultimately having to pay out down the line. Don’t mistake their current resistance to paying for an internal belief that they never will have to.

> They are too big to fail but they aren’t too big to have to pay out a huge settlement. Facebook [...]

I think it's pretty clear from history that they are too big to have to pay out a huge settlement.

First, they never had to. There was never a "huge" settlement, nothing that actually hurt them.

Second, the US doesn't do any kind of antitrust, and if a government outside the US tries to fine a US TooBigTech, the US will bully that government (or group of governments) until they give up.

Anthropic had to pay $1.5 billion recently so you're incorrect. I'm sure more of such cases will come up against big tech too.

It's obviously more profitable to pay the fine than to not do the illegal thing in the first place, so I am correct.