That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
Given how much 3D content creation already relies on software and other AI/computer-vision improvements, it's weird to decide that the algorithm itself is unallowable.
Do you have any idea how hard it already is to make a living in a creative field?
This is a very surface-level analysis.
AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, whether directly or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?
This sounds extreme until you realize they are under threat of losing their livelihood for good.
They are right not to accept your inevitability point without a fight. This is a human thing that can be fought; revolutions have happened and will continue to happen.
I don't necessarily agree with this but I do understand it.
> I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
"If you ignore their biggest, their primary, concern, their other concerns seem almost trivial".
I literally said I understand if the training data sourcing is their primary concern.
He meant that that's not the primary concern. The sourcing of the data is a red herring; they care about losing their ability to make a living doing the thing that they love, which is so central to their identity.
I'm not sure how to parse your statement... I don't think there'd be much care for (or need for) the UX change if it weren't for the whole ideological/valid fear about training AI on creative works. But it has been a long day, so I apologize.
I've been all over the place with my thoughts, so it's fair for you to be unsure of how to parse what I said. When making my initial post, I was thinking "this is a coding model, it isn't an image/3d model generation model, so why do they care?". I further interpreted make3 as saying that 3d artists were opposed to AI in general because they view any AI use as trending towards taking away their jobs.
So, what I meant when I said '... otherwise ...' wasn't an attempt to dismiss the data sourcing concern. It was more like "I understand if the data sourcing is the concern, but you (make3) seem to be saying it's about the use of AI in general (i.e. even if, hypothetically, an ethically sourced training dataset were used for a model), which feels like a weird restriction to me". That was when I added the edit to my initial post.