Regardless of the purported upside, many people in the arts feel betrayed by the commercial interests that built this technology on their work without their consent and threatened by the explicit intent of these vendors to devalue their work by saturating the art and design market with cheap automated substitution.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
I spent most of my career in the open source world, and it doesn't bother me that models are trained on my output. Should I feel differently? There seems to be a kind of ego or emotional attachment to one's output that is more common among artists than among devs. Perhaps abundance vs. scarcity mindsets?
Regarding generative images, it's more of an issue because the effects are different.
Software tends to be a "living" project, so vibe coding with zero software knowledge is not yet fully sustainable for maintaining a project. But with art, the AI just spits out a completed image.
The generated images compete directly with the people the data was sourced from, and there have also been many cases of abuse, e.g. people using AI to impersonate a popular artist and selling commissions under that artist's name.
The copyright situation for generated imagery is also tricky, so people pretending to be artists, only to deliver work that isn't copyrightable, can cause a ton of trouble and financial loss for customers.
Most of these issues don't apply to software in the same way. That's why I was surprised by the backlash to this, as it only touches the software side; I don't see it as threatening artists' work.
When I was dabbling in image generation (~StyleGAN2 era), my vision for image generation models was as a support tool for artists (back then I was generating small character thumbnails to help me brainstorm ideas for drawing), believing that people valued art for the human effort. Even then I would have considered what Anthropic are trying to do here as the preferable way to use AI in art workflows.
It threatens because we aren’t just talking about selling your art. Artists get hired at companies to produce all kinds of work that will now be replaced by AI.
Artists get hired at companies because companies have the technology that made artists' work profitable, starting from book printing (public performance -> book printing -> cinema -> TV -> internet; similarly, drawing -> photo -> digital). In the public-performance/drawing era, artists were mostly poor, low-class rogues. Technology made them what they are now.
They are protesting against natural technology development. To me it looks similar to taxi drivers protesting against Uber (protecting their right to scam tourists).
Did drawing artists protest against photography? Do celebrities protest against photographers selling photos of them taken in public places?
They are right to be afraid, though. What's most probably happening here is that Anthropic is buying the rights to collect user trajectory data, in order to replace Blender users later.
I'm an artist turned CTO. My perspective is really simple - theft is theft. You (not you specifically per se) can sugar coat it however you like, but copying open source codebases/work is different from stealing proprietary/licensed work without permission. It would have been ok if stealing/sharing copyrighted work was heavily normalized, but no, a lot of people have gone to prison for simply pirating DVDs and CDs and now you're telling me it's somehow ok if a corporation does it?
Theft is theft, but learning is not theft.
Neither is fair use, and neither is copyright infringement. But learning most definitely is not theft.
How come? We give IP law / copyright legitimacy, but the more I think about it, the less clear it is to me why. If you draw something you def own the physical drawing, but owning the idea of the drawing during your lifetime feels strange to me. It's also a very recent invention; humans created art before it and will create art after it.
I agree that copyright is foundationally wrong, but the way out has to be through a culture shift of people putting their work in Public Domain. It's not up to a private company to decide everyone else's work is public commons.
The issue is not stealing the idea itself. The issue is stealing the work in its entirety - as is - with all its flaws and character intact. That's what makes art unique, right?
I would think the same goes for codebases too. On a personal note, I wrote a CMS in Elixir from scratch, well before AI was even a thing. It uses a lot of proprietary flows to make it scale, helping it serve millions of requests efficiently. I certainly did not give OpenAI or Microsoft permission to steal my code. And yet they did. Is that not theft of my intellectual property?
> but owning the idea of the drawing during your lifetime feels strange to me
Oh, I wish it was limited to lifetime.
In the USA it's currently lifetime + 70 years, and work for hire is 95 years from publication (or 120 years from creation, whichever is shorter).
> It would have been ok if stealing/sharing copyrighted work was heavily normalized, but no, a lot of people have gone to prison for simply pirating DVDs and CDs and now you're telling me it's somehow ok if a corporation does it?
There is no such thing as "stealing" copyrighted work. Either you have unauthorized access and/or distribution, or you don't.
Unauthorized access to copyrighted work is perfectly legal in a big chunk of the world, including western Europe. Read up on the French tradition of copyright law, particularly the provisions for personal use.
This brings us to how "people have gone to prison for simply pirating DVDs and CDs". The bulk of the cases were focused on mass commercial distribution of verbatim copies of third-party content. I'm talking about DVD-burning factories.
Yes?
For example, you could at least feel that the world is large enough to have people with other needs, drives, and levels of ownership over their work.
You could also consider that this is not an even trade; artists had all their works ingested and didn't get a commensurate stake in OpenAI.
You can consider that you had a choice to share when you contributed to open source. Then imagine how a counter culture artist, who despises corporate culture, must feel to have their work consumed by another rapacious tech entity.
Or you can be the filmmaker whose clients are now showing up with entire ad clips, and then decide they would rather not spend the money on CGI to complete the video - essentially demolishing work overnight.
This isn't to say that there are no artists who are excited by this, or artists who are happy to have their art ingested. Just that the way you phrased your question evoked this answer.
Speaking as someone who works in the industry, I haven't really heard this sentiment. Artists are predominantly hostile to diffusion models, but optimistic about LLMs and their ability to help them write tools and scripts even if they're non-technical.
So basically artists are cool with developers not getting paid, so long as they do...
Yeah, I can understand being upset with their work being stolen to train these models. Anthropic doesn't seem to be working on image/video generation, but they are still training on text-based creative works of questionable sourcing.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).