So your argument is predicated on the scale of inspired work being the problem?
> They are not human beings and they do not participate in the social systems of human beings the way human beings do
I don't think this adds anything to the argument beyond serving as a reason that analogies with humans can't be used to compare the specific concept of inspired works, and I don't think that holds up.
Whether algorithms participate in social systems has nothing to do with whether inspired works have a moral claim to existence for some. The fact that your ethical system weighs the biological classification of the originator of an inspired work is something that can't be reconciled into a general argument. I could just as well claim that the prompt engineer is the artist in this case.
> capital concentrates even further into the hands of a few technological elite who make their money off of flouting existing laws
That can be said of the development of any technology. Fear of capital concentration is more a critique of capitalism than of technological development.
A difference in scale (an order of magnitude) is a difference in kind, in every area of life, so yes, scale can be argued to be the problem.
> That can be said of the development of any technology. Fear of capital concentration is more a critique of capitalism than of technological development.
Technology does not exist in a vacuum. All of the utility and relevance of technology to humans is dependent on the social and economic conditions in which that technology is developed and deployed. One cannot possibly critique technology without also critiquing a social system, and typically a critique of technology is precisely a critique about its potential abuses in a given social system. And yes, that's what I'm attempting to do here.
> I don't think this adds anything to the argument beyond serving as a reason that analogies with humans can't be used to compare the specific concept of inspired works, and I don't think that holds up.
This is a fair point. One could argue that an LLM, properly considered, is just another tool in the artist's toolbox. I think a major distinction, though, between an LLM and, say, a paintbrush, a text editor, or Photoshop, is that these tools do not have content baked into them. An LLM is in a different class insofar as it is not simply a tool but is also partially the content.
The use of two different LLMs by the same artist, with the same prompt, will produce different results regardless of the intent of the so-called artist/user. The use of a different paintbrush, by the same artist, with the same pictorial intention may produce slightly different results due to material conditions, but the artist is able to consciously and partially deterministically constrain the result. In the LLM case, the tool itself is already a partial realization of the output, and that output is trained on masses of works by unknown individuals.
I think this is a key difference in the "AI as art tool" case. A traditional tool does not harbor intentionality, or digital information. It may constrain the type of work you can produce with it, but it does not have inherent, specific forms that it produces regardless of user intent. LLMs are a different beast in this sense.
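To make that nondeterminism-across-models point concrete, here is a deliberately toy sketch. The "models" below are just hand-written next-token tables standing in for trained weights; nothing here is a real LLM. The same prompt, decoded by the same fully deterministic greedy rule, still yields different text, because the output is partly a function of the model itself rather than of the user's input alone.

```python
# Two hypothetical "models": hand-written next-token tables standing in
# for trained weights. Same interface, different internals.
MODEL_A = {"the": "cat", "cat": "sat", "sat": "on", "on": "a", "a": "mat"}
MODEL_B = {"the": "sun", "sun": "rose", "rose": "over", "over": "a", "a": "hill"}

def generate(model: dict[str, str], prompt: str, length: int = 4) -> str:
    """Greedy decoding: repeatedly look up the next token for the last one."""
    tokens = [prompt]
    for _ in range(length):
        tokens.append(model[tokens[-1]])
    return " ".join(tokens)

# Identical prompt, identical (deterministic) decoding rule:
print(generate(MODEL_A, "the"))  # the cat sat on a
print(generate(MODEL_B, "the"))  # the sun rose over a
```

The divergence comes entirely from what is baked into each table, which is the paintbrush-versus-LLM distinction in miniature.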
Law is a realization of the societal values we want to uphold. Just as we can't in principle claim that training LLMs on scores of existing works is wrong solely due to the technical function of LLMs, we cannot claim that this process shouldn't be subject to constraints and laws due to the technical function of LLMs and/or human beings, which is precisely what the arguments by analogy try to do. They boil down to "well, it can't be illegal since humans basically do the same thing," a hyper-reductive viewpoint that ignores both the complexity and the novelty of the situation, as well as the role of law in shaping willful societal structure rather than merely "adhering" to natural facts.
> They are not human beings and they do not participate in the social systems of human beings the way human beings do.
Your original quote was not about the impact of the technology; it was disparaging the algorithmic source of the inspired work (by saying it does not participate in social systems the way humans do).
> I think a major distinction, though, between an LLM and, say, a paintbrush, a text editor, or Photoshop, is that these tools do not have content baked into them
LLMs, despite being able to reproduce content in cases of overtraining, do not store the content they are trained on. Also, the usage of "content" here is ambiguous, so I assumed you meant the storage of training data.
To me, the content of an LLM is its algorithm and weights. If the weights can reproduce large swaths of content to a verifiable metric of closeness (and to an extent that's covered by current law), I can understand the desire to legally enforce current policies. The problem I have is with the frequent argument to ban generative algorithms altogether.
> The use of a different paintbrush, by the same artist, with the same pictorial intention may produce slightly different results due to material conditions, but the artist is able to consciously and partially deterministically constrain the result.
I would counter this by saying that the prompts constrain the result. How deterministically depends on how well one understands the semantic meaning of the weights and what the model was trained on. Also, as a disclaimer, I don't think that makes prompts proprietary (for various reasons).
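As a toy illustration of the "how deterministically" part: a prompt pins down a distribution over next tokens, and the sampling temperature decides how tightly that distribution constrains the actual output. The logits below are hypothetical stand-ins for what a model's weights would assign after some prompt; this is a sketch of standard softmax sampling, not any particular model's API.

```python
import math
import random

# Hypothetical next-token scores for one fixed prompt.
LOGITS = {"mat": 2.0, "rug": 1.0, "moon": 0.2}

def sample(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Softmax sampling: temperature 0 is greedy (deterministic),
    higher temperatures spread probability across more tokens."""
    if temperature == 0:
        return max(logits, key=logits.get)
    weights = [math.exp(v / temperature) for v in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]

rng = random.Random(42)
print(sample(LOGITS, 0.0, rng))                        # always "mat"
print({sample(LOGITS, 2.0, rng) for _ in range(50)})   # several distinct tokens appear
```

So the prompt constrains the result without fully determining it, and how much slack remains is itself a tunable property of the tool.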
> I think this is a key difference in the "AI as art tool" case. A traditional tool does not harbor intentionality, or digital information
Assigning "intent" to the algorithm is, in my opinion, an anthropomorphism; algorithms don't have any intent.
I do agree with your last paragraph, though: one individual's (or even a group's) feelings don't make something legal or illegal. I can make a moral claim as to why I don't think it should be subject to constraints and laws, but of course that doesn't change what the law actually is.
The analogies are trying to make this appeal in an effort to influence those who would make the laws overly restrictive. There are many laws that don't make sense, and logic can't change their enforcement. The idea is to make a logical appeal to those who may have inconsistencies in their value system, to try to prevent more nonsensical laws from being developed.