The problem with your thinking here is that we are creating artificial beings now that display and output the same subjectivity.
The argument you present, like many arguments, breaks down when the topic becomes self-referential. It works for other topics, where analyzing subjectivity would be pedantic, as with questions like "why is the sky blue?"
But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don't… but what's suddenly stopping you from applying the same assumption to an LLM? That is the heart of the problem. People are questioning whether the presumption of subjectivity is applicable to LLMs.
Or another way to frame it… what makes humans rise to the level where we can assume their subjectivity is real? What is the mechanism and reasoning behind that? We can no longer simply assume human subjectivity is real because LLMs are now displaying outward behaviors that are indistinguishable from those of humans.
Also, stop relying on the wonderings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.
Haha, hilarious. Heraclitus might be old school, but Wittgenstein and Heidegger, not so much. The state of the art in what might meaningfully be said, proved, or metaphysically challenged has changed little since their time.
At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.
But while I’m here, LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those humans produce when forced to use computers to communicate, but that is only a tiny part of our way of being and our way of potentially expressing subjectivity. For LLMs, though, that textual channel is the totality of how they could express any subjectivity.
One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This fails to capture much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual outputs.
How so? If a person were confined to text only (à la Hawking), would that entitle us to dismiss their subjectivity on the basis of the medium? Also, why can training not be at least analogized to attunement to the prevailing intersubjective perception?