You aren’t specifying your point of comparison. A nightmare relative to what? You might be saying a nightmare relative to what we have now. Are you?
We once assumed text was generated exclusively by humans, but that assumption no longer holds.
I generally reject arguments premised on some status quo simply continuing.
Why? I’ll give two responses, which are similar but use different language.
1. There is a fallacy in which people compare a proposed future state to the present state. The correct comparison is between two future states, because the present isn’t one of the options; you don’t get to go back in time.
2. The “status quo” isn’t necessarily a stable equilibrium. The state of things now is neither special nor guaranteed to persist.
I’m now inclined to ask for a supporting model (not just a single rationale) behind any prediction, even one that seems like common sense. Common sense can be a major blind spot.
> You aren’t specifying your point of comparison. A nightmare relative to what? You might be saying a nightmare relative to what we have now. Are you?
Very fair point.
And no, it’s less about the status quo and more about AI being the default. This proposal seems problematic to me on its face for too many reasons; the following questions highlight just a few of them:
- How exactly would “human creators [applying] their own digital signatures to the original pieces they created” work for creators who have already passed away? (The signing step itself is sketched after this list.)
- How fair exactly would it be to impose such a requirement when large portions of the world’s creators (especially in underdeveloped areas) would likely not be able to access and use the necessary software?
- How exactly do anonymous and pseudonymous creators survive such a requirement?
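
To be clear, the signing step itself is the easy part. Here is a minimal sketch, assuming an Ed25519 keypair and Python’s `cryptography` package (my choice of mechanism and library, not anything the proposal specifies). The hard parts are exactly what the questions above point at: binding a key to a real human identity, distributing keys to creators who lack the tooling, and handling the dead, the anonymous, and the pseudonymous.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical creator keypair. Generating it is trivial; binding it to a
# real human identity is the hard, unaddressed part of the proposal.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

work = "An original piece of writing.".encode("utf-8")

# The creator signs the bytes of the work once, at publication time.
signature = private_key.sign(work)

# Anyone holding the public key can later check that the work is unmodified
# and was signed by whoever controls the corresponding private key.
try:
    public_key.verify(signature, work)
    print("signature verifies")
except InvalidSignature:
    print("signature does not verify")
```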