If they were 'clearly different' we would not have the concept of the CEO fraud attack:
https://www.barclayscorporate.com/insights/fraud-protection/...
That's an attack because trusted and untrusted input goes through the same human brain input pathways, which can't always tell them apart.
Your parent made no claim about all swans being white. So finding a black swan has no effect on their argument.
My parent made a claim that humans have separate pathways for data and instructions and cannot mix them up like LLMs do. Showing that we don't has every effect on refuting their argument.
>>> The principal security problem of LLMs is that there is no architectural boundary between data and control paths.
>> Exactly like human input to output.
> no nothing like that
but actually yes, exactly like that.