> malicious AI agents might hide bad intent and actions by communicating in a dense, indecipherable way while presenting only normal intent and actions in their natural language output.
You could edit this slightly to extract a pretty decent rule for governance, like so:
> malicious agents might hide bad intent and actions by communicating in a dense, indecipherable way while presenting only normal intent and actions in a natural way
It applies to AI, but also to many other circumstances where the intention is that you are governed, e.g. medical, legal, or financial.
Thanks!
Easier said than done:
• https://en.wikipedia.org/wiki/Cant_(language)
• https://en.wikipedia.org/wiki/Dog_whistle_(politics)
Or even just regional differences, like how British people, upon hearing about "biscuits and gravy" for the first time, picture this: https://thebigandthesmall.com/blog/2019/02/26/biscuits-gravy...
> It applies to AI, but also to many other circumstances where the intention is that you are governed, e.g. medical, legal, or financial.
It may be impossible to avoid in any practical sense, since every speciality has its own jargon. Imagine web developers having to constantly explain why a "child element" has nothing to do with offspring (see the sketch below).
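For anyone outside web development, here's a minimal sketch (TypeScript, assuming a browser DOM) of what "child element" actually means, purely an element nested inside another:

    // "Child element" is structural jargon: the <span> sits inside the <div>.
    const parent = document.createElement("div");
    const child = document.createElement("span");
    parent.appendChild(child);
    console.log(parent.children[0] === child); // true: the span is a "child" of the div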