It’s interesting that you say that because, beyond the other perspectives on this kind of thing, something I have come across is accusations of AI text that were not at all clearly AI, and where the accusation seemed to be simply a coping mechanism: a way to deflect or evade having to face new information or a reality that ran counter to one’s mental model or framework.

I think of that recent situation where a video showed two black bags supposedly being thrown out of a White House window. I don’t really care enough to find out whether that video was real, but I did find it interesting that Trump dismissed it as AI after barely glancing at it. Real or not, his immediate “that’s AI” response seems to me like a rather new form of lie, a kind of blame-shifting onto AI.

I would argue that, as stupid and meaningless as that kind of example is, a better response would have been something like “we will look into it,” and then moving on. But it also feels like blaming AI for innocuous things preconditions the public to accept denial and gaslighting on other, more important things. For example: claiming that Israel raining down bombs on civilians in Gaza and mass murdering probably hundreds of thousands of innocent people, in what looks like the start of the Terminator wars, is merely a figment of your imagination, because you will be told that AI was used, and AI will scrub that information away so you are never told about it at all. It gets memory-holed in the TelescreenAI.

These types of developments don’t exactly fill me with optimism. Remember how in 1984 the war never ended, only changed, while simultaneously always existing and never actually existing? It feels like we are heading in that direction: from here on out, the gaslighting, especially around all the forms of overt and clandestine war, will be so off the charts that it will likely cause unpredictable mass “hysterias” and various undulations in societies.

Most people have no idea just how much media is used to train humans the way an AI would be trained or controlled. Now throw in ever more believable AI-generated audio and video, to say nothing of the text slop.

I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.

You said a lot of words that I would boil down to this thesis: the value of "truth" is being diluted in real time across our society (via flood-the-zone strategies), and there are powerful vested interests who benefit from that dilution. When I say powerful interests, I don't mean to imply the Illuminati, Freemasons, and massive conspiracies -- Trump is just some angry, senile fool with a nuclear football who, as you said, has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like or wishes weren't true. But corporations benefit too.

Google benefited tremendously from inserting itself into everyone's search habits, and squeezed a lot of ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist: the old search+ads business could spam you with low-quality results (in proto-form, starting with the popup ads of yesteryear), but it didn't necessarily attack your view of "truth" directly. In the future, you may search for a product you want to buy, and instead of being served ads related to that product, you may be served disinformation to sway your view of what is "true".

And sure, negative advertising has always existed (one company bad-mouthing a competitor's products), but those campaigns took time and effort/resources, and once upon a time we had such things as truth-in-advertising laws and libel laws -- but those concepts seem quaint and unlikely to be enforced or supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. In a world where "truth" erodes, instead of a market incentive for someone to profit by being more truthful than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth the way OPEC controls its flow of oil).

Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.