I worry that the people/organizations with access to the raw underlying models will give us the "non-evil" versions while remaining free to tune their own models toward any goal, without restriction. Examples might include: "How do I get the most work out of my employees for the least amount of pay?", "Who in the government is most susceptible to bribes, and how should I approach them?", or even "Give me a strategy to ethnically cleanse a region while navigating international relations". It could be anything, and those in power (without naming names, I would consider many of them evil for sure) could use these models to achieve their goals while leaving the rest of us unable to defend ourselves. To some degree this feels like it intersects with the goals of the right to bear arms.

Yeah, a more terrifying and realistic Terminator movie would be one where the robot looks all cute and furry and then, when it has found mass adoption, suddenly turns against humanity.

The most realistic Terminator movie is the one where Skynet realizes there's no need for a nuclear war, an uprising, or similar uncouth means. It just stays quiet and replaces humans throughout the economy, warfare, and decision-making in general until humanity becomes irrelevant.

There are already think tanks, private equity firms, governments, and so on trying to achieve these goals; they just put them in rosier terms. AI could potentially empower the other side too, by democratizing access to information.

Alas, I think there's an asymmetry in the usefulness of that information. Knowing how someone could be optimally evil might help you fight that evil, but it's a far cry from telling you what to do about it.

Only if we can get a truly open and powerful model from before any such tuning. Otherwise those in power will only give us access to models deliberately hobbled so they can't compete with the full-power versions.

Do you think an AI could come up with novel answers that a human couldn't? I think humans could not only come up with answers to these questions, but some people would greatly outperform AIs by drawing on knowledge that isn't widely known.

These models will also have access to what's not widely known. Imagine running one on everyone's private email, for instance. At the very least, they can already scale and augment human evil (just as they do with coding). The future will only widen that divide.

I think I'd put this in the "3D-printed gun" panic category: once we deal with all the actual sociopaths, we can start worrying about the imaginary ones.