A few weeks ago I was asking an LLM to offer anti-heliocentric arguments, from the perspective of an intelligent scientist. Although it started with what was almost a parody of writing from that period, with some prompting I got it to generate a strong rendition of anti-heliocentric arguments.
(On the other hand, it's very hard to get them to do it for topics that are currently politically charged. Less so for things that aren't in living memory: I've had success getting it to offer the Carthaginian perspective in the Punic Wars.)
That's a fun idea: having it "play pretend" rather than directly asking it for strong anti-heliocentric arguments outright.
It's weird to see which topics it "thinks" are politically charged vs. others. I've noticed some inconsistency depending even on which years you put into your questions. Ask about a year that's one off, and it will sometimes give you a less biased answer about the year you were actually thinking of.
I think the first thing is figuring out exactly what persona you want the LLM to adopt: if you have only a vague idea of the persona, it will default to the laziest one possible that still could be said to satisfy your request. Once that's done, though, it usually works decently, except for those that the LLM detects are politically charged. (The weakness here is that at some point you've defined the persona so strictly that it's ahistorical and more reflective of your own mental model.)
As for the politically charged topics, I more or less self-censor on those topics (which seem pretty easy to anticipate--none of those you listed in your other comment surprise me at all) and don't bother to ask the LLM. Partially out of self-protection (don't want to be flagged as some kind of bad actor), partially because I know the amount of effort put in isn't going to give a strong result.
> The weakness here is that at some point you've defined the persona so strictly that it's ahistorical and more reflective of your own mental model.
That's a good thing to be aware of: using our own bias to make it more "likely" to play pretend. LLMs tend to be on the agreeable side; given the unreliable narrators we humans tend to be, and the fact that these models are trained on us, it does track that the machine would tend toward preference over fact, especially when the fact falls outside the LLM's own "Overton Window".
I've started to care less and less about self-censoring, as I deem it a kind of "use it or lose it" privilege. If you normalize talking about censored/"dangerous" topics in a rational way, more people will come to see them as less of a problem. The alternative is that no one hears anything opposing their view put rationally, only from the extremists or from those who just want to stick it to whatever the current "bad" is in their minds at that moment. Even so, I still omit certain statements on some topics given the platform, but that's more so that I don't get mislabeled by readers. (One of the items in my other comment was intentionally left as vague as possible for this reason.)

As for the LLMs, I usually just leave spicy questions for LLMs I can access through someone else's API (an aggregator) and not a personal account, just to make it a little more difficult to falsely label my activity as that of a bad actor.
What were its arguments? Do you have enough of an understanding of astronomy to know whether it actually made good arguments that are grounded in scientific understanding, or did it just write persuasively in a way that looks convincing to a layman?
> I've had success getting it to offer the Carthaginian perspective in the Punic Wars.
This is not surprising to me. Historians have long studied Carthage, and there are books you can get on the Punic Wars that talk about the state of Carthage leading up to and during the wars (shout out to Richard Miles's "Carthage Must Be Destroyed: The Rise and Fall of an Ancient Civilization"). I would expect an LLM to piggyback off of that existing literature.
Extensive education in physics, so yes.
The most compelling reason at the time to reject heliocentrism was the (lack of) parallax of stars. The only response the heliocentrists had was that the stars must be implausibly far away: millions of times further away than the moon, and they knew the moon itself is already pretty far from us. That was a pretty radical, even insane, idea. There's also the point that the original Copernican heliocentric model had ad hoc epicycles just as the Ptolemaic one did, without any real increase in accuracy.
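To put a rough number on that (my own back-of-the-envelope, not something the historical actors computed in this form): the annual parallax of a star at distance $d$, with the Earth's orbital radius $a = 1\,\mathrm{AU}$ as baseline, is about $p \approx a/d$ in the small-angle limit. Pre-telescopic observers like Tycho could measure positions to roughly an arcminute, and saw no parallax at all, so:

```latex
p \approx \frac{a}{d} \lesssim 1' = \frac{1}{60}\cdot\frac{\pi}{180}\ \mathrm{rad}
  \approx 2.9\times 10^{-4}\ \mathrm{rad}
\quad\Longrightarrow\quad
d \gtrsim \frac{1\,\mathrm{AU}}{2.9\times 10^{-4}} \approx 3.4\times 10^{3}\ \mathrm{AU}.
```

With the Earth-moon distance at about $0.0026\,\mathrm{AU}$, that lower bound already puts the stars over a million lunar distances away; the actual nearest stars turn out to be closer to a hundred million. Either the Earth doesn't move, or the universe is almost unimaginably vast, and the first option looked far more parsimonious at the time.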
Strictly speaking, the breakdown here would be less a lack of understanding of contemporary physics, and more about whether I knew enough about the minutiae of historical astronomers' disputes to know if the LLM was accurately representing them.
> I've had success getting it to offer the Carthaginian perspective in the Punic Wars.
That's honestly one of the funniest things I have read on this site.
Have you tried abliterated models? I'm curious if the current de-censorship methods are effective in that area / at that level.