I understand where you are coming from - in the past I’ve gone so far as to try to write in E-Prime regularly at work. I definitely found that it forced me to think hard about what I was trying to convey, which ultimately improved the vocabulary I was using. I also found it a lot of trouble to maintain consistently.

The thing is, though, that LLMs don’t seem to have any trouble at all sticking to E-Prime!

After a lot of conceptual refinement of the overall idea (minimizing hallucinations by prompt alone), getting the LLM to use E-Prime consistently everywhere turned out to be almost trivial.
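To give a sense of the mechanism, here's a toy sketch only - the instruction wording and model name below are placeholders, not the actual prompt:

```python
# Toy sketch: steering an LLM toward E-Prime through the system prompt alone.
# The instruction wording and model name are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Write every sentence in E-Prime: avoid all forms of the verb 'to be' "
    "(is, are, was, were, be, been, being, am). Describe what things do, "
    "what you observe, or how something appears, instead of asserting what "
    "something 'is'."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Explain how a hash table works."},
    ],
)
print(response.choices[0].message.content)
```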

You raise an interesting thought, though: how to tweak this prompt so that the LLM drops E-Prime wherever it would significantly reduce readability or dramatically increase cognitive load.

A classifier for “bullshit” detection has been on my mind.

Truth is the most problematic concept in philosophy: the introduction of the idea of the truth erodes the truth, as seen in Gödel's theorem or in the reaction you get when you hear "9/11 truther." In many cases you can only determine the truth by physical observation; in other cases it is inaccessible. An A.I. that can determine the truth of things is a god.

An emotional tone or hostility detector, on the other hand, is ModernBERT + BiLSTM for the win. I'd argue the problem with fake news is not that it is fake but that it works on people's emotions and that people prefer it to the real thing.
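Concretely, I mean something like this: ModernBERT supplies contextual token embeddings, a BiLSTM runs over them, and a small head classifies the pooled output. A rough sketch - the checkpoint name and the three-label scheme serve only as illustrative choices:

```python
# Rough sketch of a ModernBERT + BiLSTM tone/hostility classifier.
# Checkpoint name and label set chosen for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ToneClassifier(nn.Module):
    def __init__(self, encoder_name="answerdotai/ModernBERT-base",
                 hidden=256, num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        # e.g. labels: neutral / emotive / hostile
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(tokens)
        # mean-pool over non-padding tokens before classifying
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (lstm_out * mask).sum(1) / mask.sum(1).clamp(min=1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = ToneClassifier()
batch = tokenizer(["You people never learn, do you?"],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

Fine-tune the whole thing (or freeze the encoder) on a labeled tone dataset and you get a reasonable baseline.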

You can detect common established bullshit patterns, and probably new ones that resemble the old ones. Thirty years from now there will be new bullshit patterns your model won't see.

I don't see how Gödel's theorems show at all that "the introduction of the idea of the truth erodes the truth". Gödel's incompleteness theorems are mostly about provability anyway, which is a concept distinct from truth.

Though I suppose Gödel's completeness theorem does relate provability to truth, in that it shows that (for systems with the right kind of rules of inference) provability is equivalent to truth in all models...
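For reference, the standard statement I have in mind (my paraphrase of first-order completeness):

```latex
% Gödel's completeness theorem (first-order logic): a sentence is provable
% from a theory T exactly when it holds in every model of T.
\[
  T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi
\]
```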

Still, are you sure Tarski's undefinability theorem isn't more relevant to your point?