I agree and disagree. It's certainly the case that facts imply underlying epistemologies, but it completely misses the point to treat that like it entails catastrophic relativism.

Building up an epistemology isn't just recreational; ideally it's done for good reasons that are responsive to scrutiny, standing firm on important principles and, where necessary, conciliatory in response to epistemological conundrums. In short, such theories can be resilient and responsible, and facts based on them can inherit that resilience.

So I think it completely misses the point to think that "facts imply epistemologies" should have the upshot of destroying any conception of access to authoritative factual understanding. Global warming is still real, vaccines are still effective, sunscreen works, dinosaurs really existed. And perhaps, more to the point in this context, there really are better and worse understandings of the fall of Rome or the Dark Ages or Pompeii or the Iraq war.

If being accountable to the theory-laden epistemic status of facts means throwing the stability of our historical understanding into question, you're doing it wrong.

And, as it relates to the article, you're doing it super wrong if you think that creates an opening for a notion of human intuition that is fundamentally non-informational. I think it's definitely true that AI as it currently exists can spew out linguistically flat translations, lacking such things as an interpretive touch, or an implicit literary and cultural curiosity that breathes the fire of life and meaning into language as it is actually experienced by humans. That's a great and necessary criticism. But.

Hubert Dreyfus spent decades insisting that there were things "computers can't do", and that those things were represented by magical undefined terms that speak to an ineffable human essence. He insisted, for instance, that computers would never play chess at a high level because doing so requires "insight", and he felt similarly about the kind of linguistic comprehension that has now, at least in part, been achieved by LLMs.

LLMs still fall short in critical ways, and losing sight of that would involve letting go of our ability to appreciate the best human work in (say) history, or linguistics. And there's a real risk that "good enough" AI can cause us to lose touch with such distinctions. But I don't think it follows that you have to draw a categorical line insisting such understanding is impossible, and in fact I would suggest that's a tragic misunderstanding that gets everything exactly backwards.

I agree with this wholeheartedly.

Certainly some facts can imply a certain understanding of the world, but they don't require that understanding in order to remain true. The map may require the territory, but the territory does not require the map.

“Reality is that which, when you stop believing in it, doesn't go away.” ― Philip K. Dick

In this analogy, though, maps are the only things we have access to. There may be Truth, but we only approximate it with our maps.

It's very true that we only approximate truth with our maps. All abstractions are leaky, but that fact does not imply "catastrophic relativism" (as the grandfather post phrased it). It just implies that we need better, more accurate maps.

Or, to return to the topic of the post, it just means that our translations need to try a little harder, not that human-quality translation is impossible to do via machine.

I think it's very important to remember that objective truth exists, because some large percentage of society has a political interest in denying that, and we're slipping ever closer to Sagan's "Demon-Haunted World."

We require the map.