IIRC that isn't possible under current models, at least not in general, for multiple reasons: attention cannot attend to future tokens, they operate in what is essentially existential logic, they are really NLP and not NLU, etc.
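To make the "cannot attend to future tokens" point concrete, here is a minimal NumPy sketch of causal (decoder-style) self-attention; the names and shapes are illustrative only, not taken from any particular library:

    import numpy as np

    def causal_attention(q, k, v):
        # Scaled dot-product attention with a causal mask:
        # position i may only attend to positions j <= i (no future tokens).
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)                    # (T, T) attention logits
        future = np.triu(np.ones_like(scores), k=1)      # 1s strictly above the diagonal
        scores = np.where(future == 1, -np.inf, scores)  # mask out future positions
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ v

    # Toy usage: 4 tokens, 8-dim vectors; row i of the weights is zero for every j > i.
    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
    out = causal_attention(q, k, v)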

Even proof mining and Harrop formulas have to exclude disjunction and existential quantification to stay away from intuitionistic math.
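For concreteness, the standard inductive grammar of Harrop formulas, as I recall it, is the following (a sketch, not a quote from any particular text):

    % Harrop formulas H (A atomic, F an arbitrary formula):
    H ::= A \mid \top \mid H_1 \land H_2 \mid F \to H \mid \forall x.\, H
    % \lor and \exists may appear only inside the antecedent F,
    % never in a strictly positive position.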

The IID assumption in PAC/ML implies PEM (the principle of excluded middle), which is also, in effect, existential quantification.

The report below [0] is the gentlest introduction I know of, but remember that LLMs are fundamentally set-shattering, and they produce disjoint sets as well.
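By "set shattering" I mean the VC-theory sense: a hypothesis class shatters a finite set when every labeling of that set is realizable. A toy check of that definition (my own illustration, not from [0]), using 1-D threshold classifiers:

    from itertools import product

    def shatters(hypotheses, points):
        # VC-theory sense: the class shatters `points` iff every
        # True/False labeling of them is realized by some hypothesis.
        achievable = {tuple(h(x) for x in points) for h in hypotheses}
        return all(lab in achievable
                   for lab in product([False, True], repeat=len(points)))

    # Toy class: 1-D threshold classifiers h_t(x) = (x >= t).
    thresholds = [lambda x, t=t: x >= t for t in range(6)]
    print(shatters(thresholds, [2]))     # True: a single point gets both labels
    print(shatters(thresholds, [1, 3]))  # False: the labeling (True, False) is unreachable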

We are only at reactive, model-based systems now; much more work is needed to even approach this, if it is possible at all.

[0] https://www.cmu.edu/dietrich/philosophy/docs/tech-reports/99...

Hmm, I needed Claude 4's help to parse your response. Its critique was not kind to your abbreviated arguments that current systems cannot gauge the complexity of a prompt or the resources needed to address a question.

It feels like the rant of someone upset that their decades-long formal-logic approach to AI turned out to be a dead end.

I see this semi-regularly: futile attempts at handwaving away the obvious intelligence by some formal argument that is either irrelevant or inapplicable. Everything from thermodynamics — which applies to human brains too — to information theory.

Grey-bearded academics clinging to anything that might float, trying to rescue their investment in ineffective approaches.

PS: This argument seems to be that LLMs “can’t think ahead” when all evidence is that they clearly can! I don’t know exactly what words I’ll be typing into this comment textbox seconds or minutes from now but I can — hopefully obviously — think intelligent thoughts and plan ahead.

PPS: The em-dashes were inserted automatically by my iPhone, not a chat bot. I assure you that I am a mostly human person.