Some LLM APIs let you supply a schema or regex for the answer. I think it works because the LLM produces a probability for every possible next token, and you can filter that list down to the tokens the schema/regex allows next.
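Roughly, a minimal sketch of the per-token filtering idea (toy vocabulary, a fake logits function, and a digits-only regex, all made up for illustration; real libraries walk the regex/grammar automaton over the model's actual token IDs instead of re-matching strings):

    import math
    import random
    import re

    # Toy vocabulary; real systems work over the model's actual token IDs.
    VOCAB = ["4", "2", "17", "cat", " ", "hello"]

    def fake_logits(prefix):
        """Stand-in for the model's next-token logits (made-up numbers)."""
        random.seed(hash(prefix) % 2**32)
        return [random.uniform(-2.0, 2.0) for _ in VOCAB]

    def allowed(prefix, token):
        """Constraint: the final output must match [0-9]+.  A token is allowed
        if the extended prefix can still be completed into a match."""
        return re.fullmatch(r"[0-9]+", prefix + token) is not None

    def sample_constrained(max_tokens=4):
        out = ""
        for _ in range(max_tokens):
            logits = fake_logits(out)
            # Zero out tokens the constraint forbids, then renormalize and sample.
            probs = [math.exp(l) if allowed(out, tok) else 0.0
                     for l, tok in zip(logits, VOCAB)]
            total = sum(probs)
            if total == 0:
                break
            r, acc = random.random() * total, 0.0
            for tok, p in zip(VOCAB, probs):
                acc += p
                if r <= acc:
                    out += tok
                    break
        return out

    print(sample_constrained())  # e.g. "4217"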
Interestingly, that gives a different response distribution from simply regenerating until the output matches the schema.
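A toy example of why the distributions differ (numbers are made up): per-token masking renormalizes locally at each step, while regenerate-until-valid conditions on the whole sequence passing the check.

    # Toy model over two-token outputs from the alphabet {a, b}.
    # Constraint: the second token must be "a".
    p_first = {"a": 0.9, "b": 0.1}
    p_second = {"a": {"a": 0.1, "b": 0.9},   # P(second | first = a)
                "b": {"a": 0.5, "b": 0.5}}   # P(second | first = b)

    # Regenerate-until-valid: keep whole outputs that satisfy the constraint,
    # which conditions on the sequence-level event and renormalizes globally.
    valid = {f + "a": p_first[f] * p_second[f]["a"] for f in "ab"}
    z = sum(valid.values())
    rejection = {seq: p / z for seq, p in valid.items()}

    # Token masking: the first token is sampled untouched (both choices can
    # still lead to a valid output), then the invalid second token is masked
    # out and the remaining mass renormalized locally.
    masking = {f + "a": p_first[f] * 1.0 for f in "ab"}

    print("rejection:", rejection)  # {'aa': ~0.64, 'ba': ~0.36}
    print("masking:  ", masking)    # {'aa': 0.9,  'ba': 0.1}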
This is true, but there are methods that greatly reduce this effect and produce constrained output whose accuracy matches, or even improves on, unconstrained generation:
e.g. DOMINO https://arxiv.org/html/2403.06988v1
It sounds like they are describing a regex filter being applied to the model's beam search. LLMs generate the most probable words, but they are frequently tracking several candidate phrases at a time and revising their combined probabilities; that lets them self-correct when a high-probability word leads to a low-probability phrase.
I think they are saying that if the highest-probability phrase fails the regex, the LLM can substitute the next most likely candidate.
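If that reading is right, the mechanics would look roughly like this sketch (placeholder scorer and a simple regex check, not what DOMINO actually implements): beams that can no longer satisfy the constraint get pruned, so the next-best candidate survives.

    import re
    from math import log

    VOCAB = ["4", "2", "x", "7"]
    PATTERN = r"[0-9]+"       # the constraint
    BEAM_WIDTH = 2

    def fake_logprob(prefix, token):
        """Placeholder scorer; a real model would give log P(token | prefix)."""
        bonus = 1.0 if token == "x" else 0.0   # make the *invalid* token score best
        return log(0.2 + bonus)

    def viable(text):
        # Could `text` still be extended into (or already be) a full match?
        return re.fullmatch(PATTERN, text) is not None

    def constrained_beam_search(steps=3):
        beams = [("", 0.0)]                    # (text so far, cumulative log-prob)
        for _ in range(steps):
            candidates = []
            for text, score in beams:
                for tok in VOCAB:
                    new_text = text + tok
                    if not viable(new_text):   # prune beams the regex rejects
                        continue
                    candidates.append((new_text, score + fake_logprob(text, tok)))
            # When the top-scoring extension ("x") is pruned, the next most
            # likely candidates take its place in the beam.
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:BEAM_WIDTH]
        return beams

    print(constrained_beam_search())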
You're actually applying a grammar to the token stream. If you're outputting JSON, for example, you know which characters are valid next (because of the grammar), so you just filter out the tokens that don't fit.
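For a toy grammar (an array of non-negative integers, standing in for full JSON), the "what characters are valid next" check might look like this; the function names are made up:

    DIGITS = set("0123456789")

    def valid_next_chars(prefix):
        """Valid next characters for a tiny JSON-ish grammar: an array of
        non-negative integers, e.g. [1,23,4].  A real implementation tracks
        parser state for the full JSON grammar instead."""
        if prefix == "":
            return {"["}
        if prefix.endswith("["):
            return DIGITS | {"]"}              # first number, or empty array
        if prefix.endswith(","):
            return DIGITS
        if prefix.endswith("]"):
            return set()                       # complete; nothing may follow
        return DIGITS | {",", "]"}             # otherwise we're inside a number

    def filter_tokens(prefix, vocab):
        """Keep only the single-character 'tokens' the grammar allows next."""
        allowed = valid_next_chars(prefix)
        return [t for t in vocab if t in allowed]

    vocab = list("[]0123456789,x ")
    print(filter_tokens("", vocab))     # ['[']
    print(filter_tokens("[1", vocab))   # digits plus ',' and ']'
    print(filter_tokens("[1,", vocab))  # digits only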