I think the gap is that if they're building hybrids with _forward_ AR and diffusion, they risk giving up the cool part of diffusion, which is being able to reason backwards.
I may be imposing unreasonable human biases onto this, but I really think it would be interesting to have the model engage with the structure of the text, rather than treating it as just a sequence or an array of tokens.
E.g. "I'm going to _ tomorrow." If the _ is not just a token but an expansion in context, which might be a noun phrase, a verb phrase etc, it could be filled in with "the mall", "practice guitar".
In code "if (_1) { return _2; }", _1 could be an expression whose type is bool, and which makes sense as a check to confirm that some process is finished. I don't care specifically how many tokens either of those is, but I do care that it makes sense in context.
I was thinking of something like LLaDA, which uses a Transformer to predict masked tokens in parallel:
https://arxiv.org/abs/2502.09992
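My rough understanding of that sampling loop: start fully masked, predict every masked position in parallel, commit the most confident predictions, and remask the rest for the next step. A toy sketch (the random predictor is a stand-in for the Transformer, and the linear schedule and all names are my own invention, not LLaDA's actual code):

```python
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "the", "mat"]

def mock_predict(seq):
    # Stand-in for the Transformer: (token, confidence) for every masked slot.
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def sample(length=6, steps=4):
    seq = [MASK] * length  # start from an all-masked sequence
    for step in range(1, steps + 1):
        preds = mock_predict(seq)
        if not preds:
            break
        # Unmask enough positions to stay on a linear schedule; the
        # low-confidence predictions are remasked and retried next step.
        target_unmasked = length * step // steps
        keep = max(1, target_unmasked - (length - len(preds)))
        best = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _conf) in best[:keep]:
            seq[i] = tok
    return seq

print(" ".join(sample()))
```

Because every masked slot is predicted jointly at each step, later positions can constrain earlier ones, which is exactly the backwards reasoning I'd hate to lose in a forward-AR hybrid.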