We explored this exact idea in our recent paper https://arxiv.org/abs/2505.22906

It turns out this kind of UI not only helps users spot bugs, but also lets them discover implementation choices and design decisions that are obscured by traditional assistant interfaces.

Very exciting research direction!

Very exciting indeed. I will definitely do a deep dive into this paper, as my current work is exploring layers of affordances such as these in workflows beyond coding.

I've wanted someone to build an extension around this idea ever since GPT-3 came out. Is it available to use anywhere?