Every time I see something about trying to control an LLM by sending instructions to the LLM, I wonder: have we really learned nothing of the pitfalls of in-band signaling since the days of phreaking?

Sure, but the exploit here isn't prompt injection; it's an edge case in their billing that isn't attributing agent calls correctly.

That's fair - I suppose the agent is making a call with a model parameter that isn't being attributed, as you say.

It reminds me of when I used to write Lisp, where code is data. You can abuse reflection (and macros) to great effect, but you never feel safe.

See also: string interpolation and SQL injection, (unhygienic) C macros
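
To make the analogy concrete, here's a minimal sketch in Python with sqlite3 (table name and payload are just made up for the demo): interpolating user input into the SQL text puts the data on the same channel as the commands, while a parameterized query keeps them separate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

payload = "Robert'); DROP TABLE students;--"

# In-band: the attacker's "data" shares the channel with the SQL "signaling",
# so a stray quote lets it speak in the command language.
unsafe = "INSERT INTO students VALUES ('%s')" % payload
conn.executescript(unsafe)  # runs the INSERT *and* the injected DROP TABLE

print(conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall())  # -> []  (students is gone)

# Out-of-band: the parameter travels separately from the SQL text,
# so the very same payload is stored as an inert string.
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES (?)", (payload,))
print(conn.execute("SELECT name FROM students").fetchall())
```

Same string both times; the only difference is which channel it travels on.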

Allowing phreaking was an intentional decision, because otherwise they could only have carried half as many channels on each link.

It'll be a sad day for Little Bobby Tables if in-band signaling ever goes out of fashion.