Why? I suspect that writing the code itself is extremely token-efficient (unless your keywords happen to be silly, super-long alien strings).

Like, which do you think is more token-efficient?

1)

    <tool-call write_code "my_function(my_variable)"/>

2)

    <tool-call available_functions/>
    resp:
        <option> my_function </option>
        <option> your_function </option>
        <option> some_other_function </option>
        <option> kernel_function1 </option>
        <option> kernel_function2 </option>
        <option> imported_function1 </option>
        <option> imported_function2 </option>
        <option> ... </option>
    <tool-call write_function_call "my_function"/>
    resp:
        <option> my_variable </option>
        <option> other_variable_of_same_type </option>
    <tool-call write_variable "my_variable"/>

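If you want to sanity-check this, you could run both transcripts through a tokenizer and count. Here's a rough sketch using tiktoken (cl100k_base is just an illustrative encoding; whatever tokenizer the model actually uses will give somewhat different numbers):

    import tiktoken

    # Illustrative encoding; the real count depends on the model's tokenizer.
    enc = tiktoken.get_encoding("cl100k_base")

    # Option 1: the model just writes the call.
    direct = '<tool-call write_code "my_function(my_variable)"/>'

    # Option 2: the menu-driven exchange. Everything here passes through the
    # context window, so the tool calls and the option lists all cost tokens.
    guided = """\
    <tool-call available_functions/>
    <option> my_function </option>
    <option> your_function </option>
    <option> some_other_function </option>
    <option> kernel_function1 </option>
    <option> kernel_function2 </option>
    <option> imported_function1 </option>
    <option> imported_function2 </option>
    <tool-call write_function_call "my_function"/>
    <option> my_variable </option>
    <option> other_variable_of_same_type </option>
    <tool-call write_variable "my_variable"/>
    """

    print("direct:", len(enc.encode(direct)))
    print("guided:", len(enc.encode(guided)))

The exact numbers don't matter much; the point is that the second transcript is several times longer, before you even count the extra tool descriptions that would have to live in the prompt.
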
Not sure I follow. You seem to have omitted the part of 1) explaining how the LLM knew that my_function even existed - presumably, it read the entire file to discover that, which is way more input tokens than your hypothetical available_functions response.

Reading files is not that input-token heavy, I suspect. But anyway, I omitted it because presumably the model would have read the file anyway to gain local context in general.
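
For what it's worth, you could measure that too: tokenize the file the model would have to read and compare it against the menu round-trips above. A minimal sketch, again using tiktoken, where my_module.py is a hypothetical stand-in for whatever file defines my_function:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # illustrative encoding

    # Hypothetical file the LLM would read to learn that my_function exists.
    with open("my_module.py") as f:
        source = f.read()

    print("one-time read cost:", len(enc.encode(source)), "input tokens")

    # The file read is paid for once and amortized over every call written
    # from that context; the option menus are paid for on every single call.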