The behavior of CPython and a few other Python implementations (such as PyPy) is well documented and well understood. The semantics of the tiny subset of Python that this Python-to-eBPF compiler understands are not. For example, from the fact that it statically compiles a Python-ish AST to LLVM IR, you can infer that dynamic features of Python are unlikely to be supported, but you cannot know exactly which ones without carefully reading the compiler's documentation or source code. You can guess that globals() or locals() won't work, and probably .__dict__ won't either, but what about type() or isinstance()? You can't tell without digging into the documentation (which may be lacking), because the subset of Python this compiler understands is rather arbitrary.
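To make this concrete, here is a hypothetical sketch (my guess, not based on any particular compiler's documented behavior) of the kind of function that plausibly compiles versus one that almost certainly cannot:

    # Simple arithmetic and control flow over typed values map
    # naturally onto LLVM IR and then eBPF bytecode:
    def count_oversized(size: int) -> int:
        if size > 1500:
            return 1
        return 0

    # Anything that touches the interpreter's runtime machinery is
    # another story: globals() needs a live module namespace, and
    # isinstance() needs runtime type objects, neither of which
    # exists inside an eBPF program.
    def probe(obj) -> int:
        return len(globals()) + int(isinstance(obj, dict))

Whether a given compiler draws the line before or after type() and isinstance() is exactly the kind of detail you only learn from its docs or source.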
Also, having an LLM translate Python-ish pseudocode into C does not mean you cannot examine the output before putting it into a program. You can review it manually and modify it as you like. It simply reduces the time spent compared with writing the C by hand.
But then you have to write the pseudocode anyway (which my IDE cannot check, so I don't know whether I've made pseudo-mistakes [sorry for the pun]), have the LLM 'transpile' it (a step that is not understood at all), and review the C code anyway, which means you still have to know eBPF really well.
Would that represent a time advantage?