That seems like it applies just fine to LLMs as well: you can replace an LLM with a different model, different prompts, etc., to match the appropriate level of risk-taking. Rule-following is even easier, given that you can sandbox them.
There's at best a handful of frontier models, versus billions of people and millions of SWEs.