I like the idea that AI usage comes down to a measurement of "tolerances". With enough specificity, LLMs will 100% return what you want. The goal is to find, via prompts, the happy tolerance between "acceptable" and "I did it myself".
> With enough specificity, LLMs will 100% return what you want.
By now I’m sure it won’t. Even if you provide the expected code verbatim, LLMs might go on a side quest to “improve” something.