You’re implying that if someone uses AI to write something, the person doesn’t then read it/iterate on it to ensure correctness. Serious “get off my lawn” vibes here.
> the person doesn’t then read it/iterate on it to ensure correctness.
As someone who has had to deal with drive-by PRs on open-source projects, which were a problem before but have gotten much worse in volume now that most of them are AI-generated: yes.
Except, no it doesn’t.
In the case of a pull request, I am not about to trust an LLM that has no business context and can only guess at the “why” of a change.
To understand the “what” of a change, you have to actually read the code. This doesn’t belong in the pull request description most of the time.