The comments here turned out much more interesting than I expected—this has become a great place to discuss the difference between AI-generated, AI-written, and AI-assisted content.

So let me start with @jbarrow's comment: "AI written, generated from the codebase."

My actual learning process looked like this:

1. I walked through the nano-vLLM codebase, asking Claude Code some high-level questions to warm up.
2. Then I asked detailed questions one by one, let it explore, and double-checked the code myself. As someone without an ML background, I sometimes spent hours understanding a single concept.
3. Once I felt I understood enough, I started drawing Excalidraw diagrams to explain what I learned.

Does this count as "generated from the codebase"? I don't think so.

Where we might disagree is the writing process.

As a non-native English speaker, my workflow looks like this:

1. Write a short paragraph (<100 words), then ask my writing agent to "fix this for readability and grammar."
2. Review the output. *If it changes any technical meaning, I correct it.* I consider this a responsible way to write a tech blog.
3. Move to the next paragraph.

Is this "AI-written"? I'd call it "AI-assisted." Every idea in every sentence is mine. Honestly, things like "em dashes" never stood out to me when reviewing. I suspect that's common for non-native speakers.

I wrote this comment the same way. The LLM fixed 14 grammar mistakes that I think would distract readers more than any LLM-ish phrasing.

That said, I'm open to suggestions on how to improve my writing process :)

When text is (clearly) non-native English, I think most native readers don't even register grammar errors.

To be honest, most native readers wouldn't register grammar errors, full stop.

I guess I'm more in awe of people who speak a foreign language at all than of prose piped through some agent malarkey.

> I wrote this comment the same way. The LLM fixed 14 grammar mistakes that I think would distract readers more than any LLM-ish phrasing.

I don't think that assumption is correct. As you can see from the discussion we're having here, the LLM-"fixed" text is actually quite distracting, while text written by a reasonably proficient non-native speaker is generally perfectly readable. It's only if your English is extremely poor to non-existent that it makes more sense to use machine translation or editing rather than writing it yourself.

One problem is that people are becoming quite sensitive to slop, where someone just posts completely unreviewed, AI-generated text. It's quite frustrating, because it asks readers to read something that no one ever bothered to write, and it frequently crowds out discussion that people are more interested in. So everyone is hyper-sensitive to signs of AI-written text right now, which means that when you start to see those signs, your brain switches to working out whether the text is AI-generated rather than reading the text itself.