This change is happening as we speak. My work repo is now sprouting markdown files primarily intended for agentic consumption, but it’s all dense jargon communicating requirements and important design decisions in a way that is useful to me as a software engineer. The toil of writing these specs from low-level code is much lower today, because LLMs help immensely with that too.

This happens at my workplace too, but in a bad way. There's a push to use AI as much as we can to "boost productivity", and the one thing people don't want to do is write documentation. So what ends up happening is we get a bunch of AI documentation that other AIs consume but humans have a harder time following, because of the volume of fluff and AI-isms. Shitty documentation still exists, and it can be worse than before...

Other than humans getting apoplectic at the word "delve" and at em-dashes, can you explain or give some examples of how AI-isms hurt readability?

Having encountered this spread across our org's greenfield codebases, which made heavy use of AI in the last 90 days:

- Restating the same information in slightly different formats, with slightly different levels of detail, in several places, unnecessarily. Like a "get up and running quickly" guide in the documentation that has far more detail than the section it's supposed to be summarizing.
- Jarringly inconsistent ways of presenting information within a given section (a list of endpoints and their purposes, followed by a table of other endpoints, followed by another list of endpoints).
- Unnecessary bulleted lists all over the place that would read more clearly as single sentences or a short paragraph.
- Disembodied documentation files nested in the repos that restate the contents of the README, but in a slightly different format/voice.
- Thousands of single-line code comments that just restate what is already clear from the line they're commenting on.

And that's before getting into any quality issues in the code itself.

I've noticed AI-generated docs frequently contain bulleted or numbered lists of trivialities, like file names: AI loves describing "architecture" by listing files with a five-word summary of what each one does, which is usually not much more informative than the file name itself. Superficially it looks useful, but it contributes no genuinely useful context and has very low information density.
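To make that concrete, here's a hypothetical illustration (not from any real repo) of the kind of low-density "architecture" section I mean:

```
## Architecture

- `parser.py`: parses the input
- `utils.py`: utility functions
- `main.py`: the main entry point
```

Every line is technically true and practically contentless; the file names alone tell you the same thing.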

A piece of information, or the answer to a question, could exist in the documentation but not in a format that's easily readable by humans. You ask the AI to add certain information, and it responds with "I already added it". But the AI doesn't "read" documents the way humans do.

For instance, say you need urgent action from other teams. To this end you have an AI write a document and you feed it the information. The AI produces a document following its own standard format, with the characteristic AI fluff. But this won't work well: upon seeing the urgent call to action, the teams will rush to understand what they need to do, and they will be greeted by a corporate-PR-sounding document that does not address their urgent needs first and foremost.

Yes, you could tell the AI how to shape the document little by little... but at that point you might as well write it manually.

There is a place for informal prose specs, and I can easily agree that more people nowadays describe their programs in English.

The context here is formal specs, though: adequately and precisely capturing the intended meaning (semantics) in a way that lends itself to formal verification.

Interactive theorem proving is interactive because proof search is intractable; a human articulates the property they want to prove and then performs the "search". Apart from the difficulty of getting to that proof, it can also happen that everything goes through and you realize the property is not exactly what you wanted.
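A minimal Lean 4 sketch of that last failure mode (a hypothetical example of mine, not from the thread): the proof goes through, but the property is weaker than the intent, since the identity function satisfies it just as well as reversal does.

```lean
-- Hypothetical spec: "reverse preserves length". The proof succeeds,
-- yet the property underspecifies reversal: `fun xs => xs` satisfies
-- the exact same statement.
theorem rev_preserves_length (xs : List Nat) :
    xs.reverse.length = xs.length := by
  simp  -- discharged by the standard simp lemma List.length_reverse
```

Pinning down actual reversal needs a stronger property that relates element positions, not just counts; noticing that gap is exactly the human part of the loop.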

I’m curious whether your markdowns are intelligible to other engineers, or only to you and an LLM?