I used the following sources to create an RFC template (and to promote a documentation culture across engineering):

- https://www.industrialempathy.com/posts/design-docs-at-googl...

- https://github.com/rust-lang/rfcs

- https://github.com/kubernetes/enhancements/blob/master/keps/...

- https://blog.pragmaticengineer.com/rfcs-and-design-docs/

Hint: tailor the process and template structure to your org's size, maturity, and needs. Don't try to blindly imitate.

I'm broadly in favor of RFCs, but the process needs to be mandated from the top down. That's easier said than done.

Most RFC committee debates, in my experience, devolve into firing squads: the presenter must answer every question with pinpoint accuracy and full context for the asker, or they look unprepared and the RFC is shot down.

This is allowed to happen because everybody is, in theory, a co-equal in the process, so everybody wants to have their say. You'd hope people would read the document ahead of time, but there's always somebody who hasn't yet feels entitled to ask preemptive questions. It makes for very combative discussions.

The exception is when a double-skip manager stops that from happening and lets the presenter "make their case" by walking through the whole RFC.

Re: https://www.industrialempathy.com/posts/design-docs-at-googl...

> ... sketching out that API is usually a good idea. In most cases, however, one should withstand the temptation to copy-paste formal interface or data definitions into the doc as these are often verbose, contain unnecessary detail and quickly get out of date.

Using R Markdown (or any Turing-complete documentation system), it's possible to introduce demarcations that make the source code snippets the literal source of truth:

    // DOCGEN-BEGIN:API_CLASS_NAME
    /**
     * <description>
     *
     * @param arg <description>
     * @return <description>
     */
    uint8_t method( type arg );
    // DOCGEN-ENDED:API_CLASS_NAME

Ask a GPT to implement a parser for these snippets in a few minutes. Then invoke the parser from the living document for a given source file, such as:

    `r#
      snippets <- parse.snippets( "relative/path/to/ClassName.hpp" );
      docs <- parse.api( snippets[[ "API_CLASS_NAME" ]] );
      export.api( docs );
    `

The documentation now cannot ever go stale with respect to the source code. If the comments are too verbose, simplify them and capture implementation details elsewhere (e.g., as inline comments).
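For illustration, here is a minimal sketch of such a snippet extractor in C++. This is not KeenWrite's actual parser; the function name and behavior details are my own invention. It collects the lines between each `DOCGEN-BEGIN:<tag>` and `DOCGEN-ENDED:<tag>` pair, keyed by tag:

```cpp
#include <istream>
#include <map>
#include <sstream>
#include <string>

// Hypothetical extractor: returns snippet bodies keyed by their DOCGEN tag.
std::map<std::string, std::string> parse_snippets( std::istream &in ) {
    std::map<std::string, std::string> snippets;
    std::string line, tag;
    std::ostringstream body;

    while( std::getline( in, line ) ) {
        const auto begin = line.find( "DOCGEN-BEGIN:" );
        const auto ended = line.find( "DOCGEN-ENDED:" );

        if( begin != std::string::npos ) {
            // Text after the marker is the snippet's tag name.
            tag = line.substr( begin + std::string( "DOCGEN-BEGIN:" ).size() );
            body.str( "" );
        }
        else if( ended != std::string::npos && !tag.empty() ) {
            snippets[ tag ] = body.str();
            tag.clear();
        }
        else if( !tag.empty() ) {
            // Accumulate lines that fall inside an open snippet.
            body << line << '\n';
        }
    }

    return snippets;
}
```

Everything outside the markers is ignored, so the source file's surrounding code never leaks into the documentation.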

In one system I helped develop, we were asked to document which messages of a standard protocol were supported. The only place that knowledge exists is a map in the code base. So instead of copy/pasting that knowledge, we have:

    MessageMap MESSAGE_MAP = {
    // DOCGEN-BEGIN:SUPPORTED_MESSAGES
    { MessageType1, create<MessageClassName1>() },
    { MessageType2, create<MessageClassName2>() },
    ...
    // DOCGEN-ENDED:SUPPORTED_MESSAGES
    }

And something like:

    `r#
      snippets <- parse.snippets( "relative/path/to/MessageMap.hpp" );
      df <- parse.messages( snippets[[ "SUPPORTED_MESSAGES" ]] );
      export.table( df );
    `

This snippet is parsed into an R data frame, and another function converts data frames into Markdown tables. Changing the map triggers a pipeline that rebuilds the documentation, ensuring that the documentation is always correct with respect to the code.
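The map-to-table conversion can be sketched directly as well. The following C++ function is a hypothetical stand-in for the R pipeline described above (the column names and parsing rules are assumptions, not the project's actual code): it reads each `{ Type, create<Class>() }` entry from the snippet and emits a two-column Markdown table.

```cpp
#include <sstream>
#include <string>

// Hypothetical converter: map-entry snippet lines -> Markdown table.
std::string messages_to_markdown( const std::string &snippet ) {
    std::ostringstream table;
    table << "| Message type | Handler class |\n";
    table << "|--------------|---------------|\n";

    std::istringstream in( snippet );
    std::string line;

    while( std::getline( in, line ) ) {
        const auto open  = line.find( '{' );
        const auto comma = line.find( ',' );
        const auto lt    = line.find( '<' );
        const auto gt    = line.find( '>' );

        // Skip lines that are not map entries (e.g., an elided "...").
        if( open == std::string::npos || comma == std::string::npos ||
            lt == std::string::npos || gt == std::string::npos ) {
            continue;
        }

        std::string type = line.substr( open + 1, comma - open - 1 );
        const std::string cls = line.substr( lt + 1, gt - lt - 1 );

        // Trim surrounding spaces from the message type name.
        type.erase( 0, type.find_first_not_of( ' ' ) );
        type.erase( type.find_last_not_of( ' ' ) + 1 );

        table << "| " << type << " | " << cls << " |\n";
    }

    return table.str();
}
```

Non-entry lines fall through the guard clause, so the elided `...` in the snippet above is skipped rather than producing a garbage row.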

If a future developer introduces an unparseable change, files are moved, or the R code breaks, the documentation build pipeline fails and someone must investigate before the change lands on main.

Shameless self-plug: the R Markdown documentation system we use is my FOSS application, KeenWrite; however, Pandoc and knitr are equally capable.

https://keenwrite.com/