Can't recommend this approach highly enough: have someone with minimal expertise go through your docs with the goal of achieving the goal of the docs. Sit next to them or screenshare. Do not speak to them, certainly do not help, just watch. Watch them fumble. Watch them not know what to do. Watch them experience things you (the author) didn't, because you already had xyz configured on your machine and you forgot users won't have it. (Even watch them pretend to know what they're supposed to do when they don't really.)

If the user achieves what they need with minimal stress/guesswork/ambiguity, the docs pass. If not, note every single place they fail, address each one, and repeat with a new user.

I've used FAANG docs that don't come close to passing the above criteria.

I've been incredibly grateful my org set this high bar. Especially when using docs for critical tech I only use from time to time (where I forget lots of it). Saves meetings, support inquiries, and video calls, because the user can self-serve.

I seem to have this problem a lot with Apple’s docs. So much of it is like

    Nargflargler: Flargles the narg
You need to do something besides repeat the name in the definition.

This is just one example of how metrics can distort things, of course. Someone in management said "We want 100% documentation coverage of every method," so the staff dutifully wasted everyone's time by writing "setDefaultOptions: sets the default options". It's the kind of thing an LLM could have done better, and if you know my opinion of LLMs, you'll know that's damning with faint praise.

My own bête noire here is MSDN. It's full of overloads like "Foo(string parameter, FooOptions options) - actually useful documentation. Foo(string parameter) - does Foo with default options." But to find out what the default options actually are, you have to find another page, probably the FooOptions constructor. I wanted the default options to be mentioned on the "Foo(string parameter)" page, and they so rarely are. (A few pages are better, thankfully).

And then there is Microsoft's annoying habit of creating APIs which return the information you actually need . . . nested three levels deep inside a bunch of their own custom data structures.

I've basically resigned myself to "it makes sense in Redmond somehow, even if it makes no sense to me."

Microsoft's APIs basically shove all the implementation details onto the API user. This is, of course, abysmal API design, but "tasteful design" (in any sense) and "Microsoft" have never been in the same building. But it does make sense. And it also tells you how to interact with Microsoft APIs: the same way you interact with the hardware details that assembly languages export to the user, namely through a wrapper. (But, taste is difficult to find; that wrapper might have imbibed some Microsoft "design" by virtue of being exposed to too much Microsoft.)

Rant: if you want antialiased text, you need to use Direct2D. Direct2D is one of those APIs that waste developers' lives. You have to allocate your own framebuffer, for crying out loud. And then you have to reallocate it if it ever disappears for some reason, and the docs don't tell you when this might happen (hot-swapping a video card? changing monitor resolution? the user moving the window to a monitor with a different video card?).

I found this out developing a cross-platform UI library, https://github.com/eightbrains/uitk, leading to my above conclusion that the only proper way to relate to the Microsoft API is through some layer.

> But to find out what the default options actually are, you have to find another page, probably the FooOptions constructor. I wanted the default options to be mentioned on the "Foo(string parameter)" page, and they so rarely are.

It's better for maintenance (of the documentation) if the default options are only described in one place. (If the defaults change in a new version, this ensures the documentation doesn't have inconsistent, wrong descriptions. The analogous reasoning, applied to the code, is probably part of why the FooOptions class exists in the first place, after all.) But they should do you the courtesy of linking there.

This is only a problem if you write it twice. Instead you can write it once and display it twice.

Hell, I even do this on my github.io website that uses markdown. You can just write some text in one document and read it in another.
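
For instance, assuming the github.io site is built with Jekyll (an assumption on my part, and the file names below are hypothetical), the single-sourcing is one include tag:

    <!-- _includes/default-options.md : the defaults, written exactly once -->
    By default, retries = 3 and timeout = 30 seconds.

    <!-- any page that mentions the defaults just pulls that file in -->
    {% include default-options.md %}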

We're programmers, so we should be lazy. It's about being the right kind of lazy. You can be lazy by putting off a task today so that it takes more effort tomorrow, or you can be lazy by doing a task today that takes less work than it would take tomorrow. Most people choose the former and wonder why they have so much work. In programming, if you're doing redundant work, you're probably being the first type of lazy.

In-code documentation doesn't support such a thing. And documentation outside of code suffers from rot.

Some varieties of in-code documentation do support links, e.g. XmlDoc, which is the de facto standard for documenting C# code (and therefore the most relevant to my comments about MSDN, because I was referring specifically to the .NET API documentation), has multiple ways of embedding links in your in-code documentation comments: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

MSDN even uses those, a lot. But not enough. I wish that every time they had a "Foo(string parameter) - uses the default FooOptions" it was a link to the documentation section where the default FooOptions are listed. But usually you're left to find the default FooOptions yourself, which means 5-10 minutes of digging through docs (1-2 minutes if you're lucky) that I could have spent writing or reviewing code instead. That adds up.
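
What I wish for is mechanical enough to sketch. Something like this (Foo and FooOptions are this thread's hypothetical names, and the surrounding class is made up too), where the cref renders as a link straight to the one place the defaults are listed:

    /// <summary>The defaults, documented exactly once.</summary>
    public class FooOptions
    {
        /// <summary>Retries = 3, Timeout = 30s. Linked from every overload.</summary>
        public static readonly FooOptions Default = new FooOptions();
    }

    public class Frobnicator
    {
        /// <summary>Actually useful documentation.</summary>
        public void Foo(string parameter, FooOptions options) { /* ... */ }

        /// <summary>Does Foo with <see cref="FooOptions.Default"/>; the cref
        /// becomes a link to where the default values are actually listed.</summary>
        public void Foo(string parameter) => Foo(parameter, FooOptions.Default);
    }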

We are talking about MSDN, not some source files. Even if those pages are generated from in-code documentation, that generation step can use whatever transclusion mechanisms Microsoft wants to add.

But that could be linked up rather than have you fumble through to find them.

In some fairness, the page existing at all is half the battle. I'm glad the canvas exists for the paint to eventually, maybe, one day arrive.

Related to this is the omitting of units. I encountered something like this in the Android SDK (years ago, dunno if it’s still like this).

    setFontSize(float): sets the font size.
Cool. Sets the font size in what? Points? Pixels? Device-independent pixels? Which of the 12 different types of measurement Android supports is used here? I can't remember exactly what it turned out to be, but I know it wasn't the unit I'd expect for fonts (points).
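
The fix costs one parameter rename and one doc line. A hypothetical C# transliteration of the Android example (for what it's worth, I believe Android's TextView.setTextSize(float) turned out to be in scaled pixels, sp):

    public class TextStyle // hypothetical stand-in class
    {
        private float fontSizeSp;

        /// <summary>Sets the font size.</summary>
        /// <param name="sizeSp">Font size in scale-independent pixels (sp),
        /// not points or raw pixels; the unit lives in both the parameter
        /// name and the doc, so the question never comes up.</param>
        public void SetFontSize(float sizeSp) => fontSizeSp = sizeSp;
    }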

In similar cases (maybe not exactly here), I suspect the author also didn't know and didn't care to look it up and just wanted to tick the box that it's now documented.

This is why I rage against the crowd that promotes "self-documenting code". There's no such thing, even if you should strive to make your code as readable as possible. If there's a way to misinterpret it, you can bet many people will.

The biggest problem is that this ends up creating so much extra work. An extra 2 seconds from the dev could save hundreds or even thousands of people hours of work. I can't tell you how many hours I've spent chasing stupid shit like your example. I don't know a single programmer who hasn't.

I just don't understand why everyone's frustration with documentation (or the lack of it) doesn't make the importance of good documentation obvious. Every single one of us has experienced the wasted time and effort that results from missing or low-quality docs. Every single one of us has also reaped the benefits of good documentation and seen how much faster it makes us. How does anyone end up convincing themselves that documentation is a waste of time? It feels insane.

This kind of bad documentation is actually way more common in teams that require doc comments for all code, which are then promptly auto-generated by the IDE and never filled with actually useful information.

Self-documenting code in this case would mean using a type that encodes the unit, which would have the additional benefit that the compiler or other tools can then check correct usage.

You're misinterpreting

Requiring docs isn't the cause of the problem; it's the lack of enforcing quality. The difference is that you're looking at the metric and seeing Goodhart's Law in action, while there's nothing preventing you from going beyond the metric. The real issue is that metrics only take you so far. No metric can be perfectly aligned, so it's up to the people evaluating the metrics to determine whether the letter of the law is being followed or the spirit of it. If you go by the spirit, then yeah, maybe some functions will be left without docs, but you also won't have those tautological docs either. If you only care about the letter of the law, then you should expect the laziest bullshit, as Goodhart's Law always wins out.

Stop reading too much into metrics. Metrics are only guides.

Code can show you HOW something is done. Only documentation can explain WHY it is done that way.

That's more about API design than about documentation, though: with a proper function name, value objects, or something else, you already know what the correct value to pass is.

It's a widespread issue though, where the API designer doesn't clearly communicate either what the thing does and/or what the thing needs.

If you don't need the docs then you don't need them, but sometimes we all need a "hey bro, I know you're a little lost so I'm going to break down what's happening in plain English". At a certain point you just don't have the entire code base in your head all the time and you need a reminder on what exactly the Flargle team does to all the Nargs.

This is why I'm sad that hungarian notation has gained such a bad reputation. Sure, you can overdo it, but a `duration_ms`, a `response.size_bytes`, a `max_memory_mb`, or an `overhead_ns` is so much easier to use.

Better yet would be unit-aware types. Then instead of

    duration_ms = 1000

you can have

    duration = 1s // or duration = Seconds(1) in deficient languages

and it's either a compile error or the type system enforces the correct conversion.

As for the bad rap of hungarian notation, it's mostly from people using it to encode something that is already clear from the types. "fDuration" helps no one over just "duration".
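
A minimal sketch of that idea in C# (the Duration type here is illustrative only; .NET's built-in TimeSpan, with factories like TimeSpan.FromSeconds, already works this way):

    public readonly struct Duration
    {
        private readonly double ms; // one canonical unit internally

        private Duration(double milliseconds) => ms = milliseconds;

        // The only ways in and out name their units explicitly.
        public static Duration Milliseconds(double v) => new Duration(v);
        public static Duration Seconds(double v) => new Duration(v * 1000.0);
        public double InMilliseconds => ms;
        public double InSeconds => ms / 1000.0;
    }

Now `Duration.Seconds(1)` reads like the `1s` literal above, and passing a bare `1000` where a Duration is expected simply doesn't compile.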

    AVMetadataKeySpace
    A structure that defines a metadata key space.

source: https://developer.apple.com/documentation/avfoundation/avmet...

That’s just a C enum interfaced in Swift. You can’t instantiate it, and it has no methods or any kind of functionality. It’s effectively a list of numbers.

What are you expecting the documentation to say here? It will make more sense when you find where it’s used.

Edit: First link on the bottom explains exactly what it’s used for. https://developer.apple.com/documentation/avfoundation/retri...

struct AVMetadataKeySpace - a unique unit representing each of the metadata key spaces supported by AVFoundation.

? Did you read the link? It’s used to query collections of keys grouped by the KeySpace categories, instead of a single item per key. Makes sense to me.

There’s plenty of other poorly documented Apple APIs (io_surface), but this isn’t one of them.

> It’s used to query collections of keys grouped by the KeySpace categories

Sounds like something that should be mentioned in the opening sentence of https://developer.apple.com/documentation/avfoundation/avmet...

The struct is only named in the link you provided, not documented. So thanks for showing the absolute irony of it not being greatly documented, allowing people to misinterpret what it means.

Because it’s a boring enum in C, auto-translated to a Swift struct.

And if you’re reading the documentation because you do development, then you would already know that the header files are installed on your computer and you can trivially verify that there is nothing to document because it’s just a query key.

Enums get documented everywhere else. If nothing else, you need the range of options!

Going off to read the header file means it isn't documented.

Not quite what you're talking about but this Apple doc page has always amused me: https://developer.apple.com/documentation/contacts/cnlabelco...

I have to assume that there exists some language where that relationship is described in one word, but it hurts my English-oriented brain.

There are indeed languages that don't have the word "cousin" -- or "uncle" or "aunt".

And conversely, there are languages with different words for "father's sister" and "mother's sister", and for male vs female cousins, etc.

And we don't even have to get exotic for that. My language, Danish, is just a run-of-the-mill Germanic language and those terms are "faster", "moster", "fætter", and "kusine".

Some of the East Asian languages are crazy regarding terms for family members. It's like learning foreign words for plants: I just give up. I will not even attempt to learn them.

There are also languages where the relative age difference changes how you address a relative. Like if your father is older or younger than their sibling, the way you address that uncle or aunt changes. There is another way to address them if they are the oldest or youngest uncle/aunt. Similar but slightly different on the mother's side.

But I would bet that those variable labels are never translated into other languages.

Presumably those enums are used to select localized labels, and you need all these cases to cover unique words/phrases that exist in the supported languages for specific familial relations.

Or with Xcode: go to fargler settings, click on the narg screen. Took a year just to figure out most settings screens.

> with Xcode, go to fargler settings click on narg screen

I hate how this looks "accessible" to people in theory, but in reality finding those screens is more like playing a hidden object game.

Also, I hate how those things keep changing around in all kinds of software, but especially Apple's. Somebody probably thinks "yeah, maybe we should move the Fargler settings from the Narg to the Birp screen", and makes dozens of internet "documentation" pages (and sometimes their own!) obsolete.

Apple documentation reminds me of an argument I got into with an elementary school teacher over a textbook… it went on for weeks

> A prepositional phrase is a phrase with a preposition in it.

> A preposition is a word in a prepositional phrase.

One problem I remember from (briefly, fortunately) dealing with Apple APIs is wondering incessantly why every API (I was looking at) started with NS. Admittedly, these days any AI would tell me it stands for NeXTSTEP. But if you are creating a new thing with a quirk like this, please explain it once, in a place that's easy for the student to find.

The more useful answer is:

a) it needs namespaces

b) but giving people namespaces is unironically bad because it's what led to "enterprise development" style APIs like C#'s, where everything is named System.DataStructures.Collections.Arrays.Lists.ArrayList, as if giving something a longer name makes it more professional.

c) so instead two letters means a system framework and three letters means a user framework

I quite like terse but consistent conventions myself. I remember finally being able to quiet the tedious part of my brain that couldn't get past the NS conundrum when I finally came up with the NeXTSTEP thing as a reasonable theory.

In other words, my only complaint is that this Apple convention is not more easily discoverable. Or perhaps that the expert author of the book I was reading (this was back in the day) didn't feel the need to share it with his readers.

    Triangle theTriangle = new Triangle()

Lives rent free in my brain.

[deleted]

I want a linter against this. I have a hatred for those kinds of docs; they take up screen space, and it's worse than nothing.

The issue here is that people are treating reference materials as tutorials intended to cover your exact concern at the moment. You are expected to know what a narg is and what flargling means. In more real terms, the documentation for screen savers https://developer.apple.com/documentation/screensaver?langua... won't explain what a view is, what subclassing is, or what a Rect is. Those are required knowledge to consume the documentation and it's not a documentation failure that this is true.

No, you missed the point. The problem isn't "narg" or "flargling": those are just random stand-ins for normal words. The problem is that the description says nothing that isn't already said by the symbol name. Whether or not you know what "narg" and "flargling" mean, a documentation page for Nargflargler that just describes it as "Flargles the narg" provides zero additional information to you.

then how would you describe a nargflargler?

don't say... boop?!

God I hate this so much when I google some unknown word and it's just: "Nargflargler: When someone narg flargles something"

> Do not speak to them, certainly do not help, just watch.

Sounds simple, right?

I ran usability tests at a past company and have seen people who were incapable of holding back: blurting out explanations, pointing at the screen, even audibly grunting or whining to themselves when the participant made an incorrect guess about what something meant. One even grabbed the mouse.

Having a neutral moderator can help, as it allows the people who made the UI/docs to stay on mute or on the other side of a one-way mirror.

But I'd still suggest learning the "just watch" technique. If you master that and wish to take the next step, look up "think-aloud protocol".

I mean, if the test user can't figure it out at all, how is the rest of the UI/documentation supposed to get evaluated?

Great question!

If you let someone flounder on one task indefinitely then you don't learn anything about subsequent tasks. But if you correct them too quickly you won't uncover the other approaches they would have tried to complete the task. Most research plans define cutoffs such as:

1. Participant expresses extreme frustration or gives up

2. A couple minutes have elapsed from the first failed attempt

3. Participant unsuccessfully attempts three distinct approaches

If the test reaches one of your cutoffs then the interface/docs have failed the task and the moderator can skip to the next task or question. Sometimes they'll also offer to show the participant the expected solution or explanation.

Exactly. You want to learn as much as possible from each study. Explaining too soon reduces the amount learned, as does ending the study early because a small hint wasn't provided to get to the next step.

> You can also record it to show them later, but for various reasons it doesn't resonate quite as strongly when it's not live.

Yeah, because it's wasting my time having to watch people who have literally never heard of something as basic as keyboard shortcuts. It's fine if I actually have the time to explain to some Gen Z kid how Ctrl+X/C/V works, but being forced to sit around and watch someone with that level of non-understanding of how a computer works when I've got a full backlog of shit to do is just agonizing.

With a video recording, I can at least go forward and see where they actually have problems with stuff that is in my influence and skip over the utterly boring moments that are just wasting my already limited time.

Before I saw your response I removed this sentence from my post as I realized it was not central to my main point. However, I still agree with it and am happy to explain why.

> wasting my time having to watch people who literally have never heard of something as basic as keyboard shortcuts

First it depends on whether the audience for your product includes people who do not know keyboard shortcuts. If that's not your target audience then the rest of the test may not be valid anyway.

Otherwise, there is utility in forcing yourself to watch your users struggle with your product. The best product developers/owners I know have a bottomless appetite for observing people use their product, even if doing so means deferring the rest of their "full backlog of shit". Perhaps they're less efficient in the short term at churning out lines of code, but the understanding and empathy they develop makes them significantly more effective in the long term.

It's like how expert athletes often watch videos of themselves or competitors (when applicable) to understand the nuances of their play - once you understand something very deeply the small things start to matter more, until they dominate the game.

If you are a master of UI/UX and you observe that a user doesn't go through the paths you've created, it's an opportunity: you might be able to learn something that would make your approach more successful across a host of different users that, up to this point, you clearly are not winning the game against.

If you take an antagonistic approach and curse the idiot for making you watch you have not even put on a jersey yet.

This is our documentation workflow as well: Write it, and then have someone less or not experienced with the system execute the runbook. Also, encourage everyone to work on refining and improving the docs, because after 5 years with a system, I will have blind spots someone less experienced can point out.

One lesson I've learned from that: it's a lot about managing the user's confidence.

To do this, the instruction "invoke this shell command" is now usually accompanied by a number of sections collapsed by default: What does a successful invocation look like, especially if it contains "ignorable warnings"? What errors could occur, and are they fatal or can they be fixed on the fly? More complex shell-stuff is often also accompanied by an explanation of what all of it does.

And yes, this means that sometimes one step in a runbook has a page of documentation if you read it all. But we've found that this helps a lot when onboarding new team members, as by now a lot of the standard runbooks are also a pretty good introduction to the quirks of the normal tools we use.
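
As a sketch of the shape this takes, assuming the runbooks live in something like GitHub-flavored markdown, where a <details> tag collapses by default (every command and message below is made up):

    Step 3: run the migration.

        ./migrate.sh --env prod

    <details><summary>What success looks like (including ignorable warnings)</summary>
    WARN: legacy index skipped (known, safe to ignore)
    Migration complete: 42 tables updated.
    </details>

    <details><summary>Known errors and on-the-fly fixes</summary>
    "lock timeout": another migration is still running; wait 5 minutes and re-run.
    </details>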

A good first exercise for new hires! (And I say that having been both a new hire who's updated the documentation after trying to execute it, and someone who's guided a new hire when the documentation proved inadequate.)

Any kind of documentation has a target audience. Your test is very valuable if and only if the target audience is a total beginner. Of course it's still very hard to write good documentation even once you've identified your target, but having someone totally illiterate in the subject matter review your documentation is about as useful as having me review a PhD thesis in quantum physics. It just doesn't make sense (trust me :).

Writing documentation is hard. Start with: Who am I writing this for?

edit: I may have misread OP's "with minimal expertise" as "total beginner". They're two different things, absolutely.

For most public documentation, you don't get to pick your audience. You think you'll have people with certain experience, but then it turns out you're wrong. Usually a lot of the time. And even when you're not wrong, having the steps essentially from scratch listed out reduces the number of times people get stuck, because they think about things they may have missed.

I cannot tell you how many times I've had to go through 30 hyperlinked pages of fluff explaining universal basic concepts before finding the five sentences I actually needed (buried in five different places).

And just as many where people explain in detail exactly how to do foo with bar without explaining why I would want to do foo in the first place and what a bar even is.

Way too much documentation is like this. Then again, lots of times asking coworkers about an existing system or a new ticket that's not detailed properly ends up with them saying 30 pages of fluff to me before I can get to the nugget

This really is one of those things that AI can improve, and already improves today.

As much as I'm not an AI booster, it has helped a lot when I hit a wall with poorly done documentation where the related bits I need are scattered all over and even a text search isn't helping me

I was gonna write something similar. Know the audience. I've also come to the conclusion that "total beginners" (and certainly "minimal expertisers") didn't know where to read the docs anyway, so it didn't matter.

In other words, people who are used to reading docs can read (good) docs just fine.

Yes, of course, good docs are a must. They are critical to success. But not all docs have to explain how to use a right-mouse button.

Perhaps in addition to a description of the expected audience, it might be an idea to list some assumptions made about the reader? e.g. has installed software previously, confident with bash commands, &c

I almost systematically use BLUF (Bottom Line Up-front) when I write docs, I think I'll make TABLUF a thing from now on (Target Audience and Bottom Line Upfront) :)

The experts are likely to be skimming and interpolating your doc, so they'll get through it but you won't know why. You won't know if your doc works, or if it even addresses the subject matter. This is also true of academic papers.

My mom taught CS in the 1980s, and told her students on day one: "Computers are stupid, they will only do exactly what you tell them to do, not what you want them to do." Program code is, in a sense, a tutorial for an utter beginner. The benefit of coding is that you can do the "beginner test" over and over without wasting anybody's time, so you know that the computer will get through it. But an expert (including yourself) might read that code and never see that it does or doesn't work.

It's crazy how bad most onboarding docs are for corporate teams. I think they're a great first look at the culture and how much of a hassle the role will be. The last three teams I've joined have been brutal in how little was documented or how out of date the docs that did exist were. I've had to spend up to two weeks tracking people down to find out what access group I need for our logs, deploy pipeline, etc., and I end up writing a new doc that's good for its point in time and immediately becomes out of date when someone adds a new system or access group but doesn't document it anywhere. The one team I was on previously that got me everything I needed in about two days was great, but it's sad that this isn't the norm. Everywhere else has been pretty hostile to getting set up, and the poor onboarding experience has been a preview of the developer experience. My current role is standing up a new devex team, which I'm hoping turns the tide here.

It's not very crazy to me. Most corporate teams are overrun with feature creep that "is very simple" (i.e., it takes 3x as long as estimated, because the codebase is a mixture of legacy and overengineered spaghetti for that one customer with edge-case requirements, combined with tests that are meant to be run in a Jenkins job which takes 4h to complete).

Then, the engineers are expected to write the docs in between these tickets, and docs are seen as something "to be done within 30 minutes"; of course the docs will be comically (or tragically, depending on your perspective) bad.

Most people have 0 idea on how to write good docs, so in 30 minutes, they write stream-of-consciousness docs and return back to the ticket hell.

Most places I've been could have been upgraded with stream of consciousness. It's not surprising that they aren't all perfect, and the one place that was done to a very high standard was properly overdone, but at most places whatever counts as onboarding docs either doesn't exist, is essentially unusable, or directs me to legacy things that on day one I don't know enough to not bother with

if you're writing a new doc to "fix" this situation, you're committing three crimes: 1. all that old documentation still exists, misleading and confusing people; you've now made the problem n+1, 2. there's no strategy to keep your new document from turning into an old, stale, out-of-date document for the next person, 3. you've addressed the wrong problem ("nothing's documented!") and feel superior to all the jerks who came before you.

>> My current role is standing up a new devex team which I'm hoping turns the tide here.

I'd love to know what you're doing different that can help with this problem. Writing more, new documentation is unlikely to be it.

You're right that it's not a complete solution. The overall processes on this team aren't good (we never do a retrospective, ever) and I don't get to decide how we solve #2 and #3. The best I can do is bring things up to date, keep them up to date as I run into new info or we add new systems to access, and hope that future new hires are smart enough to check created and last-modified dates on documents to find the most recent one.

Sometimes I wonder if it's a respect or control issue. I once worked in a non-technical position that interfaced with a complex order management system. We were given zero access to documentation and had to rely on trial-and-error and the reverse-engineered model held in the head of one specific supervisor. I'm almost certain that certain errors that appeared over and over were caused by us temporarily clearing previous ones incorrectly. This was especially frustrating because we were 2nd shift, so dealing with those errors could mean the difference between getting home that night or getting home the next morning. It was hard to tell where along the line between, "They're not sophisticated enough to make use of them," and, "We don't want our processes leaking," we fell, according to the higher ups.

My mother worked in engineering from the late 80s until the early 2000s and always told me about people who didn't document things because they wanted to be un-fireable. I didn't believe her or take it too seriously until some of these more recent teams, but I think it's a lot more common than it should be.

A technical writer's first task is to start the document that onboards the next technical writer.

Reading through bad setup docs is 10x more stressful when they are part of new employee onboarding.

I’ve always advocated for new employees’ first contributions to be fixing problems they had with these setup materials. They come in with fresh eyes and no context, so they are the best possible reviewers.

My first ever software developer job, I was hired with basically no knowledge or experience, to learn on the job (I was very lucky). I knew the MS-DOS command line pretty well from my childhood, but hadn't ever used POSIX. I was given a MacBook Air and some docs to follow.

Trying to follow the docs, supplementing with a lot of googling, I somehow managed to remove the tar program from my system. This broke literally everything. Had to stop halfway through the multi-day process to do a clean reset and start over from scratch.

We called this the “receptionist” test decades ago at the small company I was at - after we thought we were done, we’d give it all to the receptionist and ask her to use it; and we’d hang our heads in shame at everything we forgot and head back.

There’s a version for kids to show the details of how to program by literally interpreting steps. https://youtube.com/watch?v=n4rh2jD8OkY

I worked with someone who was great with this. They’d go through the docs and do exactly what was said, document where problems were hit and then repeat from scratch again and again. Seemed slow but their docs were excellent and I’m sure it saved more time having him hit each thing once than everyone else hitting them loads.

I am that guy. I will also say from experience: It does not pay. Never once has it been ack'ed in a year end review (which controls bonus, salary increase, and promotions). As soon as a manager sees you as "The Wiki Guy", they take you for granted. As I grow older and more cynical, my view on internal docs: (1) Write them for yourself. (2) Write them to make people go away when they ask you questions ("Did you search the Wiki?").

I had this issue during a job interview exercise. Their "follow these steps exactly" instructions were simply broken. The root cause was that they were having people re-use the same shared remote Amazon desktop system. Each candidate got their own home directory, but they wouldn't reset the image between candidates. The person before me had used up 98% of the drive space. When I followed the 'step by step' guide, nothing worked, because the system was out of drive space, but... I wasn't seeing 'out of drive space' messages directly - I was seeing their 'setup shell scripts' looking like they worked, but then nothing did.

I honestly thought this was some sort of trick exercise to see how I deal with broken processes, and I was writing fixes to their docs and shell scripts to deal with error states, and reported back to the person. I initially got a 'no, this isn't that sort of test. the docs work, just follow them'. After more back and forth, I got 'oh, I see that might be broken, yeah, just carry on'. I fixed what I could, made a couple commits back up, but was then told my commits needed more context, which I then added, and promptly never heard back from them again. Until... weeks later, HR reached out to say "we've gone with someone else". I recounted this story and got at least some semblance of feigned shock of 'that's not how any of this is supposed to go'. I'd kept some screenshots and emails, but they didn't care to go down that road.

tldr - Employers giving tests, please run through your own exercise processes now and then (or maybe even automate them with some smoke tests).

Funny enough, we had a hell of a time running a helpdesk where we designed the docs -- many of which I wrote myself -- to be executed exactly as written.

Guess what humans hate to do? Especially the smart ones, which of course you want to employ on your helpdesk? They just would not read the damned instructions.

I think this was because many of the instructions were dumb. We were explaining decades-old bank stuff. It didn't make sense, but it's what you had to do! So these guys tried to 'fix' it, and in doing so, broke it.

The whole support model was predicated on this idea that the 3rd level guys would write stuff that the 1st level guys would slavishly follow. It never worked.

You could probably fix this, to some extent, by adding a sidebar to the instructions that 1) acknowledges that the procedure doesn't seem to make any sense, and 2) points out why the seemingly obvious fixes won't work. That's usually immensely helpful to me as a reader, so I don't have to waste time wondering if I misunderstood the instructions or the author misunderstood the procedure.

Joel Spolsky famously wrote in the year 2000:

    > Users don’t have the manual, and if they did, they wouldn’t read it.

Maybe being able to follow a set of (seemingly silly) instructions should be part of the interview/onboarding process. And emphasised at job performance time.

Problem is, a lot of the time silly instructions are silly because they are wrong. Like, why did you turn left and try to drive through that river? Those instructions assumed a bridge was there, but it washed away 10 years ago. A new bridge exists, you can see it, so obviously you take that one instead.

You can achieve a lot of this by creating a blank virtual machine with "just the operating system" as a starting point and stepping through your own instructions from there.

My ideal state is that for my kind of .NET work, it should be sufficient to simply install the latest Visual Studio, check out the Git repo, and press "play".

That's not always possible, so then the exercise becomes to simply document each step, ideally with both English words and a CLI snippet.
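
For example, one documented step might pair the English words with the snippet like so (the repo URL and project name are placeholders; dotnet restore/build/test are the standard .NET CLI verbs):

    # Fetch the code and restore NuGet dependencies:
    git clone https://example.com/acme/widget.git
    cd widget
    dotnet restore

    # Build and run the tests once before opening the IDE:
    dotnet build
    dotnet test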

I agree that testing from a vanilla machine is important.

But there's also the fact that your language doesn't necessarily say to the user what you think it does. You can't read it from the position of someone new. Only someone new can.

And a set of commands to paste to CLI isn't the full extent of what we usually mean by documentation.

Yes, more of this!

I am a big fan of "clone, F5, and it should run". If specific steps are required, I put them in a setup.ps1, and the details in the readme.md.

If the project has external requirements, I put a link to the repos, which should all be... "clone, F5"...

When I type F5, my terminal writes "~" but nothing happens, what did I miss?

In case you weren't attempting to make a point through irony, GP appears to be using "F5" informally as shorthand for "instruct your IDE to attempt to build and run the code". Presumably, that kind of documentation wouldn't normally literally say "F5" there unless a specific IDE had already been prescribed. The point was simply that the user shouldn't be required to do anything manual to set up the code, when starting from scratch, except perhaps to authorize the automated setup procedure.

Indeed, snapshots are an amazing friend for this.

Or let the Junior rewrite the docs while they're scratching their head, and push an update once they've figured it out.

I'm a senior designer who often contributes to front-end code when it's convenient for my client.

Fixing and updating the README when I join a new team and set up their dev environment is always extremely well-received.

If I'm gonna untangle something, I may as well write some notes. If I'm writing notes on it already, I may as well refine the grammar a bit and update the docs. It's really quite a small effort compared to the main work of learning the system, so I don't quite get why so few people do it.

Wow, way to double down on “I really hate everyone who doesn’t have exactly my skill set and experience.”

I'm ... confused what you mean. If the junior is gonna untangle the docs anyway, why not make them directly update the parts that confused them once they're through it.

They're not necessarily prohibited from asking questions if they're stuck, though. But also search in the chat channels for similar issues.

Updating docs in source control also onboards folks to code review. It would be weird to update docs and get a hostile reception.

While it's nice to walk through with someone and conduct a usability study, just leave it better for the next person (who could be yourself, if you forget). That has happened before.

I think you are misreading the parent comment here.

I'm a Jr. sysadmin at a medium-sized software company. Whenever I document workflows for our users, I have two colleagues of mine who have no connection to IT work through them, and I add the small gotchas they flag to the docs.

It saved me a whole bunch of headaches for when other users get enrolled in these workflows.

I've been doing some cal/QC functions recently after years of not touching them. Since I last did it, I've forgotten some of the knowledge that is just assumed. The answers to my questions are documented, but not in a place that is accessible from the production side, and they have lived as community knowledge in production. I've been making a list and updating the documents to fill in some gaps.

Unfortunately some of the production people aren't comfortable enough pushing for changes in the documentation so some of my job now is to ask what they've noted and get it added.

I'll go against the grain and say that fumbling is how you learn. The easier it is to get to the end of the tutorial, the less you learn in the process. If you learn math from a bad book, you have to organize your own notes to untangle the mess. If it's laid out all neat and clear like a straight highway, you never wrestle with the concepts and you don't learn.

That's something I came to accept as well - deeper understanding will only come from challenge. Unfortunately, there isn't always the opportunity to let people fail, and that opportunity is definitely not in a set of reference docs.

> and that opportunity is definitely not in a set of reference docs.

Okay, but GP is talking about tutorials, which are a completely different form of documentation.

I like to always provide a docker image which can be used to execute whatever solution I'm developing. Most of the time the docker image isn't even used, but it's an important exercise because I'm forced to run my solution on a fresh system, so the resulting docs will invariably be more complete, and it also documents the dependencies in a way you can easily verify.
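
A minimal sketch of that exercise (the base image and build commands are placeholders for whatever your project actually requires): if the Dockerfile mirrors the docs and the docs omit a dependency, the build fails fast on a fresh system.

    FROM ubuntu:22.04
    # Exactly the dependencies the README claims are needed, and nothing else.
    RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git ca-certificates
    COPY . /src
    WORKDIR /src
    # The same commands the README tells a new user to run:
    RUN ./configure && make && make check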

This is basically the user testing approach described in "Don't Make Me Think" by Steve Krug. You can use it to test the usability of your applications as well.

Don't speak to them or help them at all?

Suppose they get stuck on the first step in a multistep procedure. Do you just let them keep flailing on that step for however long they are available, so all that you learn from that entire session is that the first step needs rewriting? Or do you end the test and let them go, again learning nothing beyond that the documentation for the first step sucks?

Wouldn't it be better at that point to help them on to the next step and then continue on having them test the rest of the steps?

I am stuck in an organization for some personal reasons.

The first thing I noticed when I joined was the culture of "Please ask when something is not clear", after I was given a quick overview in person.

You guessed it: almost everything is unclear. A mess. I need to ask a lot. Task descriptions, purpose, reasons, whys, wheres, what does this comment mean, why do these things contradict each other, and so on, and so on.

And when asking anyone except KG, usually the answer is: ask XY. Or KG.

People are always busy, always in a rush, and give condensed answers that raise the same number of new questions as they answer.

When KG is out, productivity slows down.

And all this is beyond the usual: in a meeting, out with a customer, on holiday, sick, the children are sick, held up in a traffic jam, car broke down, need to finish project P so schedule something for next week, and all those kinds of common things making the relevant person unavailable when "something is not clear".

And beyond forgetting 4 of the 15 new pieces of info given by the time we are finished with the conversation. No written trail to look back at.

When it takes 3 people to paint a complete picture, all of the above happens three times in a row, or in a never-ending loop.

Productivity suffers, quality suffers, I will leave as soon as I can.

Positive things? Probably that the expectations are low. And they pay well. And by now I am irreplaceable in a local subset I was hacking together (I do not call it work or development); not even KG can help others there! I will leave on my own terms (as usual, unluckily).

I started a company to do exactly this a few years ago, and got to work with amazing companies testing their developer experience.

The problem is not the docs, it's Conway's law. One team designs the API, the other team designs the portal, and another team designs the SDK. The user has a holistic experience that cuts through each team.

That, and the docs are usually written first by the most technical person around, who has a hard time sharing the world view of a noob.

I think the JavaScript ecosystem did a great job at this. Take a look at the documentation of React/Vue/Svelte; it is fascinating how they make it so accessible, both for newcomers and experienced developers in the field.

In contrast, the Java ecosystem has been really bad at documentation in my experience. Most of it is just explanations of function signatures, without any words on how those functions work together as a whole system. The situation is even worse on Android, where there are dozens of standard APIs to achieve the same functionality.

Those are references, not tutorials. They are there to refresh your memory. Usually you look for code examples or a guide to learn how those work (even AOSP apps, if needed).

As far as I recall, many libraries in the Java ecosystem, as well as the Android API, don't have the official tutorials or guides you're referring to. The JavaDoc and the Android API reference are often the only officially available resources.

So no, those aren't just there to refresh developers' memory. In many cases, they are the only resource for learning the system from scratch.

About JDK Java docs:

    > Those are references, not tutorials.
This is a great phrase. I fully agree with your sentiment. To me, I never read Javadocs in HTML-only form. I always read them in an IDE, along with the library code in question. If anything is unclear from the Javadoc, then read the code (which immediately follows the Javadoc).

I also occasionally fall back to the source when the documentation isn't comprehensive enough.

But as library users, we're generally not supposed to have to learn the system from its source, are we?

> Usually you look for code examples or a guide for learning how those work

... which in practice means, particularly for stuff that recently changed, that you go to StackOverflow only to find out that the majority of posts are horribly outdated and don't even compile any more.

The other side is code examples that technically work and show, say, the syntax for using a programming language's or framework's shiny new feature... but manage to dumb the example down so far that one has a very hard time wrapping one's head around how to use this feature in a real-world application.

I was reading The Art of Unix Programming (E. Raymond), and one piece of advice was that every library should come with a program. So even if it's just a todo-list kind of thing, I think it's quite nice to have.

> Especially when using docs for critical tech I only use from time to time (where I forget lots of it).

An important point that's easy to lose sight of when writing, while that knowledge isn't lost yet.

I was talking to a friend who is a beta-tester for crochet patterns: the business owner sends out a pattern to a trusted group and gets feedback on the descriptions, the work, and anything that would make it easier, before they put it up for sale.

I do think a lot of developer tutorials and documentation don't take into consideration that many people might not have a common understanding of the terminology, especially if the reader is coming across this problem or process for the first time.

So doing the basics of product design? :D Sounds like a good approach! Sadly user tests or other forms of iterating are often overlooked.

The golden rule: Plan -> Act -> Test -> Repeat

Fully agree. Good docs are essential for scaling a team beyond the first few hires. I always make a point of filling in all the gaps I had to bridge myself during my onboarding, and ask the next hire to do the same (and carry it to the next hire, and the next). This helps keep the docs up to date with the relevant knowledge, since it's always being filtered through the lens of a brand-new computer and a dev with minimal context.

People here are talking about it as if it's merely a problem of the wrong target audience, when the problem is that a lot of docs are straight-up lies. The example setup steps and configuration on the front page itself fail. That's what makes me wish I could shoot someone or something.

Thoroughly agree. Where I come from it's called "shoulder surfing". It is really important to not help.

> If not, note every single place they fail, address each one, and repeat with a new user.

Might not this loop be invoking Goodhart's Law?

What is "address each one": are we just changing that document, or are we (also) changing something in the system that the document is about?

If no newbie has any problem following the document, is that still a good document for non-newbies?

If no newbie has any problem with the system that the document is about, are there any downsides?

[deleted]

I absolutely love this approach. It is in the spirit of https://1x.engineer/ and it should be applauded.

> have someone with minimal expertise go through your docs with the goal of achieving the goal of the docs. Sit next to them or screenshare. Do not speak to them, certainly do not help, just watch. Watch them fumble. Watch them not know what to do

And if you have access to user experience researchers, go talk to them! They are experts in running this kind of scenario, and can help you avoid all the pitfalls that might bias your results

Totally. Something that I see a lot is software that tries to read a config file during startup and bails out (sometimes with no error message!) if the file doesn't exist. Or tries to write the config file into a directory that doesn't already exist.

I'll get things working locally first, but I always have to test it in docker/other fresh test env (Vagrant), just to be sure I haven't committed the same sin myself.

I wonder if we now have the tools to build unit tests for docs; an LLM should be able to take on the persona of a beginner and try to follow your doc. For bonus points, use a dumber/older model that can't have trained on your API.

Basically, ergonomic testing, but for your doc instead of your software.

I've written a lot of docs, and one big issue I saw play out over several years was watching the overall skill of the team members drop. They were told by their manager to use the docs, which they did, and then seemed unable to think outside the docs when needed. For tier 1 support roles, I think the docs were helpful to get them going, but it seemed like the docs acted as a crutch for most of the team, to never be able to grow in their role and move up to tier 2. I'm not sure how to solve for this problem.

I think that always depends entirely on the docs and how people are instructed to use them.

From a software engineer's standpoint, we have a large collection of docs for the internal platform we run. The docs for other engineers follow the diátaxis framework [0] for documentation. It's the best approach we've found so far: the overall questions and guidance my team needed to provide dropped by a significant margin, while the PRs we now receive have increased in quality and quantity.

[0] https://diataxis.fr/

I think that you are interpreting this outcome as a technology-wise negative. Instead, I will offer a commercial positive: if the docs that were written are so great, then you can hire lower-skilled, cheaper support staff. Training is also cheaper (because of the docs). If I were senior IT mgmt or biz mgmt: that is a win.

    > never be able to grow in their role and move up to tier 2.  I'm not sure how to solve for this problem.
I have a selfish answer. Who cares about staff that don't improve? Really. Read that twice. Leave them behind in the dust. I am always blown away when I meet someone in my career who has been doing some shitty support role and has barely progressed (career-wise or tech-knowledge-wise). Who are these people? Every day, they dig a hole, then at 4PM they fill the hole. Rinse and repeat! Someone who is smart enough to "figure it all out" and write docs should be promoted, or moved to another support team to repeat the same pattern.

In the past (20 years ago), those tier 1 roles were a great feeder for the organization. Because that role touched so much, it meant everyone had a lot of perspective on the organization as a whole, and thought about support and maintenance while building new things.

It’s easy to say who cares and hire from the outside, but that organizational context and care for support is lost. People build whatever and throw it over the fence, which makes everything worse, imo. Those people also tend not to stick around, so they have no skin in the game and it’s hard to develop culture as people rotate in and out frequently.

There are always some people who will never learn, and these people are cheaper, but there are other hidden costs as you seek to optimize for low-skill workers.

I always write my own notes when setting up, to "fill in the blanks" of the guide, then I create a PR with them.

We do this in game development.

Watch someone play the game for the first time. Don’t interfere. See if they can figure out how to play.

Play testing is the most important part of game development. Indies who struggle to come up with concepts are really sleeping on this. If you run play tests well enough your roadmap will almost write itself. Players will do and ask for things that you would never dream of.

I think an intense culture of playtesting is why Valve puts out games so rarely. Their new strategy seems to be to keep a title semi-secret for years while a small army plays it full time. If Deadlock makes it to market, it is almost certainly going to be an acceptable game to most who are even remotely interested in the genre.

are you interested in giving a talk/presentation about this?

This works great if they speak their thoughts out loud, in real time.

> just watch.

You need to be brutal with yourself for this, and understand you're chasing popularity, and not necessarily revenue.

It's good to be popular with your users, but if your users are not your customers...

> I've used FAANG docs that don't come close to passing the above criteria.

... FAANG is an excellent example of this: because their documentation and code are so bad, integrations always take longer than anyone can estimate, which actually discourages managers from considering a second integration.

That is to say it's not necessarily good business to "pass the above criteria" and I think it's important to remember that.

Ask the guinea pig (read: victim) also to think aloud.

LLMs have mostly eliminated the need for this. They are quite good at explaining things.

Correction: They are very good at writing seemingly-good explanations. The explanations may or may not be correct.

Correction: They are quite good for this specific thing: easy, beginner-level stuff. There, they are much MORE correct than they are wrong.

The status quo is a moving target. 6 months ago, what you said would have been fully correct. This is no longer the case; now you are only sometimes right and mostly wrong. It is getting better.

[flagged]

Don't like your tone. Please speak in a non-offensive way or leave.

We're quickly approaching the point where you can have an LLM do this, and if it succeeds, the docs pass; if not, time to edit.

I was gonna say this. Really good idea: have an LLM go through the docs and try to implement something. The challenge would be to prevent it from using any prior knowledge or experience, depending on the docs' target audience. A good prompt is essential.

Without AI, it was really hard to come to understand some docs. Today, if you don't use AI for these situations... *shrugs*

In most cases it's not that the docs author forgot users won't have the same toolchains. They simply don't bother reducing config files to share just the source code, indirectly pushing users to make use of the same tools.

Hopefully in 20 years no one will need to check the source code of anything, and programming will be elevated even more.

50 years is a crazy amount of time to stay this primitive. Tech shouldn't just evolve for end users.