> For somebody who doesn’t do server stuff, is there a general idea of how many stages a typical server might be able to implement?

For the HTTP server in the article, my understanding is that the two stages you're seeing are the ones there are. Or maybe three, if disposing of connections is slow enough to deserve its own stage.

I'm not sure which I prefer. On one hand, there's some expensive coordination in passing those file descriptors around. On the other hand, having separate code deal with creating and closing the connections makes it easier to focus on the actual performance issues where they appear, and creates an opportunity to dispatch work smartly.
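As a rough sketch of that second option (my own illustration, not the article's code): one accept stage owns creating and closing connections, and hands ready sockets to worker stages over a queue. Inside a single process the hand-off is just a queue put, with no fd-passing machinery; all names here are made up.

```python
import socket
import threading
import queue

def echo_upper(conn):
    # Worker-stage logic: focus on the request, not the connection lifecycle.
    data = conn.recv(1024)
    conn.sendall(data.upper())

def run_server(handle, workers=2):
    """Accept stage owns connection setup/teardown; workers only pull
    ready sockets off a shared queue. Illustrative sketch, not the
    article's implementation."""
    q = queue.Queue()

    def worker():
        while True:
            conn = q.get()
            if conn is None:        # shutdown sentinel
                return
            try:
                handle(conn)
            finally:
                conn.close()        # disposal could become a third stage

    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()

    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]

    def accept_loop():
        while True:
            conn, _addr = srv.accept()
            q.put(conn)             # hand-off is a queue put, not fd passing

    threading.Thread(target=accept_loop, daemon=True).start()
    return port
```

Passing fds to a separate process via `sendmsg`/`SCM_RIGHTS` buys you isolation at the cost of that extra coordination; the in-process queue above keeps the stage boundary without it.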

Of course, you can go all in and build a green-threads server where every bit of IO puts the work back on the queue. But then you would use a single queue, and dispatch whatever code works on it. So you get more branching, but less coordination.
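That single-queue shape is roughly what an event loop like Python's asyncio gives you: every `await` on IO parks the current task and lets the loop dispatch whichever task is ready next. A minimal sketch (a self-driving demo, not any particular server's code):

```python
import asyncio

async def handle(reader, writer):
    # Each `await` yields back to the loop's single run queue; the loop
    # dispatches whichever coroutine has IO ready next, so the stage
    # boundaries live in the code's branches, not in hand-offs.
    data = await reader.read(1024)
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def demo():
    # One process, one run queue, no fd passing between stages.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(1024)
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply
```

Accepting, handling, and closing all interleave on the one loop, which is the "more branching, less coordination" trade in miniature.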