Agreed, this is a common division of labor and it simplifies things. It's not entirely clear in the postmortem, but I speculate that the conflation of duties (i.e. the enactor also being responsible for janitor duty on stale plans) might have been a contributing factor.

The Oxide and Friends folks covered an update system they built that is similarly split and they cite a number of the same benefits as you: https://oxide-and-friends.transistor.fm/episodes/systems-sof...

I would divide these as functions inside a monolithic executable. At most, emit the plan to a file on disk as a `--whatif` optional path.
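A minimal sketch of that monolithic shape, with illustrative names (`plan`, `apply`, and the toy diff logic are all assumptions, not anything from the systems discussed above): planning and execution are just two functions in one process, and the plan only leaves the process as an optional human-readable artifact.

```python
# Hypothetical sketch: plan() and apply() live in one binary; --whatif
# prints the plan instead of applying it. The diff logic is a toy.
import argparse
import json

def plan(current: dict, desired: dict) -> list:
    """Compute the steps needed to move `current` toward `desired`."""
    return [{"op": "set", "key": k, "value": v}
            for k, v in desired.items() if current.get(k) != v]

def apply(current: dict, steps: list) -> dict:
    """Execute the steps in-process; no file handoff involved."""
    for step in steps:
        current[step["key"]] = step["value"]
    return current

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--whatif", action="store_true",
                        help="print the plan instead of applying it")
    args = parser.parse_args(argv)

    current = {"replicas": 2}
    desired = {"replicas": 3, "image": "v2"}
    steps = plan(current, desired)
    if args.whatif:
        print(json.dumps(steps, indent=2))  # plan is a debugging artifact
    else:
        print(apply(current, steps))

if __name__ == "__main__":
    main([])
```

The plan here never becomes a communication channel between processes, which is the point: there is no second reader whose view of the file can go stale.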

Distributed systems with files as a communication medium are much more complex than programmers think, with far more failure modes than they can imagine.

Like… this one, that took out a cloud for hours!
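To make one of those failure modes concrete: a reader can observe a half-written plan file. A common mitigation (a sketch, not anything from the incident above) is to write to a temp file and atomically rename it into place; note this closes only the partial-write hole, not staleness, duplication, or lost updates.

```python
# Hypothetical sketch: atomic plan-file publication via write-temp-then-
# rename. os.replace is atomic on POSIX filesystems, so a concurrent
# reader sees either the old complete file or the new complete file.
import json
import os
import tempfile

def write_plan_atomically(path: str, steps: list) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(steps, f)
            f.flush()
            os.fsync(f.fileno())  # durable before the rename is visible
        os.replace(tmp, path)     # atomic swap: no partial reads
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```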

Doing it inside a single binary gets rid of some of the nice observability features you get "for free" by breaking it up, and it can complicate things quite a bit (more code paths, a flag for "don't make a plan, use the last plan" mode, a flag for "use this human-generated plan" mode). Very few things are a free lunch, but I've used this pattern numerous times and quite like it. I ran a system that used a MIP model to do capacity planning, and separating planning from executing the plan was very useful for us.

I think the communication piece depends on what other systems you have around you to build on; it's unlikely this planner/executor is completely freestanding. Some companies have large distributed filesystems with well-known, well-tested semantics, schedulers that launch jobs when files appear, or ~free access to a database with strict serializability where they can store a serialized version of the plan, etc.
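One way the database variant can also guard against the stale-plan problem mentioned upthread is to stamp each plan with the state version it was computed against, so the executor refuses plans made from an old view of the world. A sketch under stated assumptions (the store, schema, and freshness window are all illustrative, not a real system's API):

```python
# Hypothetical sketch: planner publishes a plan tagged with the world
# version it assumed; executor only runs fresh plans matching its own
# view. A plain dict stands in for the serializable store.
import json
import time

MAX_PLAN_AGE_S = 300  # illustrative freshness window

def publish_plan(store: dict, steps: list, world_version: int) -> None:
    """Planner side: record the plan plus the state version it assumed."""
    store["plan"] = json.dumps({
        "created_at": time.time(),
        "world_version": world_version,
        "steps": steps,
    })

def load_executable_plan(store: dict, current_world_version: int):
    """Executor side: return steps only if the plan is fresh and current."""
    raw = store.get("plan")
    if raw is None:
        return None
    plan = json.loads(raw)
    if time.time() - plan["created_at"] > MAX_PLAN_AGE_S:
        return None  # too old: janitor's problem, not executable
    if plan["world_version"] != current_world_version:
        return None  # world moved on since planning: replan instead
    return plan["steps"]
```

The executor rejecting rather than cleaning up stale plans keeps the division of labor from the top of the thread intact.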