Lots of confusion between two things in this thread: 1) is this a good idea? 2) is this a good implementation of the idea?
Whether this is a good idea has been discussed to death already. But assuming you want this (which most people won't; the readme says as much), is this a good implementation of the idea? Yeah, it is.
Requiring a CI pass to merge, and a tool to add that CI pass after some unspecified process, seems like a neat, minimal implementation that still prompts the author enough to prevent most accidental misses. Is it complete? Of course not, but checklists don't need to be complete to be useful. Atul Gawande's book "The Checklist Manifesto" talks about this a bit: just the act of asking is often enough to break us out of habits and get us thinking, and it will often turn up more issues than are on the checklist itself.
At Google we have a ton of tooling that amounts to light automation around self-certification. Many checks on PRs (CLs) only require you to say "yes I've done the thing", because that's often sufficient.
Surely if this is about creating a step in a checklist, all you need is a box to tick in the PR template? That would be an even simpler version of this, with far fewer moving parts, and easier to use.
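For concreteness, that version is a few lines of markdown in GitHub's standard PR template location. A minimal sketch, with a hypothetical checkbox wording:

```sh
# Minimal sketch of the checkbox version: a default PR template with
# a self-certification tick box. The checkbox wording is hypothetical;
# the path is GitHub's standard one for PR templates.
mkdir -p .github
cat > .github/pull_request_template.md <<'EOF'
## Checklist
- [ ] I ran the test suite locally and it passed
EOF
```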
I think part of the criticism of (1) here comes from the complexity of the solution, which makes it feel like it should be competing with a more fully-fledged CI solution. But for a tool where the goal is really just to let the developers assert that they've run some tests, it's surely a lot more complicated than it needs to be, no?
PR templates are optional and many tools bypass them. My previous company tried using them and couldn't make the process stick, because people used various tools to create PRs.
I'd argue that depending on PR templates, vs. depending on devs having the signoff tool installed, involves a pretty similar number of moving parts.
Perhaps more important than moving parts, though, are the failure modes. The failure mode of this tool is that your PR is blocked from merging until you run it. The failure mode of the PR template is that you never realise you missed the checkbox.
> for a tool where the goal is really just to let the developers assert that they've run some tests, it's surely a lot more complicated than it needs to be, no?
I think the point is to have the right amount of friction. Too little, as mentioned above, and you don't realise when you've lost the protection the process gave you. To me this is a good solution precisely because it gives you a little bit of friction, but not much, and cheaply.
Solutions with less friction include writing in the PR description that you ran the tests, or just knowing that you ran them without writing it down anywhere. I'd suggest that the safety nets provided by these options are less useful precisely because they have less friction.
Is this really more friction? I can very easily imagine creating an alias that runs `git push && gh signoff` for the cases where I know the tests have passed, and I can just as easily imagine running that alias right after committing a change, having forgotten to run the tests. If anything, opening a browser and ticking a checkbox there feels like more friction to me.
I could imagine there being more friction if the tool somehow checked that a test suite had been run on the current commit, but that doesn't seem to be the case. It's just a more convenient way of checking a checkbox.
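If you did want the tool to check that, the obvious fix is to wrap the signoff in the test run rather than the push. A rough sketch, where `make test` is a stand-in for whatever your project's real test command is:

```sh
#!/usr/bin/env sh
# Hypothetical wrapper: only sign off if the tests actually pass on
# what you're about to push. `make test` is a stand-in for your
# project's real test command.
set -e
make test    # abort here if anything fails
git push
gh signoff   # only reached once the tests have passed
```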
The biggest advantage I can see with this is that the status check gets reset with every push. That's harder to do with a regular checkbox, but I can imagine there's still a way, and I can imagine that way would probably still be simpler than this.
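One way that could look, assuming an authenticated `gh` CLI and a PR number handed in by the CI runner (the checkbox wording is made up; `gh pr view` and `gh pr edit` are real subcommands):

```sh
#!/usr/bin/env sh
# Sketch of making a plain PR-template checkbox reset on every push.
# In practice these would be two separate CI steps; shown here as two
# functions taking the PR number as their argument.

BOX='- [x] I ran the test suite locally and it passed'

# Step 1, a required status check: fail while the box is unticked.
require_checkbox() {
  gh pr view "$1" --json body --jq .body | grep -qF "$BOX" \
    || { echo "tests checkbox is unticked"; exit 1; }
}

# Step 2, run on every push (e.g. a `synchronize` trigger): untick the
# box again, so step 1 goes red until someone re-ticks it.
reset_checkbox() {
  body=$(gh pr view "$1" --json body --jq .body)
  gh pr edit "$1" --body "$(printf '%s\n' "$body" | sed 's/^- \[x\] I ran/- [ ] I ran/')"
}
```

Whether that ends up simpler than the signoff tool is a matter of taste, but it does show the per-push reset is achievable with a plain checkbox.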
What would constitute a bad implementation of the idea?
(Also, I don't see anything in the readme that says most people shouldn't use it.)
> Remote CI runners are fantastic for repeatable builds, comprehensive test suites, and parallelized execution. But many apps don't need all that. Maybe yours doesn't either.
My reading of that is that if you need repeatable builds, comprehensive test suites, and/or parallelised execution, then this is not the tool for you.
I think a checkbox, as suggested in another comment, is a worse implementation.