If this is just running tests locally, it seems deeply flawed: e.g. the tests would pass even if I forgot to commit new files.
OTOH, if it starts a fresh container, pulls my branch, and then does everything from there, it's as good as running on remote CI, because it's basically replicating the same behaviour (see the sketch after this list). And it would likely still be much faster, since:
* CI machines are underpowered compared to dev laptops/desktops, e.g. our CI builds run in a container with 4 vCPUs and 16 GB RAM, while my laptop has 16 cores and 48 GB RAM.
* `docker pull` itself takes a relatively long time on CI, whereas a local run would just reuse the cached layers.
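To make that concrete, something along these lines is what I'd picture; the image, repo URL, branch name, and test script are all placeholders I made up, not anything a specific tool prescribes:

```sh
# Hypothetical "CI at home": a throwaway container that can only see
# what has actually been committed and pushed, just like remote CI.
# ubuntu:24.04, the repo URL, the branch, and ./run-tests.sh are placeholders.
docker run --rm ubuntu:24.04 bash -euxc '
  apt-get update && apt-get install -y git ca-certificates
  git clone --depth 1 --branch my-feature https://example.com/me/repo.git /src
  cd /src
  ./run-tests.sh   # whatever the CI pipeline would run
'
```

Cloning from the remote instead of bind-mounting the working tree is exactly what gives you the forgot-to-commit protection.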
The tool does actually check whether you have any uncommitted changes in Git, and fails in that case, so you're protected from that particular mistake. You're not protected from mistakes in how the tests are run or how their results are checked, though, because the tool isn't involved in that part at all.
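(For anyone wondering what such a check looks like, it's usually some variant of the following; this is a generic illustration, not this tool's actual code:)

```sh
# Generic dirty-working-tree guard, not the tool's real implementation.
# `git status --porcelain` prints one line per modified or untracked file
# and nothing at all when the tree is clean, so untracked new files are
# caught too.
if [ -n "$(git status --porcelain)" ]; then
  echo "error: uncommitted changes in the working tree" >&2
  exit 1
fi
```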
The Docker CLI is supposed to support caching on GitHub Actions (https://docs.docker.com/build/cache/backends/gha/), though I admittedly haven't checked how fast it is in practice.
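From a skim of those docs, the invocation is roughly the following; the image tag is a placeholder, and as I understand it the build also needs the Actions runtime credentials exposed to it (the docs describe the setup):

```sh
# Per the linked docs: restore the build cache from the GitHub Actions
# cache service and export it again for subsequent runs.
docker buildx build \
  --cache-from type=gha \
  --cache-to type=gha,mode=max \
  -t myorg/myimage:ci \
  .
```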
> CI machines are underpowered compared to dev laptops/desktops.
Ha, I wish. My company thinks 8 GB of RAM on a 6-year-old machine is plenty of power for the devs.
My browser tabs right now are using that much RAM ;_;