I went down this rabbit hole before; you have to ignore all the recommended approaches. The real solution is a build server with a global Docker install and a script that prunes the build cache when disk usage goes above a certain percentage. The cache is local and instant. Pushing and pulling cache images is an insane solution.
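A minimal sketch of such a prune script (the mount point and 80% threshold are just examples, and it assumes GNU `df` for the `--output` flag):

```shell
#!/usr/bin/env sh
# Prune the Docker build cache when disk usage on a mount exceeds a threshold.
# Intended to run from cron on the build server; tune the arguments to taste.
prune_if_full() {
  mount=${1:-/}        # filesystem holding /var/lib/docker
  threshold=${2:-80}   # percent used that triggers a prune
  # GNU df: print only the "use%" column, strip everything but digits.
  usage=$(df --output=pcent "$mount" | tail -n 1 | tr -dc '0-9')
  if [ "$usage" -gt "$threshold" ]; then
    # --all drops cache not tied to a current image; --force skips the prompt.
    docker builder prune --all --force
  fi
}
```

Hook it up with something like `*/15 * * * * /usr/local/bin/prune_if_full / 80` in the build user's crontab.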

What you are describing is basically a remote buildkitd, which lets all of your docker builds share one big cache. The cache-to/cache-from approach is of limited usefulness by comparison.
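For reference, buildx's remote driver can point every client at a shared buildkitd over TCP (the hostname and port here are placeholders):

```shell
# Register the shared daemon once per client machine.
docker buildx create --name shared --driver remote tcp://buildkitd.internal:1234

# All builds routed through it hit the same cache.
docker buildx build --builder shared -t myapp:latest .
```

Every build that goes through the `shared` builder reuses the daemon's local cache, with no cache export/import round trips.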