I wouldn't say it abstracts the locking mechanisms away - if your app needs synchronization, it's probably best to leave how that's achieved up to the user. What it does is make it possible to contain your business logic end-to-end in a single application/codebase without obscuring it behind distribution boundaries (e.g., calls out to other REST APIs or message queues). There are still worker nodes to manage, BUT the architecture is much simpler in the sense that workers are the only moving part - no control plane, scheduler, or other services involved.

Regarding failures - Wool workers are simple gRPC services under the hood, and connections are long-lived HTTP/2 connections that persist for the life of the request. Worker-side failures simply manifest as Python exceptions on the client side, with the added nicety of preserving the FULL stack trace across worker boundaries (achieved with tbpickle). A core tenet of Wool is that it makes no assumptions about your workload - it's up to you to write a try/except block and handle exceptions in a manner appropriate to your use case. The goal is to keep Wool as unopinionated about this sort of thing as possible.
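To make the failure model concrete, here's a minimal stdlib-only sketch of the pattern. None of this is Wool's real API - `worker_side` and `client_call` are hypothetical stand-ins for the two sides of the gRPC boundary, and the hand-rolled pickle/traceback plumbing is just a simplified illustration of what tbpickle does for real:

```python
import pickle
import traceback

# Hypothetical stand-in for a task running on a remote worker.
def worker_side():
    try:
        return {"ok": True, "result": 1 / 0}  # worker-side bug
    except Exception as exc:
        # Serialize the exception plus its formatted traceback.
        # (Wool uses tbpickle to preserve the full stack trace;
        # this is a crude approximation of that idea.)
        return {
            "ok": False,
            "error": pickle.dumps(exc),
            "trace": traceback.format_exc(),
        }

# Hypothetical stand-in for the client side of the call.
def client_call():
    payload = worker_side()  # imagine this crossing the gRPC boundary
    if not payload["ok"]:
        exc = pickle.loads(payload["error"])
        exc.remote_traceback = payload["trace"]
        raise exc  # surfaces as an ordinary Python exception
    return payload["result"]

# Handling it is an ordinary try/except block on the client:
try:
    client_call()
except ZeroDivisionError as exc:
    print(type(exc).__name__)
    print(exc.remote_traceback)
```

The point is that your error-handling code looks identical whether the failure happened locally or on a worker - Wool doesn't wrap exceptions in its own error types or force a retry policy on you.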

I'm not sure about your specific needs, but I'm considering adding a simple CLI-based worker management tool for users who don't want or need a full service orchestrator like Kubernetes in their stack.