This sounds like running an event loop per thread instead of 1 event loop with a backing thread pool. Or am I misunderstanding you?
It works great for small tasks, but larger tasks block the local event loop and you get weird latency issues; that was the major tradeoff I ran into when I used it. If your tasks are tiny it's a good fit: not having to hand off from the event loop to a worker thread is a nice throughput boost. But once we introduced larger tasks we started having latency problems, because a long task would hang the local loop and keep it from picking up its events.
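Roughly what I mean, as a toy Go sketch (made-up names, not any real framework): each thread runs its own loop and executes handlers inline, so one big task delays everything queued behind it on that thread.

```go
package main

import (
	"fmt"
	"time"
)

type event struct {
	id       int
	enqueued time.Time
	work     time.Duration // how long the handler takes
}

// runLoop is a per-thread event loop: handlers run inline on the loop
// goroutine, with no handoff to a worker pool.
func runLoop(events <-chan event, done chan<- struct{}) {
	for ev := range events {
		fmt.Printf("event %d waited %v in the queue\n",
			ev.id, time.Since(ev.enqueued).Round(time.Millisecond))
		time.Sleep(ev.work) // stand-in for the actual handler body
	}
	close(done)
}

func main() {
	events := make(chan event, 16)
	done := make(chan struct{})
	go runLoop(events, done)

	// One large task followed by small ones: the small ones sit behind it
	// and inherit its latency, which is the problem described above.
	events <- event{id: 1, enqueued: time.Now(), work: 500 * time.Millisecond}
	for i := 2; i <= 5; i++ {
		events <- event{id: i, enqueued: time.Now(), work: time.Millisecond}
	}
	close(events)
	<-done
}
```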
I think ScyllaDB works somewhat like this, but it uses message passing to pin certain data to certain threads: any thread can accept an incoming event, but it still forwards the request to the pinned thread the data lives on. One thread can get overwhelmed if your data isn't well distributed.
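Something like this, as a rough Go sketch of the idea (not ScyllaDB's actual code, and the names are mine): requests get hashed to the shard that owns the key and handled on that shard's goroutine, so no locks are needed, but a hot key range all lands on one shard.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

type request struct {
	key   string
	value string      // empty means "read"
	reply chan string
}

type shard struct {
	inbox chan request
	data  map[string]string // owned exclusively by this shard's goroutine
}

func (s *shard) run(wg *sync.WaitGroup) {
	defer wg.Done()
	for req := range s.inbox {
		// No locking needed: only this goroutine touches s.data.
		if req.value != "" {
			s.data[req.key] = req.value
		}
		req.reply <- s.data[req.key]
	}
}

// ownerOf hashes the key to pick the shard that owns it.
func ownerOf(key string, shards []*shard) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return shards[h.Sum32()%uint32(len(shards))]
}

func main() {
	const nShards = 4
	shards := make([]*shard, nShards)
	var wg sync.WaitGroup
	for i := range shards {
		shards[i] = &shard{inbox: make(chan request), data: make(map[string]string)}
		wg.Add(1)
		go shards[i].run(&wg)
	}

	// Any caller can accept the request, but it gets routed to the owner.
	reply := make(chan string)
	ownerOf("user:42", shards).inbox <- request{key: "user:42", value: "alice", reply: reply}
	fmt.Println("stored:", <-reply)

	for _, s := range shards {
		close(s.inbox)
	}
	wg.Wait()
}
```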