Yep. Signals were literally the original async model on Unix. They were a userspace abstraction over hardware interrupts, much as they are today, but the abstraction didn't turn out to be as fruitful as it might have been, perhaps because it was too thin. (Signal queueing, i.e. real-time signals, meant to make signals more useful for application events, never went mainstream.) Back in the 1970s and 1980s the big arguments regarding async were between interrupt-driven (aka signals) and readiness-driven (aka polling) designs, and relatedly between edge-triggered and level-triggered events. BSD added the select syscall along with the sockets API, and that's when the readiness-driven, level-triggered model began to dominate. Though, before kqueue and then epoll came along, there were some attempts at scaling async I/O using the interrupt-driven model--a signal delivered along with the associated descriptor. I think a vestige of this still survives in Linux as SIGIO.
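To make that concrete, here's a rough sketch of what the interrupt-driven model looks like on Linux, assuming the Linux-specific F_SETSIG extension (handler and signal choices are illustrative, error handling omitted): you arm O_ASYNC on a descriptor and pick a queued real-time signal, and the kernel delivers the ready descriptor in siginfo_t's si_fd field.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static void on_io(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        /* With F_SETSIG, the kernel records which descriptor became
           ready in si_fd; only async-signal-safe calls are legal here. */
        char buf[256];
        while (read(info->si_fd, buf, sizeof buf) > 0)
            ;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_sigaction = on_io;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);

        int fd = STDIN_FILENO;
        fcntl(fd, F_SETOWN, getpid());   /* route the signal to this process */
        fcntl(fd, F_SETSIG, SIGRTMIN);   /* queued RT signal, carries si_fd  */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);

        for (;;)
            pause();                     /* everything happens in the handler */
    }

Note how the handler is restricted to async-signal-safe calls--one reason this model stayed awkward next to a plain select/epoll loop.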
It's not always either/or, though. From the perspective of userspace APIs it's usually one or the other, but further down the stack one model might be implemented in terms of the other, sometimes with multiple transitions between the two, especially around software/hardware boundaries. Basically, it's turtles all the way down.
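A classic userspace instance of that layering is the self-pipe trick, which converts an interrupt-driven event (a signal) into a readiness-driven, level-triggered one that select can observe. A minimal sketch, assuming POSIX (error handling omitted, SIGUSR1 chosen arbitrarily):

    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int pipefd[2];   /* [0] = read end, [1] = write end */

    static void handler(int sig)
    {
        (void)sig;
        int saved = errno;
        write(pipefd[1], "x", 1);   /* interrupt -> readiness */
        errno = saved;
    }

    int main(void)
    {
        pipe(pipefd);
        fcntl(pipefd[1], F_SETFL, O_NONBLOCK);  /* never block in the handler */

        struct sigaction sa;
        sa.sa_handler = handler;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(pipefd[0], &rfds);
            /* Level-triggered: the pipe stays readable until drained,
               even if select was interrupted by the signal itself. */
            if (select(pipefd[0] + 1, &rfds, NULL, NULL, NULL) > 0) {
                char c;
                read(pipefd[0], &c, 1);
                printf("observed SIGUSR1 via select()\n");
            }
        }
    }

Linux's signalfd and kqueue's EVFILT_SIGNAL are essentially the same conversion done in the kernel.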
Similarly, the debates regarding cancellation, stack management, etc., still persist; the fundamental dilemmas haven't changed.