My research group at Stanford has been alpha testing Tinker, and in my opinion it's both very useful and technically impressive. It's a unified framework for post-training models that abstracts away almost all of the complexity of managing these jobs across compute resources. That it manages to do this while still allowing a lot of algorithmic flexibility is pretty unique.

Silly question: how is it different from, say, HF's Transformers and similar libraries and APIs?

With HF Transformers, you still need to provision and manage the GPUs yourself.
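To make that concrete, here's a minimal sketch of a manual fine-tuning step with HF Transformers (the model name and hyperparameters are just placeholders). All of the device placement, optimizer state, and anything beyond a single GPU is on you, which is the layer Tinker abstracts away according to the comment above.

```python
# Minimal sketch of one manual fine-tuning step with HF Transformers.
# Device placement, batching, and the optimizer step are all infrastructure
# you manage yourself; "gpt2" is only an illustrative model choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"   # you pick the hardware
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tokenizer(["example training text"], return_tensors="pt").to(device)
outputs = model(**batch, labels=batch["input_ids"])  # forward pass on your GPU
outputs.loss.backward()                              # backward pass on your GPU
optimizer.step()
optimizer.zero_grad()
```

Scaling this past one GPU means adding Accelerate, DeepSpeed, FSDP, or similar on top, plus the cluster to run it on.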