Absolutely, when we're talking about infrastructure versus model development (RL/fine-tuning, let alone pre-training).