that's not totally accurate imo. GRPO/GSPO can get away with a small number of prompts, but that's because each prompt is expanded into num_generations sampled completions, so the effective number of training rollouts is prompts × num_generations.
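to make the arithmetic concrete, here's a minimal sketch — the prompt count and num_generations value are made-up numbers, just picked to show how the multiplication works:

```python
# hypothetical dataset: 150 prompts, 8 completions sampled per prompt
# (num_generations is the per-prompt completion count in GRPO-style trainers)
num_prompts = 150
num_generations = 8

# each prompt yields num_generations rollouts, so the trainer
# actually sees far more training signals than the raw prompt count
effective_rollouts = num_prompts * num_generations
print(effective_rollouts)  # 1200
```

so a "150-sample" GRPO run is really training on 1200 scored rollouts, which is why the raw prompt count understates the data.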

i mean, you technically can do a non-RL finetune with 100-200 samples, but without that per-prompt multiplication it probably won't be a very good one.