I suppose it also depends on the specific LLM; the output of a free/low-cost model will likely be very different from that of a $200/month o1-pro.