Losing money on o1-pro. That makes sense, and it's also why they axed that entire class of models.

Every o1-pro and o1-preview inference cost roughly what a normal inference does, multiplied by however many parallel replica paths they sampled.
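If that's right, the cost multiplier is just linear in the path count. A minimal sketch of that cost model; all the numbers here (token counts, per-token prices, path count) are hypothetical placeholders, not OpenAI's actual figures:

```python
def inference_cost(tokens_in: int, tokens_out: int,
                   price_in: float, price_out: float,
                   n_paths: int = 1) -> float:
    """Cost of one request under a best-of-n style scheme:
    a single completion's cost times the number of parallel paths."""
    single_path = tokens_in * price_in + tokens_out * price_out
    return single_path * n_paths

# Hypothetical example: 1k prompt tokens, 4k output tokens,
# made-up per-token prices, 8 parallel replica paths.
base = inference_cost(1000, 4000, 15e-6, 60e-6)
pro = inference_cost(1000, 4000, 15e-6, 60e-6, n_paths=8)
print(pro / base)  # → 8.0
```

So under this model the provider's cost scales with `n_paths` even if the user-facing price doesn't, which is one way to end up losing money per request.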