But they also squeezed an 80% price cut out of o3 at some point, supposedly purely through inference or infra optimization