What is the alternative?

Or are you just in favor of creating classes of people who can't be critiqued in any circumstance?

This kind of sounds like, 'Won't anyone think of the grifters?!'

I know every area is different, but the "grifters" in the area of Computational Linguistics (the ACL) are "any volunteer[1] whose paper has been accepted at least once", meaning anyone from PhD students to professors and industry researchers.

Not all academia is Elsevier.

[1] This policy was recently altered, though: submitting a paper now comes with reviewing duties.

We are struggling badly with review quality in natural language processing though, most likely due to the unprecedented expansion of the field over the last ten or so years. Reviewers are suffering under review loads far exceeding what one can reasonably manage mentally (it used to be two to three papers per reviewer; now five would be considered rather generous). Authors and area chairs suffer from lower-quality reviews due to reviewer inexperience and overload, not to mention from poor reviewer/author correspondence, with author and area chair comments frequently being ignored by the reviewers. To me, the last holdout of good peer review in the field is Transactions of the Association for Computational Linguistics (TACL), but there the acceptance bar is sky high compared to ACL Rolling Review (ARR), for better or worse.

The ACL leadership and senior members of the field are very much aware of this and are trying their best (ARR being an attempt to improve the situation, though now that we are a few years in, I am unsure how much better it really is compared to the old system of per-conference reviewing). But there appear to be no easy fixes for a complicated, distributed system such as peer review. Every discussion I have with said leadership and other senior members ends with us agreeing on the problems, and likewise agreeing that despite considerable mental effort we have failed to come up with solutions.

Returning to the main topic: Nature is worthy of praise for making its peer review transparent, and I say this as a massive Nature critic. It is a move I loved seeing from NeurIPS (then NIPS) and ICLR over a decade ago, as it helps younger researchers see what good (and bad) communication looks like, and that even papers they know are now greatly appreciated received a fair amount of criticism (sometimes unwarranted). I have argued for ACL to introduce the same thing for nearly a decade at this point, but we still have not, and I have never heard a solid argument as to why not (the best argument was the technological effort, but OpenReview, with all its flaws, makes this even easier than Softconf did; not that it would have been that hard with Softconf either).