Only 3) could scale, but then exam takers not using AI would fail unless they are geniuses in many areas. 1) and 2) can't be done when 50-70% of your course consists of online students (Stanford mixes on-campus students with external CGOE students who take the exams off-campus), who are important for your revenue. Proctoring won't work either: a student could use two computers, one for the exam and one for the cheating (this is done in interviews all the time now).
Well, realistically, exam takers who don't use AI will fail in any real-world technical, professional, or managerial occupation anyway. They might as well get used to it. Not being able to use LLMs effectively today is the equivalent of not knowing how to use Windows 20 years ago.
Call me when AI can write a regex the way I would write it to parse a complex string, rather than some ridiculous mishmash of non-ASCII chars that you need an ancient shaman to decrypt; or when AI can recognize contextual hints well enough to know what the fuck I'm talking about, instead of producing a writeup of things that are no longer relevant; or when it stops hallucinating and giving made-up answers just to give an answer (which is far worse than saying "I don't know", whether from a human or an AI).
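For what it's worth, the readable kind of regex is perfectly possible; a minimal sketch in Python using re.VERBOSE and named groups (the log-line format here is a made-up example, not anything from the thread):

```python
import re

# Hypothetical log line, used only to illustrate a regex a human can read:
# "2024-05-01 12:34:56 [ERROR] auth: login failed for user=alice"
LOG_LINE = re.compile(r"""
    (?P<date>\d{4}-\d{2}-\d{2})  \s+   # ISO date
    (?P<time>\d{2}:\d{2}:\d{2})  \s+   # 24-hour time
    \[(?P<level>[A-Z]+)\]        \s+   # log level in brackets
    (?P<module>\w+):             \s+   # module name before the colon
    (?P<message>.*)                    # rest of the line
""", re.VERBOSE)

m = LOG_LINE.match("2024-05-01 12:34:56 [ERROR] auth: login failed for user=alice")
if m:
    print(m.group("level"), m.group("module"))  # prints "ERROR auth"
```

In VERBOSE mode, whitespace and comments inside the pattern are ignored, so the layout and the per-field comments cost nothing at match time, and the named groups document what each capture means.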
AI has some uses, but the list of things it can't do is longer than the list of things it can.
Has AI advanced that far? How do managers use AI in their daily work? To generate emails? Most managers spend all day in meetings. How do they use AI for that? By inaccurately compiling the minutes of the meeting and summarizing them?