Why wouldn't something like this work?

1. Have students work on a project that is more complex than usual (relative to what previous cohorts tackled). Let them use whatever tools they want and make it clear that AI is fine.

2. Have them come in for an in-person exam where they answer questions about the why of the decisions they made during the project.

And that's it. I believe that if you can a) produce a fully working project that meets all functional requirements, and b) argue about its design with expertise, you pass. Whether you did it with AI or not.

Are we interested in supporting people who can design something and build it, or just in having students follow the whims of professors who are unhappy that their own studies looked different?

A project doesn't quite work for my course, as we teach different techniques and want students to demonstrate knowledge of each of them.

But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.

If I read your suggestion correctly, you're saying the exam is basically a board where students explain the decision-making behind their code. That sounds great in theory, but in practice it would be very hard to grade. Or at least, how could someone fail? If you let them use AI, you can't really fault them for not understanding the code, can you? Unless you structure the course as 1) use AI, then 2) verify. And step 2 requires an understanding of coding and enough experience to recognize bad architecture, which in turn requires thinking through a problem without the AI telling you the answer.

Yep, you can fault them for not understanding it.

Exactly the same as in professional environments: you can use LLMs for your code, but you have to stand behind whatever you submit. You can of course use something like Cursor and let it run free without understanding a thing about the result, or you can make changes step by step with AI and try to understand the why.

I believe that if teachers relaxed their emotions a bit and adapted their grading systems (while also raising the expected learning outcomes), we would see students trained to understand the pitfalls of LLMs and how to get the most out of them.

If you grade on pass/fail it’s easy to grade. Not every course uses letter grades…

If you let people use AI, they are still accountable for the code submitted under their name. If they can't look at the code and explain what it's doing, they haven't demonstrated understanding.