This approach is just cheap theater. It doesn't actually stop AI, it just adds a step to the process. Any student can snap a photo, OCR the text and feed it into an LLM in seconds. All this policy accomplishes is wasting paper and forcing students to engage in digital hoop-jumping.

It’s not theater. It introduces friction into the process. And when there is friction in both choices (read the paper, or take a photo and upload the picture), you’ll get more people reading the physical paper copy. If students want to jump through hoops, they will, but it will require an active choice.

At this point auto AI summaries are so prevalent that they are the passive default. By shifting it to require an active choice, you've made it more likely for students to choose to do the work.

That friction is trivial. You are comparing the effort of snapping a photo against the effort of actually reading and analyzing a text. If anyone chooses to read the paper, it's because they actually want to read it, not because using AI was too much hassle.

You can certainly make it harder to cheat. AIs will inevitably generate summaries that are very similarly written and formatted -- content, context, and sequence -- making it easy for a prof (and their AI) to detect the presence of AI use, especially if students are also quizzed to validate that they have knowledge of their own summary.

Alternatively, the prof can require that students write out notes, in longhand, as they read, and require that a photocopy of those notes be submitted, along with a handwritten outline / rough draft, to validate the essays that follow.

I think it's inevitable that "show your work" will soon become the mantra not just of math, hard science, and engineering courses, but of the humanities as well.

Any AI app worth its salt allows you to upload a photo of something and processes it flawlessly in the same amount of time. This is absolutely worthless theater.

It’s not the time that’s the friction. It’s the choice. The student has to actively take the picture and upload it. It’s a choice. It takes more effort than reading the autogenerated summary that Google Drive or Copilot helpfully made for the digital PDF that the printed copy replaced.

It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading.

Are you really comparing the energy necessary to read something to taking a photo and having an AI read it for you? You are not comparing zero energy to some energy; you are comparing a whole lot of energy to some energy.

The quotas for summarising text versus parsing images and then summarising the text aren't the same. As you surely know.

Who’s paying for that? Certainly not the users (yet).

The taxpayers will, when those companies need to be bailed out.

Students tend to be fairly lazy, so this may simply mean another x% of the class reads the material rather than scanning in the 60 pages of reading for the assignment.

You don't need to OCR. LLMs can respond directly to the scanned image. They are better than most OCR programs.
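To make the point concrete, here's a minimal sketch of sending a scanned page straight to a multimodal model with no OCR step. It assumes an OpenAI-style chat-completions payload with an inline base64 data URL; the model name is a placeholder, and the dict is just the request body, not an actual API call:

```python
import base64

def build_image_request(image_bytes: bytes, prompt: str) -> dict:
    """Build a chat-completions-style payload with an inline scanned image."""
    # Encode the raw image bytes so they can travel inline as a data URL.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # placeholder; any vision-capable model would do
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # The model reads the image directly; no separate OCR pass.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }
```

The image goes in as one content part alongside the text prompt, which is why the "snap a photo, summarise it" loop is a single request rather than an OCR pipeline.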

Indeed, the token cost of image inputs is lower, because you have more fine-grained control over the latent token space.

You fundamentally misunderstand the value of friction. The digital hoop-jumping, as you call it, is a very useful signal of motivation.