If you were teaching theoretical CS, I think you'd certainly want to use a lower-level abstraction than RAM. (And RAM really is not unlike a Turing machine tape, except it's chunked into bytes and randomly addressable; in principle it plays the same role and supports the same operations: move left, move right, read at the current location, write at the current location. A modern CPU instruction set isn't all that different in principle either; it's mostly higher-level instructions for accomplishing those same operations, e.g. a single "move x" instruction versus x separate left or right moves. And you can certainly write a Turing machine that implements such an instruction set, and end up with a more easily programmable Turing machine.)
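To make that correspondence concrete, here's a toy sketch in Python (the Tape class and its names are mine, purely illustrative, not any standard API): the four tape primitives are all you get, and a "move by x" instruction is just those primitives repeated.

    # A toy tape with the four primitive operations. Names are my own,
    # just to illustrate the tape/RAM correspondence.
    class Tape:
        def __init__(self):
            self.cells = {}   # sparse tape: position -> symbol
            self.head = 0     # current head position

        def left(self):  self.head -= 1
        def right(self): self.head += 1
        def read(self):  return self.cells.get(self.head, 0)
        def write(self, symbol): self.cells[self.head] = symbol

        # A "higher level" instruction like move-by-offset is just the
        # primitives repeated: move(5) is five right operations.
        def move(self, offset):
            step = self.right if offset > 0 else self.left
            for _ in range(abs(offset)):
                step()

    tape = Tape()
    tape.write(42)      # write at position 0
    tape.move(5)        # same as five right operations
    tape.write(7)
    tape.move(-5)       # back to position 0
    print(tape.read())  # -> 42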

Now, TMs are certainly not the only more fundamental model of computing, but they are interesting nonetheless, and they were an influence on the von Neumann architecture and modern computers.

If I were studying theoretical CS, I'd want to learn about TMs, lambda calculus, FSMs, queue automata, all of it. If you just told me how modern computers work, I'd be left wondering how anyone even conceived of the idea.

And as I said in my earlier comment, when you get down to the core of it, what's really interesting to me, and this is readily apparent if you read Turing's 1936 paper, is that Turing very much came up with the idea by thinking about how HE does computation on graph paper. That to me is such an essential fact that I would not have wanted to miss, and I wouldn't have known it had I not actually read the paper myself.