Suppose I were to teach theoretical CS using only RAM models of computation, with no reference to Turing Machine tapes. Would there be any downside to doing this, pedagogically? (Other than, of course, the backward compatibility concern of students being able to engage with existing literature, which is the main reason this isn't done I think)
If you were teaching theoretical CS, I think you'd certainly want to use a lower-level abstraction than RAM. (And RAM really is not unlike a Turing machine tape, except that it's chunked into bytes and addressable; in principle it plays the same role and supports the same operations: move left, move right, read at the current location, write at the current location. A modern CPU instruction set isn't all that different in principle either; it mostly provides higher-level instructions for accomplishing those same operations, e.g. a single `MOV` versus five separate left or right moves. And you can certainly write a Turing machine which implements such an instruction set, and get a more easily programmable Turing machine.)
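To make that concrete, here's a toy sketch (my own illustration, not from any particular textbook; all the state names are made up) of a Turing machine whose tape supports exactly those four primitive operations. The example transition table increments a binary number:

```python
# Minimal Turing machine sketch: the tape supports exactly four primitive
# operations -- read, write, move left, move right.
from collections import defaultdict

class TuringMachine:
    def __init__(self, transitions, start, accept, blank="_"):
        # transitions: (state, read_symbol) -> (new_state, write_symbol, move)
        self.transitions = transitions
        self.start, self.accept, self.blank = start, accept, blank

    def run(self, tape_input, max_steps=10_000):
        # Sparse, unbounded-in-both-directions tape.
        tape = defaultdict(lambda: self.blank, enumerate(tape_input))
        head, state = 0, self.start
        for _ in range(max_steps):
            if state == self.accept:
                break
            symbol = tape[head]                       # read
            state, write, move = self.transitions[(state, symbol)]
            tape[head] = write                        # write
            head += 1 if move == "R" else -1          # move right / left
        cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
        return "".join(cells).strip(self.blank)

# Example: binary increment. Walk to the rightmost bit, then carry leftward.
inc = TuringMachine(
    transitions={
        ("seek_end", "0"): ("seek_end", "0", "R"),
        ("seek_end", "1"): ("seek_end", "1", "R"),
        ("seek_end", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, keep carrying
        ("carry", "0"): ("done", "1", "L"),    # 0 + carry = 1, stop
        ("carry", "_"): ("done", "1", "L"),    # overflow: new leading 1
    },
    start="seek_end", accept="done",
)

print(inc.run("1011"))  # -> 1100
```

Note how the "higher-level" behavior (incrementing a number) is built entirely out of single-cell reads, writes, and one-step moves, which is the same compression a CPU instruction set performs.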
Now TMs are certainly not the only more fundamental model of computing, but they are interesting nonetheless, and were an influence on the von Neumann architecture and modern computers.
If I were studying theoretical CS, I'd want to learn about TMs, lambda calculus, FSMs, queue automata, all of it. If you just told me about how modern computers work, I'd be left wondering how anyone even conceived of the idea.
And as I said in my earlier comment, when you get down to the core of it, what's really interesting to me, and this is readily apparent if you read Turing's 1936 paper, is that Turing very much came up with the idea by thinking about how HE does computation on graph paper. That to me is such an essential fact I would not have wanted to miss, and I wouldn't have known it had I not actually read the paper myself.
You could do this with the C abstract machine, because it's Turing complete. But we go with TMs because they're the most basic. Anything else is an abstraction, so you can stop at any level you like.
> You could do this with the C abstract machine.
I don't think there is a "C abstract machine" in that sense. There are minimal formalizations of C (there's probably a Featherweight C out there or something), but the C specification is not given in terms of a fully defined abstract machine, and any abstraction of the machinery of C will necessarily lose important implementation details (e.g., undefined or unspecified behavior).
In what way is a TM the most basic? If you mean in terms of simplicity, I think a queue automaton is simpler to explain.
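For what it's worth, the entire step rule of a queue automaton is "dequeue one symbol, enqueue a string, switch state". Here's a rough sketch (my own toy encoding, state names made up, not from any textbook) where such a machine recognizes a^n b^n by cancelling one 'a' against one 'b' per pass:

```python
from collections import deque

# Toy queue automaton: a finite control plus one FIFO queue. Each step is
# "dequeue a symbol, enqueue a string, switch state" -- that's the whole model.
def run_queue_automaton(transitions, start, accept, word, max_steps=10_000):
    # transitions: (state, dequeued_symbol) -> (new_state, string_to_enqueue)
    queue, state = deque(word + "$"), start  # "$" marks the end of input
    for _ in range(max_steps):
        if state == accept:
            return True
        if not queue:
            return False
        key = (state, queue.popleft())
        if key not in transitions:
            return False                     # stuck: reject
        state, enqueue = transitions[key]
        queue.extend(enqueue)
    return False

# Example: accept a^n b^n by repeatedly dropping one 'a' and one 'b',
# rotating everything else to the back of the queue in between.
anbn = {
    ("match_a", "a"): ("match_b", ""),   # drop one leading 'a'
    ("match_a", "$"): ("accept", ""),    # nothing left to match: balanced
    ("match_b", "a"): ("match_b", "a"),  # rotate remaining a's to the back
    ("match_b", "b"): ("rotate", ""),    # drop the matching 'b'
    ("rotate", "b"): ("rotate", "b"),    # rotate remaining b's to the back
    ("rotate", "$"): ("match_a", "$"),   # end of one pass; start over
}

print(run_queue_automaton(anbn, "match_a", "accept", "aaabbb"))  # -> True
```

Whether that's genuinely simpler than read/write/move-left/move-right is a matter of taste, but the machine itself has fewer moving parts: no head, no notion of direction.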
The usual theory of computation textbook goes on to explain finite automata first, then context-free grammars, and finally the Turing machine. Why? Because the execution part of a TM is a finite automaton, while the instructions can use the generative process of a CFG. And then you've got the whole computing world, just from these principles. (The other chapters are mostly about what's possible and what's not, and exploration of other properties.)
The nice thing about finite automata is that they can be composed together. That leads to a recursive pattern, which enables higher abstractions, leading to general-purpose CPUs and special-purpose processors (GPUs, NPUs, cryptographic chipsets, hardware encoders, ...). The same idea applied to CFGs leads to the creation of programming languages.
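One concrete sense of "composed" is the classic product construction, where two DFAs run in lockstep and the pair of their states becomes a single state. A quick sketch (my own example machines, nothing standard about the names) intersecting "even number of a's" with "ends in b":

```python
# Product construction: combine two DFAs into one that accepts exactly the
# intersection of their languages. A DFA here is a tuple:
# (transition dict, start state, set of accepting states).
def run_dfa(delta, start, accepting, word):
    state = start
    for ch in word:
        state = delta[state, ch]
    return state in accepting

def product_dfa(dfa1, dfa2, alphabet):
    delta1, start1, acc1 = dfa1
    delta2, start2, acc2 = dfa2
    states1 = {s for s, _ in delta1}
    states2 = {s for s, _ in delta2}
    # The product machine's states are pairs; both components step at once.
    delta = {((s1, s2), ch): (delta1[s1, ch], delta2[s2, ch])
             for s1 in states1 for s2 in states2 for ch in alphabet}
    accepting = {(s1, s2) for s1 in acc1 for s2 in acc2}  # both must accept
    return delta, (start1, start2), accepting

# Two small DFAs over {a, b}: "even number of a's" and "ends in b".
even_a = ({(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
ends_b = ({("x", "a"): "x", ("x", "b"): "y",
           ("y", "a"): "x", ("y", "b"): "y"}, "x", {"y"})

both = product_dfa(even_a, ends_b, "ab")
print(run_dfa(*both, "aab"))  # -> True (two a's, ends in b)
```

That closure under composition is exactly why small recognizers can be stacked into bigger ones without leaving the model.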