I'm not sure what we're arguing about, to be honest. You definitely don't need to understand Turing machines to understand how computers work, and certainly not to do programming.
But as far as understanding computer science, computational theory, etc., you'd certainly want to study Turing machines and the lambda calculus. If you were, say, writing a programming language, it would be nice to understand the fundamentals.
I mean, I don't think Turing machines or the lambda calculus are even so far removed from practice that you could call them completely theoretical. You can easily implement a few functions in the lambda calculus that already resemble modern programming interfaces.
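For a taste of what I mean, here's a rough sketch in Python, using its lambdas as a stand-in for the untyped lambda calculus (the names cons/first/second are just my labels): pairs with accessors fall out of nothing but single-argument functions, and the result reads a lot like a tiny data-structure API.

```python
# Church-style pairs built from nothing but single-argument functions.
cons   = lambda a: lambda b: lambda select: select(a)(b)
first  = lambda p: p(lambda a: lambda b: a)   # select the first component
second = lambda p: p(lambda a: lambda b: b)   # select the second component

p = cons(1)(2)
print(first(p), second(p))  # -> 1 2
```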
Suppose I were to teach theoretical CS using only RAM models of computation, with no reference to Turing machine tapes. Would there be any downside to doing this, pedagogically? (Other than, of course, the backward-compatibility concern of students being able to engage with the existing literature, which I think is the main reason this isn't done.)
If you were teaching theoretical CS, I think you'd certainly want to use a lower-level abstraction than RAM. (And RAM really isn't unlike a Turing machine tape, except that it's chunked into bytes and addressable; in principle it plays the same role and supports the same operations: move left, move right, read at the current location, write at the current location. A modern CPU instruction set isn't really all that different in principle either; if you look at it, it's mostly higher-level instructions for accomplishing those same operations, e.g. a single "move x" versus five left or right moves. But you can certainly write a Turing machine that implements such an instruction set and get a more easily programmable Turing machine.)
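To make the "same operations" point concrete, here's a minimal sketch of such a tape machine in Python (the names run_tm, flip, etc. are mine, purely illustrative): the only primitives are read at the head, write at the head, and move the head one cell left or right, and the "program" is just a transition table.

```python
from collections import defaultdict

def run_tm(program, tape_input, start="q0", halt="halt", blank="_"):
    """Minimal Turing machine sketch. `program` maps (state, symbol) to
    (next_state, symbol_to_write, move), where move is -1 (left) or +1 (right)."""
    tape = defaultdict(lambda: blank, enumerate(tape_input))
    state, head = start, 0
    while state != halt:
        state, tape[head], move = program[(state, tape[head])]
        head += move                      # move the head one cell left or right
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: flip every bit of a binary string, halting on the first blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
print(run_tm(flip, "10110"))  # -> 01001
```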
Now, TMs are certainly not the only more fundamental model of computing, but they are interesting nonetheless, and they were an influence on the von Neumann architecture and modern computers.
If I were studying theoretical CS, I'd want to learn about TMs, the lambda calculus, FSMs, queue automata, all of it. If you just told me how modern computers work, I'd be left wondering how anyone even conceived of the idea.
And as I said in my earlier comment, when you get down to the core of it, what's really interesting to me, and this is readily apparent if you read Turing's 1936 paper, is that Turing very much came up with the idea by thinking about how HE does computation on graph paper. That to me is such an essential fact that I would not have wanted to miss it, and I wouldn't have known it had I not actually read the paper myself.
You could do this with the C abstract machine, because it's Turing complete. But we go with TMs because they're the most basic. Anything else is an abstraction, so you can stop at any level you like.
> You could do this with the C abstract machine.
I don't think there is a "C abstract machine". I think there are some minimal versions of C out there (there's probably a Featherweight C or something), but the C specification is not given in terms of an abstract machine, and any abstraction of the machinery of C will necessarily lose important implementation details (undefined or unspecified behavior, for instance).
In what way is a TM the most basic? If you mean in terms of simplicity, I think a queue automaton is simpler to explain.
The usual theory of computation textbook goes on to explain finite automata first, then context-free grammars, and finally the Turing machine. Why? Because the execution part of a TM is a finite automaton, while the instructions can use the generative process of a CFG. And then you've got the whole computing world, just from these principles (the other chapters are mostly about what's possible, what's not, and an exploration of other properties).
The nice thing about finite automata is that they can be composed together, which leads to a recursive pattern. That enables the creation of higher abstractions, leading to general-purpose CPUs and special-purpose processors (GPUs, NPUs, cryptographic chipsets, hardware encoders, ...). The same thing applied to CFGs leads to the creation of programming languages.
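As an illustration of the composition point, here's a rough Python sketch (names are mine, just for illustration) of the classic product construction: two DFAs run in lockstep, and the composite accepts exactly the strings both accept.

```python
def product_dfa(d1, d2):
    """Compose two DFAs via the product construction: run both in lockstep."""
    trans = {}
    for (s1, a), t1 in d1["trans"].items():
        for (s2, b), t2 in d2["trans"].items():
            if a == b:
                trans[((s1, s2), a)] = (t1, t2)
    return {"trans": trans,
            "start": (d1["start"], d2["start"]),
            "accept": {(x, y) for x in d1["accept"] for y in d2["accept"]}}

def accepts(dfa, word):
    state = dfa["start"]
    for ch in word:
        state = dfa["trans"][(state, ch)]
    return state in dfa["accept"]

# DFA 1: strings over {a, b} with an even number of 'a's.
even_a = {"trans": {("e", "a"): "o", ("e", "b"): "e",
                    ("o", "a"): "e", ("o", "b"): "o"},
          "start": "e", "accept": {"e"}}
# DFA 2: strings that end in 'b'.
ends_b = {"trans": {("x", "a"): "x", ("x", "b"): "y",
                    ("y", "a"): "x", ("y", "b"): "y"},
          "start": "x", "accept": {"y"}}

both = product_dfa(even_a, ends_b)  # even number of 'a's AND ends in 'b'
print(accepts(both, "aab"))  # -> True
print(accepts(both, "ab"))   # -> False
```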
> If you were say, writing a programming language, it would be nice to understand the fundamentals.
I work in programming languages research. Nobody uses Turing machines for anything. We do talk about decidability when it comes to certain things (type-checking, for instance), but that doesn't use Turing machines directly.
The lambda calculus comes up frequently, on the other hand, but it's used as a base from which we build minimal languages to demonstrate a novel concept. It's a nice theoretical framework around which to model a language abstractly, but essentially no major languages are actually implemented as extensions to the lambda calculus. (The notable exception is, of course, Haskell, but Haskell was explicitly designed in this way because it was intended as a sort of playground for programming languages research — see "A History of Haskell: Being Lazy with Class" from HOPL-III, 2007.) For example, Rust's region-based memory management descends from Cyclone, and iirc (it's been a few years) Cyclone was formalized as an extended lambda calculus, but nobody would suggest that becoming a contributor to the Rust language would require understanding the lambda calculus.
> I don't think Turing machines or Lambda calculus are even that far removed to call them completely theoretical. You can easily implement a few functions in lambda calculus that already resemble modern programming interfaces.
1. Whether your second statement is true is irrelevant to the first. The lambda calculus and Turing machines are theoretical devices. That's a statement of fact, not an indictment of utility. They're abstract machines used to reason about the theory of computation.
2. Your second statement is just false, actually. The pure lambda calculus doesn't have any kind of value other than anonymous functions; you need to get into, e.g., Church encoding to take them anywhere "useful", and that's pretty far removed from how actual language implementations work. If you go to the simply typed lambda calculus with some base types (integers and Booleans, for instance), okay, great, but you're still abstract enough to be removed from actual implementations that the connections are not direct. Even Haskell, the most lambda-calculus-y language out there, is actually based on System F, which is like the STLC but adds universal quantification over types, allowing for parametric polymorphism. And that's such a significant addition that most theorists are pretty quick to argue that Haskell is really a System F derivative rather than a direct lambda calculus derivative (though, of course, all computation can be expressed in terms of the lambda calculus).
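To be concrete about what Church encoding involves, here's a rough transliteration into Python lambdas (names like TRUE and SUCC are just my labels; this is a sketch of the idea, not how any real language implements Booleans or integers):

```python
# Church Booleans: a Boolean is a function that selects one of two arguments.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

# Church numerals: the numeral n applies a function n times to an argument.
ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))
ADD  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies a function."""
    return n(lambda x: x + 1)(0)

TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
print(to_int(ADD(TWO)(THREE)))  # -> 5
print(TRUE("this")("that"))     # -> this
```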
Considering we can implement and simulate Turing machines and get actual calculations out of them, how do you consider them to be completely abstract/theoretical?