A Turing machine is, in essence, a finite state machine + memory (the tape) + some basic instructions for reading from and writing to that memory.
It's a very simple, rudimentary computer, not some completely abstract mathematical object, which is the claim I was responding to.
With universal Turing machines, it's not difficult to start writing composable functions: an assembly-like instruction set, adders, multipliers, etc.
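To make the "FSM + tape + basic read/write instructions" framing concrete, here's a minimal sketch of a TM interpreter. The rule-table encoding and the binary-increment program are my own illustrative choices, not anything standard:

```python
# A Turing machine is just a transition table (the finite control)
# plus a tape and a moving head. This toy machine increments a
# binary number written on the tape.

def run_tm(tape, rules, state="scan", blank="_", steps=10_000):
    """Run a TM where rules[(state, symbol)] = (new_state, write, move)."""
    cells = dict(enumerate(tape))  # sparse tape, grows as needed
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        state, write, move = rules[(state, sym)]
        cells[head] = write
        head += {"L": -1, "R": 1}[move]
    out = [cells.get(i, blank) for i in range(min(cells), max(cells) + 1)]
    return "".join(out).strip(blank)

# Increment: walk right past the number, then carry 1s to 0s going left.
rules = {
    ("scan", "0"): ("scan", "0", "R"),
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_tm("1011", rules))  # 1011 + 1 = 1100
```

Composing machines like this (an adder out of increments, a multiplier out of adders) is tedious but entirely mechanical, which is the point.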
TMs certainly aren't fundamental, but when you look at TMs and the lambda calculus and understand why they are equivalent, wouldn't you say you gain an understanding of what is fundamental? Constructions like for loops, the stack, etc. are certainly not fundamental, so you'd want to go deeper in your study of languages.
And a barebones traditional CPU is a finite state machine plus random access memory. It teaches you mostly the same things about how simple components combine into universal computation, while having programs that are far easier to comprehend.
And then for another perspective on computation, the lambda calculus is very different and can broaden your thinking. After that you could look at Turing machines and get some value, but only niche value at that point. I wouldn't call them important if you already understand the very low level, and you shouldn't use them as the model for teaching the very low level.
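To show how different that perspective is: in the lambda calculus there is no tape and no memory at all, only single-argument functions. A quick sketch using the standard Church-numeral encoding (expressed here in Python lambdas for convenience):

```python
# Church numerals: encode the number n as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting function applications."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6
```

Arithmetic falls out of pure function composition, with no notion of state transitions anywhere — which is exactly why seeing that this is equivalent to a TM is illuminating.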
>while having programs that are far easier to comprehend.
If you want to learn the fundamentals of something, should you not wish to, you know, think about the fundamentals?
My argument is that FSM+tape and FSM+RAM are at the same level of "fundamental", but one is easier to understand so it should be the thing you teach with. Being more obtuse is not better.
One realization you get from TMs is that programs and data are essentially the same thing, and the separation between them is usually imposed. Once you think about your program as data, it's hard not to notice patterns, and you start to yearn for metaprogramming to express those patterns more concisely.
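A small sketch of that idea: if a program is just a data structure, you can both run it and rewrite it with ordinary code. The tuple encoding and the constant-folding "pass" below are invented for illustration:

```python
# Programs as data: expressions are plain nested tuples, so an
# interpreter and a program-transforming pass are both just
# ordinary functions over that data.

def ev(e, env):
    """Evaluate an expression tuple against a variable environment."""
    if isinstance(e, (int, float)):
        return e
    if isinstance(e, str):
        return env[e]
    op, a, b = e
    if op == "+":
        return ev(a, env) + ev(b, env)
    if op == "*":
        return ev(a, env) * ev(b, env)
    raise ValueError(f"unknown op: {op}")

def fold(e):
    """Metaprogramming: rewrite the program, folding constant subtrees."""
    if not isinstance(e, tuple):
        return e
    op, a, b = fold(e[1]), fold(e[2]), None
    op, a, b = e[0], fold(e[1]), fold(e[2])
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return ev((op, a, b), {})
    return (op, a, b)

prog = ("+", ("*", 2, 3), "x")
print(fold(prog))                 # ('+', 6, 'x') -- rewritten before running
print(ev(fold(prog), {"x": 4}))   # 10
```

Once the program is data like this, passes like `fold` are just the first step; this is the itch that macros and staged metaprogramming scratch.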