If the only principle you look at is "does this compute?" then they're not that different. Otherwise they're about as far apart as you can get.
A Turing machine (at least one that isn't designed in some super wacky way to prove a point) has a few bits of internal state and no random access memory. If you want RAM you have to build a virtual machine on top of the Turing machine. If you try to program it directly it's going to be a byzantine nightmare.
Being restricted to tape, and in particular a single tape, makes Turing machines absolutely awful for teaching programming or how a CPU implements an algorithm. Instructions, data, and status all have to fit in the same place at the same time.
It's so bad that Brainfuck is an order of magnitude better, because it at least has separate instruction and data tapes.
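To make the "separate tapes" point concrete, here's a rough sketch of a minimal Brainfuck interpreter (function name and structure are my own, not from any canonical implementation). Note how `code` and `data` are distinct arrays with their own pointers, whereas a one-tape TM would have to interleave everything:

```python
# Minimal Brainfuck interpreter sketch: two separate "tapes",
# one for instructions (code) and one for data (cells).

def bf(code, data_size=30000):
    data = [0] * data_size
    out, ip, dp = [], 0, 0            # instruction pointer, data pointer
    while ip < len(code):
        c = code[ip]
        if   c == ">": dp += 1
        elif c == "<": dp -= 1
        elif c == "+": data[dp] = (data[dp] + 1) % 256
        elif c == "-": data[dp] = (data[dp] - 1) % 256
        elif c == ".": out.append(chr(data[dp]))
        elif c == "[" and data[dp] == 0:    # jump forward past matching ]
            depth = 1
            while depth: ip += 1; depth += {"[": 1, "]": -1}.get(code[ip], 0)
        elif c == "]" and data[dp] != 0:    # jump back to matching [
            depth = 1
            while depth: ip -= 1; depth += {"]": 1, "[": -1}.get(code[ip], 0)
        ip += 1
    return "".join(out)

print(bf("++++++++[>++++++++<-]>+."))  # -> "A" (cell reaches 65)
```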
From your other comment:

> But as far as understanding computer science, computational theory, etc certainly you'd want to study Turing machines and lambda calculus. If you were say, writing a programming language, it would be nice to understand the fundamentals.
Turing machines are not fundamental. They're just one way to achieve computation that is close to minimal. But they're not completely minimal, and I strongly doubt studying them is going to help you make a programming language. At best it'll help you figure out how to turn something that was never meant to compute into a very very bad computer.
A Turing machine, in essence, is a finite state machine + memory (the tape) + some basic instructions for reading from and writing to the memory.
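That decomposition fits in a few lines of code. Here's a sketch of a TM as exactly those pieces: a transition table (the FSM), a tape, and read/write/move as the only operations (the transition-table format and the bit-flipping example machine are my own invention for illustration):

```python
from collections import defaultdict

# The finite state machine: (state, symbol) -> (new_state, write, head_move).
# This example machine flips every bit on the tape, then halts.
rules = {
    ("scan", "0"): ("scan", "1", +1),   # flip 0 -> 1, move right
    ("scan", "1"): ("scan", "0", +1),   # flip 1 -> 0, move right
    ("scan", "_"): ("halt", "_",  0),   # blank cell: stop
}

def run(tape_str):
    # The memory: an unbounded tape, blank ("_") everywhere off the input.
    tape = defaultdict(lambda: "_", enumerate(tape_str))
    state, head = "scan", 0
    while state != "halt":
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in range(len(tape_str)))

print(run("0110"))  # -> "1001"
```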
It's a very simple, rudimentary computer, not some completely abstract mathematical object, which is what I was responding to.
With universal Turing machines, it's not difficult to start writing composable functions: an assembly-like instruction set, adders, multipliers, etc.
TMs certainly aren't fundamental, but when you look at TMs and lambda calculus and understand why they are equivalent, wouldn't you say you gain an understanding of what is fundamental? Certainly constructions like for loops, the stack, etc. are not fundamental, so you'd want to go deeper in your study of languages.
And a barebones traditional CPU is a finite state machine plus random access memory. It teaches you mostly the same things about how you put together simple components into universal computation, while having programs that are far easier to comprehend.
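For comparison with the TM picture, here's a sketch of that "FSM + RAM" machine (the instruction names and accumulator design are made up for illustration, loosely in the style of a toy single-accumulator CPU). The fetch-decode-execute loop is the FSM; the list is the RAM, holding both program and data:

```python
# Toy FSM+RAM machine: one accumulator, a program counter,
# and a flat RAM that holds instructions and data alike.

def run(ram):
    pc, acc = 0, 0                            # the entire "internal state"
    while True:
        op, arg = ram[pc], ram[pc + 1]        # fetch
        pc += 2
        if op == "LOAD":    acc = ram[arg]    # acc <- RAM[arg]
        elif op == "ADD":   acc += ram[arg]   # acc <- acc + RAM[arg]
        elif op == "STORE": ram[arg] = acc    # RAM[arg] <- acc
        elif op == "HALT":  return ram

# Program: RAM[12] <- RAM[10] + RAM[11]
ram = ["LOAD", 10, "ADD", 11, "STORE", 12, "HALT", 0,
       None, None, 2, 3, 0]
print(run(ram)[12])  # -> 5
```

The program reads almost like the algorithm it implements, which is the whole point: the same universality lessons, minus the tape-shuffling.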
And then for another perspective on computation, lambda calculus is very different and can broaden your thoughts. Then you could look at Turing machines and get some value, but niche value at that point. I wouldn't call it important if you already understand the very low level, and you should not use it as the model for teaching the very low level.
>while having programs that are far easier to comprehend.
If you want to learn the fundamentals of something, should you not wish to, you know, think about the fundamentals?
My argument is that FSM+tape and FSM+RAM are at the same level of "fundamental", but one is easier to understand so it should be the thing you teach with. Being more obtuse is not better.
One realization with TMs is that programs and data are essentially the same, and the separation is usually imposed. When you think about your program as data, it's hard not to notice patterns, and you start to yearn for metaprogramming to express those patterns more directly.
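A tiny sketch of that program-as-data idea (the tuple encoding and function names are illustrative, not any standard API): an expression "program" is just a nested data structure, so a metaprogram can walk it and rewrite it, here doing constant folding:

```python
# A program as data: expressions are nested tuples like ("+", a, b).

def evaluate(expr):
    if isinstance(expr, tuple):
        op, a, b = expr
        a, b = evaluate(a), evaluate(b)
        return a + b if op == "+" else a * b
    return expr

def fold(expr):
    """A metaprogram: takes a program as input, returns a simpler program."""
    if isinstance(expr, tuple):
        op, a, b = expr[0], fold(expr[1]), fold(expr[2])
        if isinstance(a, int) and isinstance(b, int):
            return evaluate((op, a, b))   # precompute constant subtrees
        return (op, a, b)
    return expr

prog = ("+", ("*", 2, 3), ("*", 4, 5))
print(evaluate(prog))  # -> 26
print(fold(prog))      # -> 26 (fully constant, so folding evaluates it away)
```

Once your program is an ordinary value, "optimize the program" is just another function over data, which is exactly the pattern that makes you want metaprogramming.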