> If you were, say, writing a programming language, it would be nice to understand the fundamentals.
I work in programming languages research. Nobody uses Turing machines for anything. We do talk about decidability for certain problems (type-checking, for instance), but even that doesn't involve Turing machines directly.
The lambda calculus, on the other hand, comes up frequently, but it's used as a base from which we build minimal languages to demonstrate a novel concept. It's a nice theoretical framework around which to model a language abstractly, but essentially no major language is actually implemented as an extension of the lambda calculus. (The notable exception is, of course, Haskell, but Haskell was explicitly designed this way because it was intended as a sort of playground for programming languages research; see "A History of Haskell: Being Lazy with Class" from HOPL-III, 2007.) For example, Rust's region-based memory management descends from Cyclone, and iirc (it's been a few years) Cyclone was formalized as an extended lambda calculus, but nobody would suggest that becoming a contributor to the Rust language requires understanding the lambda calculus.
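To give a flavor of what "a base from which we build minimal languages" means, here's a minimal sketch of an untyped lambda calculus core in Haskell. The names and the naive substitution are my own simplifications, not taken from any particular paper:

```haskell
-- A minimal untyped lambda calculus core: the kind of object a paper
-- extends with one new construct to demonstrate an idea.
data Term
  = Var String        -- x
  | Lam String Term   -- \x. e
  | App Term Term     -- e1 e2
  deriving Show

-- Naive substitution: fine as long as bound names are unique.
-- (Real developments use capture-avoiding substitution or de Bruijn indices.)
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Call-by-name evaluation to weak head normal form.
eval :: Term -> Term
eval (App f a) =
  case eval f of
    Lam x b -> eval (subst x a b)
    f'      -> App f' a
eval t = t

-- (\x. x) (\y. y)  evaluates to  \y. y
main :: IO ()
main = print (eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
```

A paper typically takes a core like this, bolts on one new construct (regions, effects, ownership), and proves properties about the extension; the core itself is scaffolding, not an implementation.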
> I don't think Turing machines or the lambda calculus are so far removed that you could call them completely theoretical. You can easily implement a few functions in lambda calculus that already resemble modern programming interfaces.
1. Whether your second statement is true is irrelevant to the first. The lambda calculus and Turing machines are theoretical devices. That's a statement of fact, not an indictment of their utility. They're abstract formalisms used to reason about computation.
2. Your second statement is just false, actually. The pure lambda calculus doesn't have any kind of value other than anonymous functions; you need to get into, e.g., Church encoding to take it anywhere "useful", and that's pretty far removed from how actual language implementations work. If you go to the simply typed lambda calculus with some base types (integers and Booleans, for instance), okay, great, but you're still abstract enough that the connections to real implementations are indirect. Even Haskell, the most lambda calculus-y language out there, is actually based on System F, which is like the STLC but adds universal quantification over types, allowing for parametric polymorphism. And that's a significant enough addition that most theorists are quick to argue Haskell is really a System F derivative rather than a direct lambda calculus derivative (though, of course, all computation can be expressed in terms of the lambda calculus).
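To make the Church encoding point concrete, here's a sketch in Haskell (my own illustration): numbers are represented purely as functions, and notice that even writing the type down comfortably leans on System F-style universal quantification (`RankNTypes`):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n is "apply a function n times": pure functions,
-- no built-in numbers anywhere. The 'forall' is exactly System F's
-- universal quantification over types.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x

suc :: Church -> Church
suc n f x = f (n f x)

add :: Church -> Church -> Church
add m n f x = m f (n f x)

-- Leaving the encoding: instantiate at Int to read the answer off.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))  -- prints 3
```

Nothing about this resembles how a real compiler represents integers, which is the point: the encoding is a proof device, not an implementation technique.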
Considering we can implement and simulate a Turing machine and get actual calculations out of it, how can you consider them completely abstract/theoretical?
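A Turing machine is certainly simulatable; "theoretical" doesn't mean "unexecutable", it means the formalism exists to support reasoning rather than to be an implementation strategy. To make "we can implement and simulate a Turing machine" concrete, here's a minimal sketch in Haskell (the tape representation and example machine are my own):

```haskell
import qualified Data.Map as M

data Move = L | R

-- Transition function: (state, symbol read) -> (next state, symbol to write, move).
type Delta = M.Map (String, Char) (String, Char, Move)

-- Tape zipper: symbols left of the head (nearest first), the symbol
-- under the head, and symbols to the right.
data Tape = Tape [Char] Char [Char]

blank :: Char
blank = '_'

writeT :: Char -> Tape -> Tape
writeT c (Tape l _ r) = Tape l c r

moveT :: Move -> Tape -> Tape
moveT L (Tape (x:l) c r) = Tape l x (c:r)
moveT L (Tape []    c r) = Tape [] blank (c:r)
moveT R (Tape l c (x:r)) = Tape (c:l) x r
moveT R (Tape l c [])    = Tape (c:l) blank []

-- Run until no transition applies (the machine halts).
run :: Delta -> String -> Tape -> Tape
run delta q tape@(Tape _ c _) =
  case M.lookup (q, c) delta of
    Nothing          -> tape
    Just (q', c', m) -> run delta q' (moveT m (writeT c' tape))

-- Example machine: flip every 0 to 1 and vice versa, moving right.
flipper :: Delta
flipper = M.fromList
  [ (("s", '0'), ("s", '1', R))
  , (("s", '1'), ("s", '0', R))
  ]

main :: IO ()
main =
  let Tape l c r = run flipper "s" (Tape [] '1' "10")  -- tape holds "110"
  in putStrLn (filter (/= blank) (reverse l ++ [c] ++ r))  -- prints "001"
```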