I suspect you have forgotten this was a conversation about day-to-day relevance.

A TM is the basic theory. It's a finite automaton extended with an unbounded tape it can read and write, which lets it store and modify its own instructions. Those instructions can be generated from a context-free grammar. Both CFGs and FAs have a recursive nature that lets you build more advanced mechanisms out of those simple elements. And that's how you get CPUs (and more specialised ones like GPUs, NPUs, etc.) and programming languages.
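To make the distinction concrete, here's a minimal sketch in Python, with a made-up dict-based transition-table encoding: the finite automaton can only read its input and switch state, while the Turing machine gets the same finite control plus a tape it can rewrite and move over.

```python
# Finite automaton: reads input left to right, can only change state.
# This one accepts binary strings with an even number of 1s.
FA = {("even", "0"): "even", ("even", "1"): "odd",
      ("odd", "0"): "odd",   ("odd", "1"): "even"}

def fa_accepts(s, state="even", accepting=("even",)):
    for ch in s:
        state = FA[(state, ch)]
    return state in accepting

# Turing machine: the same kind of finite control, plus a rewritable tape
# and a movable head. This one flips every bit, then halts at the blank "_".
# Rules map (state, symbol) -> (new state, symbol to write, head move).
TM = {("flip", "0"): ("flip", "1", +1),
      ("flip", "1"): ("flip", "0", +1),
      ("flip", "_"): ("halt", "_", 0)}

def tm_run(tape, state="flip", head=0):
    tape = list(tape) + ["_"]            # stand-in for the unbounded tape
    while state != "halt":
        state, tape[head], move = TM[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(fa_accepts("1011"))   # False (three 1s)
print(tm_run("1011"))       # 0100
```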

There are other models of computation, like the lambda calculus and the μ-recursive functions. They can all be mapped to each other. So what we usually do is invent a set of abstractions (an abstract machine) that can be mapped down to a TM-equivalent, denote those abstractions as primitives, and then build a CFG we can use to instruct the machine.
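As a toy illustration of that "mapped to each other" point: Church numerals from the lambda calculus, written directly as Python lambdas. The names (ZERO, SUCC, ADD, to_int) are just mine for illustration.

```python
# Church numerals: the number n is "apply a function f, n times".
ZERO = lambda f: lambda x: x                       # f applied zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Map back into ordinary integers by counting the applications of f.
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))   # 5
```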

In the case of CPUs, the abstractions are the control unit, the registers, the combinational logic, the memory, and so on, and the CFG is the instruction set architecture. On top of that sits the abstract machine for the C programming language, and on top of that sit the abstract machines for languages like Common Lisp (SBCL) and Python.
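You can peek one turtle down from Python source with the standard-library dis module, which shows the bytecode that CPython's own abstract machine (a stack-based VM) executes; the output noted below is approximate and varies by version.

```python
import dis

def add(a, b):
    return a + b

dis.dis(add)
# On CPython 3.x this prints stack-machine instructions roughly like
# LOAD_FAST a / LOAD_FAST b / BINARY_OP (+) / RETURN_VALUE.
# Each opcode is implemented in C, the C is compiled down to ISA
# instructions, and the ISA is realised by the CPU's control unit,
# registers and combinational logic.
```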

It's turtles all the way down. You just choose where you want to stop: at the Python level, the C level, the assembly level, the ISA level, or the theoretical TM level.

A Turing machine isn't the only way to improve a finite automaton to make it universal. It doesn't have to be the bottom turtle.
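For instance, a register (Minsky) machine is a finite automaton plus a couple of unbounded counters instead of a tape, and it's just as universal. A rough sketch, with an instruction encoding I've made up for illustration:

```python
# Two-counter register machine: a finite control stepping through a fixed
# program, with unbounded counters as its only storage. Instructions are
# ("inc", reg, next) or ("jzdec", reg, next_if_zero, next_otherwise).

def run(program, registers):
    pc = 0
    while program[pc] != "halt":
        op = program[pc]
        if op[0] == "inc":
            _, r, nxt = op
            registers[r] += 1
            pc = nxt
        else:  # "jzdec": if the register is zero jump, else decrement
            _, r, if_zero, otherwise = op
            if registers[r] == 0:
                pc = if_zero
            else:
                registers[r] -= 1
                pc = otherwise
    return registers

# Add register b into register a by draining b one step at a time.
program = [
    ("jzdec", "b", 2, 1),   # 0: if b == 0 goto 2 (halt), else b -= 1, goto 1
    ("inc", "a", 0),        # 1: a += 1, goto 0
    "halt",                 # 2
]
print(run(program, {"a": 3, "b": 4}))   # {'a': 7, 'b': 0}
```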

I think it's pretty clear you just want to talk about the theory. You're simply ignoring what I'm saying.