Agreed. The beauty of programming is that you're creating a "mathematical artifact." You can always drill down and figure out exactly what is going on and what will happen with a given set of inputs. With things like concurrency that's not strictly true, but I think the sentiment still holds.
The more practical question, though, is: does that matter? Maybe not.
> The more practical question, though, is: does that matter?
I think it matters quite a lot.
Specifically for knowledge preservation and education.
Yes, but not in all cases, really. There are plenty of throwaway, one-off, experimental, trial-and-error, low-stakes, spammy, dumb-assery, manager-replacing interactions where it's acceptable to produce quickly created and forgotten black boxes of temporary, barely working trash. (Not that people will limit themselves to the proper uses.)
Yep, LLMs still run on hardware with the same fundamental architecture we've had for years, sitting under a standard operating system. The software is written in the same languages we've been using for a long time. Unless there's some seismic shift where all of these layers go away, we'll be maintaining these systems for a while.
In a way, this is also a mathematical artifact: after all, tokens are selected through beam search or some random sampling of likely successor tokens.
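To make that concrete, here's a minimal sketch of temperature sampling over a next-token distribution, in plain Python with made-up logits for a toy vocabulary; it's illustrative only, not any particular model's or library's API:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Scale logits by temperature: lower values approach greedy
        # (argmax), higher values flatten the distribution.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Randomly pick one successor token id, weighted by probability.
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    # Hypothetical logits for a 4-token vocabulary at the next position.
    logits = [2.0, 1.0, 0.5, -1.0]
    print(sample_next_token(logits, temperature=0.8))

Given fixed weights, inputs, and a fixed random seed, the whole pipeline is as deterministic as any other program; the "randomness" is just an ordinary sampling step you can inspect.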