The thing is that raw build speed doesn't really help. What you want is a model and a simulation framework. Traditionally, you start with a simple model and a simple framework. When you add a new parameter, you adapt the framework, and once you've found a balanced set of inputs, you decide which parameter to add next. This iteration builds a deep understanding of the model and of the behavior of the system that implements it.
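A minimal sketch of that iterative loop, with hypothetical names (the `simulate` and `calibrate` stand-ins are illustrations, not part of any real framework): each round extends the model by one parameter, then re-balances the inputs before moving on.

```python
def simulate(params, inputs):
    # Stand-in for the real framework: a toy linear model.
    return sum(p * x for p, x in zip(params.values(), inputs))

def calibrate(params, inputs, target):
    # Stand-in for "finding a balanced set of inputs": scale the
    # inputs until the output matches a chosen target.
    out = simulate(params, inputs)
    scale = target / out if out else 1.0
    return [x * scale for x in inputs]

params = {"growth": 1.2}   # start with a simple model
inputs = [10.0]

for new_param, value in [("decay", 0.8), ("noise", 0.1)]:
    params[new_param] = value   # extend the framework by one parameter
    inputs.append(1.0)          # give the new parameter an input
    inputs = calibrate(params, inputs, target=15.0)
    # After each round you know exactly how this one parameter
    # moved the system, before the next one goes in.
```

The point of the loop is not the arithmetic; it's that at every step there is exactly one new degree of freedom to understand.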

With LLMs, you usually build the system for the full parameter set in one go. The speed gain is countered by the fact that there's no understanding of the system, and the simulation space is so large that the user doesn't really bother to explore it. There's been a lot of talk about having a full test suite for the simulation, but tests are discrete and only prove specific points in the input space (a lot of curves can pass through a finite set of points).
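The curves-through-points objection can be made concrete with a toy example (all names here are illustrative): two "models" that agree on every test input yet diverge everywhere in between, so a discrete test suite cannot tell them apart.

```python
# Inputs the hypothetical test suite checks.
test_inputs = [0.0, 1.0, 2.0, 3.0]

def f(x):
    # The behavior the tests were written against.
    return x * x

def g(x):
    # Same values at every test input: the extra term is a product
    # of (x - t) factors, so it vanishes exactly at each test point.
    extra = 1.0
    for t in test_inputs:
        extra *= (x - t)
    return x * x + extra

# The discrete suite passes for both models...
for t in test_inputs:
    assert abs(f(t) - g(t)) < 1e-9

# ...yet between the test points they disagree.
print(f(1.5), g(1.5))  # 2.25 vs 2.8125
```

Finitely many point checks constrain the system only at those points; they say nothing about the rest of the input space.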