The code is only a (very important) part of this type of program. The samples are critical and (for the time being anyway) can't be generated by AI.
This is especially important if you want orchestral instruments that sound realistic. Just think of the many ways a single note can be played by a professional player, and multiply that by the range of the instrument.
Edited to add: not orchestral instruments, and not samples either, but this gives an idea of the complexity of capturing the characteristics of an amplifier so that it can be modeled faithfully: https://neuraldsp.com/quad-cortex-updates/introducing-tina (I'm not affiliated — I'm actually a Line6 customer — but I saw this at work in an interview by Rick Beato and thought it was super interesting)
Is this the vid? https://www.youtube.com/watch?v=9YL8pwF7Mnc
Rick Beato travels to NeuralDSP in Finland.
Agree 100%. The ways a note can be expressed are almost unlimited. For example, I first heard Bach's Cello Suite #1 played by some random cellist. Fell in love with it and listened to it endlessly. Then I heard Yo-Yo Ma play it, and it was a completely different piece.
IIRC the samples in this program were actual performances, so I'm curious how they captured all the variations...