Cool.
While I think this is indeed impressive and has a specific use case (e.g. in the embedded sector), I'm not totally convinced that the quality is good enough to replace bigger models.
With fish-speech[1] and F5-TTS[2] there are at least two open-source models pushing the quality limits of offline text-to-speech. I tested F5-TTS on an old Nvidia 1660 (6GB VRAM) and it worked OK-ish, so running it on slightly more modern hardware won't cost you a fortune and will produce MUCH higher quality output, with multi-language and zero-shot support.
For Android there is SherpaTTS[3], which plays pretty well with most TTS applications.
1: https://github.com/fishaudio/fish-speech
2: https://github.com/SWivid/F5-TTS
We have released just a preview of the model. We hope to make the model much better in future releases.
Note that Fish Speech's weights are licensed for non-commercial use only.
Also, what are the VRAM requirements of those two? This model has 15 million parameters, so it might run on low-power, sub-$100 computers with up-to-date software. Your hardware was an out-of-date 6GB GPU.
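As a rough back-of-envelope check of why 15M parameters fits on such small hardware (assuming fp16 weights and ignoring activations and runtime overhead, which add more in practice):

```python
# Rough weight-memory estimate for a 15M-parameter model.
# Assumption: fp16 storage, i.e. 2 bytes per parameter;
# activations and runtime overhead are not counted.
params = 15_000_000
bytes_per_param = 2  # fp16
mib = params * bytes_per_param / 2**20
print(f"~{mib:.0f} MiB of weights")  # ~29 MiB
```

At ~29 MiB of weights, the model fits comfortably in the RAM of a Raspberry-Pi-class board, whereas billion-parameter TTS models need gigabytes of VRAM just for their weights.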