(I’m an EU-based user of Apple products.) I see your point. However, Apple already provides a translation API[0], a speech recognition API[1], and a text-to-speech API[2], so not much more is needed beyond the API you describe; a rough sketch of how the three compose follows. Also note that, while I have not looked into it thoroughly, the kind of API you are discussing seems to share many similarities with the features of the Apple Vision Pro SDK (real-time computation introducing new constraints…)
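To make that concrete, here is a minimal sketch (not a definitive implementation) wiring those three frameworks into a speech-to-translated-speech pipeline. The function names, the French-to-English locale choice, and the file-based input are mine; the Translation framework's session API is only paraphrased in a comment, since it is tied to SwiftUI and I have not checked the exact signature:

    import Speech
    import AVFoundation

    // Keep a strong reference: a deallocated synthesizer stops mid-utterance.
    let synthesizer = AVSpeechSynthesizer()

    // 1. Speech recognition ([1]). Requires prior
    //    SFSpeechRecognizer.requestAuthorization and the usual Info.plist keys.
    func transcribe(fileAt url: URL,
                    locale: Locale = Locale(identifier: "fr-FR"),
                    completion: @escaping (String) -> Void) {
        guard let recognizer = SFSpeechRecognizer(locale: locale) else { return }
        let request = SFSpeechURLRecognitionRequest(url: url)
        _ = recognizer.recognitionTask(with: request) { result, _ in
            guard let result, result.isFinal else { return }
            completion(result.bestTranscription.formattedString)
        }
    }

    // 2. Translation ([0]): the Translation framework hands you a
    //    TranslationSession through SwiftUI's .translationTask modifier;
    //    inside it, roughly `try await session.translate(text).targetText`
    //    yields the translated string (paraphrased; check the docs).

    // 3. Text-to-speech ([2]).
    func speak(_ text: String, language: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: language)
        synthesizer.speak(utterance)
    }

The missing piece relative to what you describe is mostly the glue and the real-time constraints (streaming audio buffers instead of a file, partial results, latency), not the core capabilities.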
I think this situation also shows a strong divide between two visions of Apple's endgame (and I think both exist within the company): exposing those APIs makes the Apple ecosystem better as a whole, together with its satellite accessory and app developers; while keeping them private gives Apple an edge as a hardware-selling company. Personally, I prefer it when Apple embraces its gatekeeper status.
[0]: https://developer.apple.com/documentation/translation/transl...

[1]: https://developer.apple.com/documentation/speech

[2]: https://developer.apple.com/documentation/avfoundation/speec...