I'm still betting on a future Apple TV model being a full-on local LLM machine.
That way they could offload as much of the LLM work as possible onto a device that lives in the home, and all family-linked phones and devices could use it for local inference.
The hardware is way overpowered for what it does as it is, so why not use it for something useful?
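Roughly what I'm picturing, as a totally made-up sketch (the appletv.local host, port, endpoint path, and JSON shape are all assumptions, not anything Apple ships): a phone on the same network just posts a prompt to the box and gets a completion back.

```swift
import Foundation

// Hypothetical request/response shapes for a local inference server
// running on the Apple TV. None of this is a real Apple API.
struct CompletionRequest: Codable {
    let model: String
    let prompt: String
}

struct CompletionResponse: Codable {
    let text: String
}

func completeLocally(prompt: String) async throws -> String {
    // Assumed LAN hostname and port for the Apple TV's inference service.
    let url = URL(string: "http://appletv.local:8080/v1/completions")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        CompletionRequest(model: "local-llm", prompt: prompt)
    )

    // Send the prompt to the home box and decode the generated text.
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(CompletionResponse.self, from: data).text
}
```

Nothing fancy, just every device on the LAN treating the Apple TV as the shared inference endpoint instead of burning battery running models on-device or shipping everything to the cloud.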