I realize it does not address the OP's security concerns, but I'm having success running rocm containers[0] on alpine linux, specifically for llama.cpp. I also got vLLM to run in a rocm container, but I didn't have time to diagnose perf problems, and llama.cpp is working well for my needs.
FWIW, Alpine now has native packages for llama.cpp (using Vulkan).
nice! will check it out
edit: and thanks for the packaging work!