You can use a local LLM and ask it to call tools, so it is faster.

"so it is faster" than what? A cloud hosted LLM? That's a pretty low bar. It's certainly not faster than jq.

There is hardware that can run jq but not a local AI model powerful enough to make the filtering reliable, e.g., a Raspberry Pi.
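For what it's worth, the kind of filtering being debated is a one-liner in jq, and even in plain Python it is a few deterministic lines that run instantly on something as small as a Pi. A rough sketch with made-up data (the field names and values are illustrative, not from any real workload):

```python
import json

# Illustrative input; in jq the same filter would be:
#   jq '[.users[] | select(.age >= 18) | .name]'
doc = json.loads('{"users": [{"name": "ada", "age": 36}, {"name": "kid", "age": 12}]}')

# Deterministic filter: no model, no prompt, same answer every time
adults = [u["name"] for u in doc["users"] if u["age"] >= 18]
print(adults)
```

No LLM, local or hosted, is going to beat that on latency or reliability for a fixed, well-specified filter.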
