If you are running local LLMs, what are the hardware requirements for my machine? I don't see any mention of that.
Gemma 3n (the model used by this app) would run on any Apple Silicon device (even with 8GB RAM).
Yup, but you're automatically giving up a ton of RAM that could be better used for Slack.