Yeah, I looked into Apple's Foundation Models and was surprised at their limited scope. On reflection it made sense, though: they're giving you the small slice of the LLM capability surface that (1) runs with good performance across all their hardware and (2) works reliably.

It’s not enough for a chat-first research agent, but it’s definitely enough to unlock features that rely on natural language understanding. Seems like a small thing compared to Claude/ChatGPT and the general hype, but still magic in its own context.
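For the "unlock features" point, the shape of the API makes it concrete. This is a rough sketch from memory of the FoundationModels framework announced at WWDC 2025 (the type and method names here are my best recollection, not verified against current docs), showing an on-device summarization feature in a few lines:

```swift
import FoundationModels

// A small, focused feature: summarize user text on-device.
// No network call, no API key — the OS provides the model.
func summarize(_ note: String) async throws -> String {
    // Session wraps the system language model (name per WWDC 2025 docs,
    // as I remember them — treat as an assumption).
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this note in one sentence: \(note)"
    )
    return response.content
}
```

That's the kind of thing it's good for: narrow, well-scoped natural-language tasks embedded in an app, not open-ended conversation.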