Apple has (entirely on-device) OCR, computational photography, image segmentation for creating stickers, image classification and person/pet recognition, and voice-to-text. These shipped and were genuinely useful before "AI" became the buzzword du jour.
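
The OCR, for example, is exposed to developers through the Vision framework and runs entirely on-device. A minimal sketch (the image path here is a placeholder):

```swift
import Vision

// Hypothetical input image; swap in a real path.
let imageURL = URL(fileURLWithPath: "/path/to/image.png")

// On-device text recognition via the Vision framework.
let request = VNRecognizeTextRequest { request, _ in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // Each observation carries ranked candidates; take the best one.
        if let best = observation.topCandidates(1).first {
            print(best.string)
        }
    }
}
request.recognitionLevel = .accurate  // .fast trades accuracy for speed

let handler = VNImageRequestHandler(url: imageURL)
do {
    try handler.perform([request])
} catch {
    print("OCR failed: \(error)")
}
```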

If you're only talking about language models, Apple has on-device language models available to developers and to end-users via Shortcuts, plus image generation for emoji. They just don't advertise most of their neural network models as "AI".
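
The developer-facing side of that is the FoundationModels framework (iOS 26 / macOS 26). Roughly, assuming a device where Apple Intelligence is available, prompting the on-device model looks like this; details of the availability check are from memory and worth verifying against the docs:

```swift
import FoundationModels

// Older hardware, or Apple Intelligence being disabled, makes the model unavailable.
guard case .available = SystemLanguageModel.default.availability else {
    fatalError("On-device model not available")
}

// Prompt the on-device model: no network round-trip, no API key.
let session = LanguageModelSession(
    instructions: "You are a concise assistant."  // optional system-style instructions
)

let response = try await session.respond(to: "Summarize why on-device inference matters, in one sentence.")
print(response.content)
```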