Which was a day late and a dollar short, even at release. The ANE is only really good at inference, and even then you get faster results using the Apple Silicon GPU. It's slow, incomplete, and not integrated into the GPU architecture the way Nvidia's tensor cores are, which kills any chance of Apple Silicon seeing serious AI server usage.
If you want to brag about Apple's AI hardware prowess, talk about MLX. The ANE was a pretty obvious mistake compared to Nvidia's approach, and plenty of companies were shipping their own NPUs before Apple made theirs.
These chips were designed to be SoC/SiPs for consumer hardware. It feels odd to judge them for their suitability in servers.
The cores and architecture were designed for smartphones, yet they got put in desktops and rackmount servers anyway. I can judge them as part of whatever product they ship in, and it's no great mystery why Apple Silicon servers aren't flying off the shelves during the AI boom.
The ANE is not designed to be fast. Nothing on the M-series is designed to be fast; any time it is fast, that's a lucky accident.
It's designed for optimal performance-per-watt. The GPU is faster, but it draws more power.