Where did you get your numbers?

The M1 core is 2.28 mm² and the Zen3 core is 4.05 mm² if you count all the core-specific stuff (3.09 mm² even if you exclude the power regulation and the L2 cache that only that core can use). That makes the Zen3 core 36-78% larger (or, equivalently, the M1 core 26-44% smaller) for generally worse performance (and all-around worse performance once per-core power is constrained to something realistic). I'd also note that Oryon and C1-Ultra appear to be much more area-efficient than Apple's more recent cores.
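To show where those percentages come from, here's a quick sketch of the arithmetic using the die measurements quoted above (figures are illustrative, taken straight from this comment):

```python
# Die-area figures quoted above, in mm^2.
m1_core = 2.28
zen3_core_full = 4.05  # including core-private power regulation and L2
zen3_core_trim = 3.09  # excluding those

# How much larger Zen3 is, and how much smaller M1 is, per variant:
larger_trim = zen3_core_trim / m1_core - 1   # ~0.36 -> ~36% larger
larger_full = zen3_core_full / m1_core - 1   # ~0.78 -> ~78% larger
smaller_trim = 1 - m1_core / zen3_core_trim  # ~0.26 -> ~26% smaller
smaller_full = 1 - m1_core / zen3_core_full  # ~0.44 -> ~44% smaller

print(f"Zen3 is {larger_trim:.0%} to {larger_full:.0%} larger")
print(f"M1 is {smaller_trim:.0%} to {smaller_full:.0%} smaller")
```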

We're at the point where Apple's E-cores are approaching Zen and Cove cores in IPC while using just 0.6 W at 2.6 GHz.

If you count EVERYTHING outside of the core (power delivery, matrix co-processor, last-level cache, shared logic, etc.) and divide by core count, we get 15.555 mm² / 4 = 3.89 mm² for M1.

If we do the same for Zen3 (excluding test logic and Infinity Fabric), we have 67.85 mm² / 8 = 8.48 mm².
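The per-core averaging works out like this (the mm² totals are the ones quoted in this comment, read off the linked die shots):

```python
# Totals from the die-shot analyses referenced below, in mm^2.
m1_cluster = 15.555  # 4 P-cores plus shared cache, matrix co-processor, power, etc.
zen3_ccd = 67.85     # 8 cores plus L3, excluding test logic and Infinity Fabric

# Average silicon budget per core, uncore included:
print(f"M1:   {m1_cluster / 4:.2f} mm^2 per core")  # 3.89
print(f"Zen3: {zen3_ccd / 8:.2f} mm^2 per core")    # 8.48
```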

M1 has 12 MB of coherent last-level cache, or 3 MB per core, while Zen3 has 4 MB of coherent L2 and 32 MB of victim cache (an L3 that buffers IO/RAM reads and writes and holds lines evicted from L2 in the hope they'll be reused). You can analyze this however you like, but M1's cache design is more efficient and gives a higher hit rate despite being smaller. Chips and Cheese has an interesting writeup of this as applied to Golden Cove.

https://semianalysis.com/2022/06/10/apple-m2-die-shot-and-ar...

https://x.com/Locuza_/status/1538696558920736769

https://semianalysis.com/2022/06/17/amd-to-infinity-and-beyo...

https://chipsandcheese.com/p/going-armchair-quarterback-on-g...

What point are you trying to argue with everyone? You seem to be quibbling over everything without stating an actual POV.

They called me out as being wrong, then cited incorrect data to support their claim. I responded with the real numbers and the sources to back them up. What else should be done?