Think about it like this. A processor architecture like ARM, Intel x86, Itanium, Apple M2, or anything of the sort, is basically a guiding document on how a processor should interpret information.
The ARM core architecture specifies commands, data movements, functions, registers, etc. Basically it's a giant "how to do math for dummies (well, rocks)" book. There's many different ways to make rocks think. The spec does NOT specify how the silicon has to implement each of those commands and data movements. That's how Intel and AMD can build CPUs with vastly different internal designs, which leads to very different performance characteristics, while both retain compatibility with the same x86 software and operating systems.
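The spec-vs-silicon split is basically an interface with multiple implementations. A hedged sketch (the class names and the toy ADD behavior are mine, not anything from ARM's actual spec) of the same "ISA contract" implemented two different ways:

```python
# Toy model: the ISA is a contract; different "microarchitectures"
# implement it however they like, as long as results match the spec.
from abc import ABC, abstractmethod

class ToyISA(ABC):
    """The 'spec': defines WHAT add must do, not HOW."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class SimpleCore(ToyISA):
    def add(self, a, b):
        # straightforward 32-bit wrapping add
        return (a + b) & 0xFFFFFFFF

class FancyCore(ToyISA):
    def add(self, a, b):
        # internally different (imagine op fusion, register renaming...)
        # but the architecturally visible result must be identical
        return (a + b) % (1 << 32)

# The same "program" runs on both cores with identical results,
# just as the same x86 binary runs on Intel and AMD silicon.
program = [(7, 5), (0xFFFFFFFF, 1)]
for core in (SimpleCore(), FancyCore()):
    print([core.add(a, b) for a, b in program])  # both print [12, 0]
```

Both cores are free to differ internally (and hence in speed or power), but software only ever sees the contract.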
Apple and Qualcomm have applied a ton of optimization R&D to the ARM reference designs to improve their performance in real silicon. The ARM reference chips exist more as a “proof of functionality” design that is meant to be extremely stable and used as, well, a reference of what the chip is actually supposed to do.
Is this the math rock I keep hearing about?
This article is a couple of years old but goes into some of the factors involved. It specifically covers mobile SoCs, but they're still ARM.
That said, I’m no hardware expert so others may be able to explain in more detail.
From the article from the other reply:
Apple had a head start: experience matters when designing chips.
Apple can ship a more expensive chip as they do not sell them directly.
By experience you mean poaching a bunch of Intel engineers? Before they used Intel they were on PowerPC, which came out of the Apple/IBM/Motorola alliance. They weren't really making their own stuff until quite recently.
I mean experience with ARM64 CPUs... Apple was a couple of years ahead. Read the article from the other reply.



