Hmmm... the chances of that are pretty slim, I'm afraid. "Apple Silicon" is not a new system; it's just one of the large incumbents switching to another architecture (which is also not a first, this being their fourth architecture, after 680x0, PowerPC and x86). In the desktop/notebook market, Wintel and Apple are firmly entrenched, with only ChromeOS and Linux challenging them - plus a few less significant OSes (FreeBSD, ReactOS anyone?). For mobile devices, we had a bit of a "Cambrian explosion", unfortunately followed by a very quick extinction, which left us with another duopoly. Here, too, there are free alternatives, but they have a very marginal market share.
As for actual CPU architectures, there are only two that really matter at the moment: x86/AMD64 and ARM. It's of course very cool that ARM has proved itself flexible enough to be used in everything from (almost) the smallest embedded devices to supercomputers (not to mention the Apple M1), but there isn't as much diversity as there was in the 80s either...
Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.
Because the category "fish" isn't a clade - it's possible to evolve to no longer be a fish - it's more comparable to a specific generation of ARM chips, like perhaps ARM32, than to the ARM line in general. It would be weird to say "64-bit ARMv5" in the same way that it would be weird to say "lactating fish". But it is not weird to say "64-bit ARM", for the same reason it isn't weird to say "lactating euteleostome."
I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?
They weren't horrible either: AArch64 is incompatible with AArch32, but you can still implement both on the same chip with shared internals.
AMD didn't have to extend x86 the way they did, but without buy-in from Intel there was no way forward unless they went the route they did: unless both had agreed to shift to UEFI at the same time and agreed on an ISA, it wasn't going to happen. This is why even a modern x86-64 processor has to boot up in real mode... there was no guarantee that the x64 extensions were going to take off, so AMD had to maintain that strict compatibility to be competitive.
AArch64 had no such constraint, because there is no universal boot protocol for ARM. As long as the UEFI firmware or loader sets the CPU in a state the OS can use, it's fine. The fact that there is one IP holder helped as well.
That said, could AMD make an x86-64 processor without real mode or compatibility mode support? Yes, they could. In fact, I would hope that the processors they ship to console manufacturers fit that bill. There is a lot they could strip out if they only intend to support x86-64.
Short answer is yes. Just one significant example: all instructions are 32 bits long, and there's no Thumb (see the sketch below).
If you read Patterson and Hennessy (Arm edition), there is, I think, a slightly wistful throwaway comment that AArch64 has more in common with their vision of MIPS than with the original Arm approach.
Elsewhere you've commented that it's more similar to x86 -> x64 than x86 -> Itanium - which may be true, but Itanium was a huge change. However, AArch64 is philosophically different to 32-bit Arm, so it's not really like x86 -> x64 at all, which was basically about extending a 32-bit architecture to be 64-bit.
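To make the "all instructions are 32 bits long" point concrete, here's a minimal sketch (Python, function names are mine) of why that matters to anything that has to walk through code: with AArch64 the instruction boundaries are just every fourth byte, while with Thumb-2 you have to inspect each halfword to know whether the next instruction is 2 or 4 bytes long.

    def aarch64_insn_offsets(code: bytes):
        # AArch64: every instruction is exactly 4 bytes, so boundaries are trivial.
        return list(range(0, len(code) - len(code) % 4, 4))

    def thumb2_insn_offsets(code: bytes):
        # Thumb-2: instructions are 2 or 4 bytes. A halfword whose top five bits are
        # 0b11101, 0b11110 or 0b11111 starts a 32-bit encoding; anything else is a
        # standalone 16-bit instruction.
        offsets, i = [], 0
        while i + 2 <= len(code):
            offsets.append(i)
            halfword = int.from_bytes(code[i:i + 2], "little")
            i += 4 if (halfword >> 11) in (0b11101, 0b11110, 0b11111) else 2
        return offsets

That uniformity (together with dropping per-instruction predication and the PC as a general-purpose register) is a lot of what makes AArch64 feel closer to a classic MIPS-style RISC than to the original Arm ISA.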
There's a sort of category problem underlying what you're saying though, perhaps fueled by the fact that ARM has more of a mix-and-match thing going on than Intel chips do.
aarch64 isn't really an equivalent category to x64, because it describes only one portion of the whole ARMv8 spec. ARMv8 still includes the 32-bit instructions and Thumb. I realize you did mention Thumb, but you incorrectly indicated that it doesn't appear at all in ARMv8. As a counterexample, Apple's first 64-bit chip, the A7, supports all three instruction sets. This was how the iPhone 5S, which had an ARMv8 CPU, was able to natively run software that had been compiled for the ARMv7-based iPhone 5 (you can see both slices directly in the fat binaries of that era - sketched below).
A better analogue to aarch64 would be just the long mode portion of x64. The tricky thing is that ARM chips are allowed to drop support for the 32-bit portions of the ISA, as Apple did a few years later with the A11. Like leeter said in the sibling post, though, x64 chip manufacturers don't necessarily have the option to drop support for legacy mode or real mode.
I think that's a fairly important distinction to make for the purposes of this discussion. I wasn't ever really talking about just aarch64; I was talking about all of ARM.
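As an aside, that mixed 32/64-bit era is visible directly in the fat ("universal") Mach-O binaries that iOS apps shipped as at the time. A minimal sketch (Python; the file path is just an example) that reads the fat header and lists the architecture slices - an armv7+arm64 binary is exactly what let one app run natively on both the iPhone 5 and the 5S:

    import struct

    # Mach-O fat header constants (see <mach-o/fat.h> and <mach/machine.h>)
    FAT_MAGIC = 0xCAFEBABE        # fat binaries store their header big-endian
    CPU_TYPE_ARM = 0x0000000C     # 32-bit ARM (armv7 and friends)
    CPU_TYPE_ARM64 = 0x0100000C   # CPU_TYPE_ARM | CPU_ARCH_ABI64

    def list_arch_slices(path):
        # Returns the CPU type of each slice in a fat Mach-O file.
        with open(path, "rb") as f:
            magic, nfat_arch = struct.unpack(">II", f.read(8))
            if magic != FAT_MAGIC:
                return ["not a (32-bit header) fat binary"]
            slices = []
            for _ in range(nfat_arch):
                cputype, _subtype, _offset, _size, _align = struct.unpack(">5I", f.read(20))
                if cputype == CPU_TYPE_ARM64:
                    slices.append("arm64")
                elif cputype == CPU_TYPE_ARM:
                    slices.append("arm (32-bit)")
                else:
                    slices.append(hex(cputype))
            return slices

    # list_arch_slices("MyApp")  ->  e.g. ['arm (32-bit)', 'arm64'] for a dual-arch build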
> Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.
> I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?
> I wasn't ever really talking about just aarch64; I was talking about all of ARM.
M1 is AArch64 only. You incorrectly brought ARMv8 into the discussion. AArch32 is irrelevant in the context of the M1.
It's fair to highlight the worse backwards compatibility, but then you can't bring AArch32 (which Apple dropped years ago) back into the picture to try to claim that the M1 somehow uses an old architecture.
Is it? It's not like Apple moving MacBooks to M1 happened in a vacuum. M1 is only the latest in a whole series of Apple ARM chips, about half of which were non-aarch64.
That context actually seems extremely relevant to me; it demonstrates that Apple is not just jumping wholesale to a brand new architecture. They migrated the way large companies usually do: slowly, incrementally, testing the waters as they go. And aarch64 was absolutely not involved in the formative stages (which are arguably the most important bits) of that process. It hadn't even come into existence yet when Apple released their first product based on Apple Silicon. Heck, you can make a case that the process's roots go back way before Apple Silicon, all the way to the early '90s, when Apple first shipped the Newton.
Note, too, that the person I was originally replying to didn't say "M1", they said "Apple Silicon." In the interest of leaving the goalpost in one place, I followed that precedent.
I'd regard the fact that no one seemed to notice that Arm has switched to a more modern 64-bit architecture (AArch64), one that has very little in common with its predecessors, as quite impressive.
We'll see. The ARM architecture is now about 36 years old. I believe RISC-V originated about 10 years ago. I think MIPS started about 40 years ago, but I believe it has finally ground to a stop.
Not sure why you'd say that - especially if you look at Arm v9 and the fact that the architecture is starting to make inroads into the server market.
RISC-V is open source, which is great in some respects but not helpful in others.
It's arguably a proto-RISC architecture (e.g. ADD has to be coded explicitly from CLC and one or more ADC - sketched below - and the register file is memory locations 00-FF, etc.), but it has little to do with ARM.
Edit: Granted, Sophie Wilson, one of the designers of ARM, is on record stating that the 6502 didn't inspire anything in particular, besides being one of the few inputs to her pool of ideas (the 16032 and Berkeley RISC being the others): https://people.cs.clemson.edu/~mark/admired_designs.html#wil... So... arguably :)
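To illustrate the CLC/ADC point above: the 6502 has no plain ADD instruction, so a multi-byte addition is spelled out as a single CLC followed by one ADC per byte, with the carry flag threading the bytes together. A rough Python model of that sequence (the helper names are mine, not taken from any real emulator):

    def adc(a, operand, carry):
        # Model of 6502 ADC: a + operand + carry in, returns (8-bit result, carry out).
        total = a + operand + carry
        return total & 0xFF, 1 if total > 0xFF else 0

    def add16(x, y):
        # A 16-bit add the 6502 way: CLC, then ADC the low bytes, then ADC the high bytes.
        carry = 0                                    # CLC
        lo, carry = adc(x & 0xFF, y & 0xFF, carry)   # ADC low bytes
        hi, carry = adc(x >> 8, y >> 8, carry)       # ADC high bytes (consumes the carry)
        return (hi << 8) | lo

    assert add16(0x01FF, 0x0001) == 0x0200           # carry propagates from low byte to high byte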
PowerPC/IBM is still a big player in the server/HPC market. They do many cool things with their architectures since cost is less of a factor (dynamic SMP, switchable endianness, OMI), but they suck to build code for from an out-of-the-box experience standpoint.
This is the first I have heard of Apple doing this, and I feel like, in my position, I would have heard of this... I have just spent some time searching around myself trying to find any such reference, and the closest I could find was the opposite: an article from Electrical Engineering Journal that said that Apple could have, but stated they didn't need to and pretty strongly implied they didn't, even going so far as to claim that they couldn't in any drastic way due to the restrictions "even Apple" faces as an ARM licensee.
Can you provide some more information on this? I would love to be able to hit them on this, as this would actually be really upsetting to a lot of people I know who work on toolchains.
The rumor I've heard is that Apple is keeping their custom extensions to the ISA undocumented in deference to ARM's desire not to have the instruction set just completely fragment into a bunch of mutually incompatible company-specific dialects.
It's worth noting that the article you link predates the public release of the M1 by a good 10 months. Given how secretive Apple tends to be about these sorts of things, one can only assume that it was based almost entirely on rumor and conjecture.
Undocumented or not, they would be hard to hide: I would think you could scan through MacOS binaries and find them, if they exist. (I guess it's still possible they exist even if you don't find them, maybe unused or only produced by JITs, but that doesn't sound very useful.)
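A minimal sketch of that kind of scan (Python; it assumes you've already dumped the raw instruction bytes, e.g. a __text section, to a file, and OPCODE_MASK/OPCODE_VALUE are hypothetical placeholders for whatever undocumented encoding you're hunting):

    import struct

    # Hypothetical pattern - substitute the mask/value of the encoding you care about.
    OPCODE_MASK = 0xFFFFFC00
    OPCODE_VALUE = 0x00201000

    def find_candidates(path):
        # Walk a dump of AArch64 code 4 bytes at a time and report matching words.
        with open(path, "rb") as f:
            data = f.read()
        hits = []
        for off in range(0, len(data) - 3, 4):       # AArch64 instructions are 4 bytes, aligned
            word = struct.unpack_from("<I", data, off)[0]
            if word & OPCODE_MASK == OPCODE_VALUE:
                hits.append((off, word))
        return hits

    # for off, word in find_candidates("some_dylib_text.bin"):
    #     print(hex(off), hex(word))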
Yup. If you follow the links from that article, you'll get to the site of the person who found and documented them. It doesn't look like it took too much effort.
But it's not really about trying to prevent anyone from discovering that these opcodes exist. It's about trying to discourage their widespread use. If it's undocumented, then they don't have to support it, and anyone who's expecting support knows to steer clear. That gives them more freedom to change the behavior of this coprocessor in future iterations of the chip. And people can still get at them, because Apple uses them in system libraries such as the OS X implementation of BLAS.
Every ARM licensee does this though; they license the core designs from ARM and add features (including additional instructions) around it to package into an SOC. It’s just that Apple has the scale to design their own SOCs instead of buying one from Qualcomm or Samsung.
Which "most", though? There is most as in number of cores shipped, and most as in number of organizations who have a license.
On the second, I have no doubt you are correct - I know of several organizations that have licensed ARM just to ensure they have a long-term plan to get more chips without the CPU going obsolete again (one company has spent billions porting software that was working perfectly on a 16-bit CPU that went obsolete - there was plenty of CPU for any foreseeable feature, but no ability to get more). These organizations want something standard - they are kind of hoping that they can combine a production run with someone else in 10 years when they need more supply, and thus save money on setup fees.
The first is a lot harder. The big players ship a lot of CPUs, and they have the volumes to make some customization for their use case worth it. However, I don't know how to get real numbers.