Future of the CPU board?


Nope, most if not all of the current 64-bit ARMv8 processors can run 32-bit ARMv7 code in a kind of compatibility mode. All a 64-bit ARM Linux platform would need to provide is some 32-bit armhf libraries using multilib, and existing 32-bit Pyra software should run fine. In fact, this is pretty much the only option, as one of the major reasons for upgrading would be software that only supports ARMv8.
 
@daveshah : thank you for your quick reply :)

ARMv8, here we come !!! :D

Cheers, Samuel

EDIT: all I want to avoid is community fragmentation, i.e. some would own a first-batch Pyra with the Cortex-A15 (32-bit / ARMv7), while others would own an upgraded Pyra with a new CPU board (64-bit / ARMv8). I understand now that the newer model would be able to run legacy applications, but the converse, I'm afraid, would not be true, right? Does that mean developers would have to provide both 32-bit and 64-bit versions of their applications?
 
No, if there's a functional compatibility layer, devs should only need to ship the 32-bit version, assuming that's what they made first. Unless there's a benefit from rebuilding it for 64-bit instructions, although I'm not 100% sure what that would be. 64-bit mode will give you general purpose registers that can take bigger numbers, but you could already get bigger numbers in your NEON registers. So maybe it'll just mean people can compile with 64-bit numbers rather than having to roll hand-crafted NEON code, which as far as I know compilers still don't spit out. But if you've already gone to the effort of writing that NEON code, I'm not sure what the benefits are other than a simpler job in compiling updates down the line.
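For what it's worth, here's a rough sketch of the "bigger numbers" point (not tested, just from memory of the mnemonics): in 32-bit ARM a 64-bit integer add needs a register pair and the carry flag, while in AArch64 it's a single instruction.
Code:
// 32-bit ARM: a 64-bit value split across R1:R0, adding R3:R2 to it
ADDS R0, R0, R2    // add the low words, setting the carry flag
ADC  R1, R1, R3    // add the high words plus the carry

// AArch64: the whole 64-bit value fits in one register
ADD  X0, X0, X1    // one instruction does the same job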
 
The difference between 32-bit ARMv7 and 64-bit ARMv8 is more than just the data-path width, there are also twice the number of registers in 64-bit mode and some other changes including to NEON and VFP. As the processors are also likely to be tuned to run native new 64-bit code better than legacy 32-bit code, I suspect there would be benefits to recompiling in many cases, unless stuff is hand-crafted to ARMv7 (or closed source).
 
Hmm, that might present a problem with the 3D drivers, for example. Say the new chip requires a binary blob that needs you to stuff the 64-bit registers with values and call the blob in 64-bit mode; you'd then need a much more complex 32-bit open source shim that does the actual calling, which IMO they may well not provide for us. IIRC you need to go into supervisor mode or some hypervisor mode to switch between the two execution states.
 
As the processors are also likely to be tuned to run native new 64-bit code better than legacy 32-bit code
As everything beyond the decoding phase is essentially handled by independent microcode anyway, it really shouldn't matter that much as long as the instructions don't clog the pipeline unnecessarily.

The basic 32bit ARM instructions really exploited the available range of values, they are incredibly versatile and therefore allow for very densely packed code - they were very clean and logically structured as well, which greatly eases the workload of decoding them. It made extensions quite a PITA, though, as the essential ISA hardly left any gaps for more instructions. The ARM64 ISA is extremely condensed compared to the old 32bit ISA, the instructions are very basic and you often need a lot more code to do the exact same thing.
 
The OMAP5 is a classic; even the Neo Freerunner has it, and I love that phone except for the fact that it has a bug where it cannot charge once the battery runs out. I sincerely hope that the Pyra will not be auto-bricked if the battery goes fully dry.
 
I think the Neo Freerunner used an OMAP3. The Pyra is pretty much the only consumer product using the OMAP5, as it was released shortly before TI laid off the OMAP team and stopped marketing it due to increased competition from Qualcomm Snapdragons. But some cars use an OMAP5-derived SoC (DRA7xx and TDA2 series) for their head units and possibly driver assistance systems too (not aware of a confirmed user for the latter though).

The other possibility is that when you are (were?) in a restaurant, your order might have been entered with an OMAP5.
 
As everything beyond the decoding phase is essentially handled by independent microcode anyway, it really shouldn't matter that much as long as the instructions don't clog the pipeline unnecessarily.

The basic 32bit ARM instructions really exploited the available range of values, they are incredibly versatile and therefore allow for very densely packed code - they were very clean and logically structured as well, which greatly eases the workload of decoding them. It made extensions quite a PITA, though, as the essential ISA hardly left any gaps for more instructions. The ARM64 ISA is extremely condensed compared to the old 32bit ISA, the instructions are very basic and you often need a lot more code to do the exact same thing.
The ARM is a RISC chip, and at least in 32-bit mode it is traditionally designed to take only one clock cycle for every instruction, apart from accesses to non-cached memory. I can't find a statement saying that all 64-bit instructions do the same, but I'd kind of assume they do. The pipeline will stall when you ask it to load from an address it hasn't cached already, and it'll be invalidated by a branch it didn't expect, but beyond that it runs a lot more stably than other chips with variable-length instructions.

The 64-bit mode does eliminate the ability to conditionally execute most instructions other than branches. Conditional execution was originally introduced because all branches would invalidate the pipeline, but since then branch prediction has revolutionised the model. In my experience conditional instructions are only used for if statements with a short consequent and alternative, such as:
Code:
if percent<11 then percent=0 else percent=percent*2
That pseudocode could be written in ARM32 as:
Code:
CMP R0,#11 // If R0 compared with 11
MOVLT R0,#0 // is less than, then replace it with 0
ADDGE R0,R0,R0 // else double it

In 64-bit ARM you'd have to do something like:
Code:
CMP  X0, #11       // compare X0 with 11
B.GE else_case     // if greater or equal, jump to the else part
MOV  X0, #0        // then: replace it with 0
B    end_if
else_case:
ADD  X0, X0, X0    // else: double it
end_if:
...
(using 32-bit style instructions that I'm not sure are valid any more). These days, whether the condition is likely to hold or not, the chip is able to assume the conditional branch either happens or doesn't. If it's likely to take that branch, it'll reset the program counter to read the post-branch instruction without invalidating the pipeline. In this specific simple example, I assume the branch instructions, even if they're predicted correctly, will still take one place in the pipeline, and thus take a clock tick to be dealt with (and actually evaluated in case the branch prediction got it wrong). But if you have any more instructions in either branch of the if-then-else structure, and branch prediction successfully guesses the outcome, then that's only one instruction you need to process, rather than however many invalidated instructions you're told to skip. Traditionally, in 32-bit mode, you'd do it more like the 64-bit example anyway if the number of skippable instructions in either case was more than your pipeline length less one, because in that case the time taken to rebuild the pipeline would be less than the time taken to skip all of those invalid instructions. Branch prediction has reduced that threshold to somewhere between one instruction and the pipeline length less one, depending on how effective it is.
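That said, AArch64 does keep a small family of conditional-select instructions (CSEL and friends), so this particular toy example could still be written branch-free. A minimal sketch (untested, and the register allocation is just for illustration):
Code:
CMP  X0, #11          // compare with 11
MOV  X1, #0           // candidate result for the "then" case
ADD  X2, X0, X0       // candidate result for the "else" case (doubled)
CSEL X0, X1, X2, LT   // pick X1 if less than, otherwise X2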

Edit: There's also the dropping of certain bit-rotation options on specific instructions. Rather than every instruction being able to apply any rotation to the last operand, different instructions only permit certain rotations. I don't know why that's happened, and I've not looked into the instruction packing yet, so that might help explain what they've done here. Otherwise it seems weird, other than that I never used the ROL and ROR forms without explicitly knowing what I was doing.
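A quick sketch of what I mean, if I'm reading the encodings right: in 32-bit ARM any data-processing instruction can shift or rotate its last operand, while in AArch64 the arithmetic instructions only accept LSL/LSR/ASR on their shifted-register form, and ROR is left to the logical ones.
Code:
// 32-bit ARM: any data-processing instruction can rotate its last operand
ADD R0, R1, R2, ROR #8    // fine in ARMv7

// AArch64: ADD's shifted-register form only allows LSL/LSR/ASR...
ADD X0, X1, X2, LSL #3    // fine
// ADD X0, X1, X2, ROR #8 // ...this one isn't encodable
ORR X0, X1, X2, ROR #8    // ...but logical instructions still take ROR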
 
The ARM is a RISC chip, and at least in 32-bit mode it is traditionally designed to take only one clock cycle for every instruction, apart from accesses to non-cached memory.
...except that the very idea of a pipeline is exactly not to execute a whole instruction within a single clock cycle; you already need at least one cycle just for decoding the instruction. This may hold true for ancient non-pipelined architecture versions, but it has nothing to do with current ones, especially the 64-bit ones. The Cortex-A15's pipeline already has at least 15 stages, and keeping such a long pipeline saturated is not an easy task: even with perfect branch prediction, an out-of-order architecture and no upcoming memory accesses, you'll always have some wait cycles in between.

The pipeline chops each instruction into the basic tasks that match its stages. The microcode is what controls which stages execute and which kind of operation they perform once the instruction has been decoded. There is no hardware implementation behind the instructions anymore; they are all implemented in "software" within the microcode, which always requires an initial decoding stage to determine what part of the microcode needs to be executed. Implementing a sub-task directly in another instruction will save you executing some pipeline stages, like the mandatory decoding stage, allowing for a much more densely packed pipeline and therefore a higher instruction throughput.
 
There is no hardware implementation behind the instructions anymore; they are all implemented in "software" within the microcode
Huh...?
Are you saying that assembly is no longer a machine language in which the opcodes directly correspond with a set of hardware instruction gates but go through a translation layer of some sort?

I'm confused :$
 
Are you saying that assembly is no longer a machine language in which the opcodes directly correspond with a set of hardware instruction gates but go through a translation layer of some sort?
Well, the microcode is basically just a list of states that tells the CPU which stages and modes of operation to activate. Each time you have to spend more than a single cycle on something, you need something to keep track of what to do next anyway. But still, it can be updated afterwards to fix certain CPU bugs, just like normal software.

The whole pipeline & microcode concept might seem overly complicated and maybe even slower than a classic non-pipelined architecture at first glance, but it greatly reduces the signal path length and therefore allows for much higher clocks.
 
Are you saying that assembly is no longer a machine language in which the opcodes directly correspond with a set of hardware instruction gates but go through a translation layer of some sort?

Yes.

I know that, for at least x86, assembly/machine-language opcodes have not corresponded directly to ALU/MMU/whatever actions for a long time. Rather, machine-code instructions are translated in hardware into so-called µops (micro-ops) that each stand for some very basic operation, such as add, load memory (or perhaps two: set the memory address, then load from that address into the ALU?), load register into ALU (I imagine some kind of stack-based ALU here, which may be highly unrealistic -- and the register may be only indirectly related to the register in the original machine-code instruction, because of register renaming: perhaps an x86_64 machine in fact has 128 general purpose registers internally, though the ISA only provides 16?) etc.

Continuing with the (possibly ridiculous) pseudo-stack based ALU example, an add instruction in an imaginary instruction-set-architecture may be "ADD r1 r2 r3", with r1 being the destination register. PERHAPS this would be translated into something like:
Code:
TOALU1  r22        (from renamed register)
TOALU2  r65        (from renamed register)
ADDU32             (add unsigned 32-bit integer)
FROMALU r18        (to renamed register)
These four µops would all be put into the pipeline, and perhaps two ADD instructions could be executed simultaneously (one doing TOALU while the other does FROMALU, mayhap) on the same ALU.

I am not a specialist, just a user-mode C programmer, so I await correction. :)
 
Yeah, I'm no expert either, but I have the feeling x86 and possibly many other architectures are more like some kind of VLIW machine running an interpreter for a legacy instruction set, with the interpreter coded in the microcode. I seem to remember having heard that POWER does not use microcode, but I suppose that means they have no microcode updates, rather than that there is really no microcode. But I don't have much clue...

It's abstractions all the way down (because at the bottom there's quantum physics and that's also abstract, right?).
 
...except that the very idea of a pipeline is exactly not to execute a whole instruction within a single clock cycle; you already need at least one cycle just for decoding the instruction. This may hold true for ancient non-pipelined architecture versions, but it has nothing to do with current ones, especially the 64-bit ones. The Cortex-A15's pipeline already has at least 15 stages, and keeping such a long pipeline saturated is not an easy task: even with perfect branch prediction, an out-of-order architecture and no upcoming memory accesses, you'll always have some wait cycles in between.
That's a good point. But timings are normally quoted as if the pipeline were full and working optimally, in which case if an instruction reaches the last stage of the pipeline, and there's one in the previous stage as well, it'll come out one tick before the next.

As far as I know, all ARM chips have always used a pipelined design, FWIW. I'm not sure about the ARMv1; as far as I'm aware that was a coprocessor for the BBC Micro only. But the ARMv2-era chips had a three-stage pipeline, and Digital's StrongARM had something like an 8-step pipeline. As you say, that's tended to grow as time goes on. I note on reading the ARMv8 ISA that it has a memory prefetch instruction, which is intended to populate the cache ahead of time. I can't find anything similar in my copy of the ARMv6 architecture reference manual, and whether it was introduced in ARMv7 I don't know, but presumably it's designed to reduce the bubbling that occurs in pipelines due to accessing slow memory.
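A minimal sketch of that prefetch hint in AArch64 assembly (the instruction is PRFM; the offset and hint here are just illustrative):
Code:
PRFM PLDL1KEEP, [X0, #256]   // hint: fetch X0+256 into the L1 cache for an upcoming load
// ... independent work can go here while the line is being fetched ...
LDR  X1, [X0, #256]          // the later load then hopefully hits in the cache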
 