Power Efficiency Comparison: ARM, x86 and MIPS


ekianjo

They used GCC 4.4?! Wow, that's old (especially for the A15). I get much better performance on the Pandora using newer versions, as ARM support keeps improving. So I guess that could change things...
 
Going from 4.6 to 4.8 made a huge difference in the generated FPU code for me.
 
Interesting comparison, but of course it's mostly a compiler test.

Also the setup is kind of limited: they use only one fixed clock speed per architecture (while in reality, the interesting bit is how perf/W scales with clock speed), and they only use artificial benchmarks that, like a CPU stress test, keep the CPU 100% busy until they're done. That tells you something about how CPU-limited stuff will behave in terms of power consumption, but not much about most realistic use cases. The important thing for power consumption is how good the chip is at being idle, and at switching between busy and idle.
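To make that busy/idle point concrete, here's a minimal back-of-the-envelope energy model; all the power figures in it are invented placeholders, not measurements of any real chip:

Code:
# "Race to idle": a fixed amount of work is done at full speed, then the
# CPU idles for the rest of the interval. All numbers are illustrative.

def energy_joules(p_busy_w, p_idle_w, busy_s, total_s):
    """Energy over one interval: busy phase plus idle remainder."""
    return p_busy_w * busy_s + p_idle_w * (total_s - busy_s)

# A chip that burns 2x the power while busy can still win overall
# if it finishes sooner and idles at lower power.
fast_hungry = energy_joules(p_busy_w=2.0, p_idle_w=0.05, busy_s=2.0, total_s=10.0)
slow_leaky  = energy_joules(p_busy_w=1.0, p_idle_w=0.20, busy_s=4.0, total_s=10.0)
print(fast_hungry, slow_leaky)  # about 4.4 J vs 5.2 J: idle power decides it

A 100%-load benchmark only ever exercises the busy term, which is why it says so little about real battery life.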
 
Well, two A15 cores draw about 1.9 watts at 2GHz, without the rest of the SoC! The A12 (~40% faster than an A9) and the A17 (~60% faster than an A9) are way more efficient. Additionally, Debian stable (old kernel) might not be so battery friendly. Linux still needs to catch up in terms of power saving mechanisms, and newer kernel versions help a lot. But hey, at least in terms of processing power the OMAP5 seems to be a beast ; )

Interesting quote from the article:

"Companies that try to claim RISC still has enormous benefits over x86 at higher performance levels are explicitly ignoring the fact that RISC and CISC are terms that describe design strategies and that those strategies were formed in response to technological limitations of the day. [...]

RAM costs could dwarf the cost of other system components and compilers were primitive. Programmer-friendly architectures were a response to these constraints.

Meanwhile, RISC chips could run at significantly higher clocks than their CISC counterparts thanks to reduced complexity — but that’s no longer true today"
 
Well, the OMAP5 is still more power efficient than the OMAP3.


One A15 core at about 300MHz should have roughly the power of an A8 core running at 1.2GHz.


Of course two A15 cores at 2GHz need more power than one A8 at 1GHz, but to complete the same task, the OMAP5 needs less energy than the A8.


That benchmark, like most benchmarks, isn't really useful in my opinion either.


GCC (especially older versions) is better optimized for x86, so the ARM code it generates is not as good as it could be.


So their result is biased towards x86. It only shows that x86 is more power efficient for that specific use case when built with that old compiler.


But that's not the chip, that's the code.


I remember my old netbook had a battery life of 2 hours with Linux, and then after a kernel upgrade it went up to 4 hours.


The hardware itself surely didn't get more power efficient, but the software was more optimized.
 
They used GCC 4.4?! Wow, that's old (especially for the A15).
Yeah, GCC 4.4 doesn't even know the A15 exists, and compiles for soft float only. I guess they compiled for the default ARM target, which is ARM9 with soft float, or something. Pretty lame.
One A15 core at about 300MHz should have roughly the power of an A8 core running at 1.2GHz.
This just can't be true, unless you have floating point or memory bandwidth in mind.
I remember my old netbook had a battery life of 2 hours with Linux, and then after a kernel upgrade it went up to 4 hours.

The hardware itself surely didn't get more power efficient, but the software was more optimized.
Not really; more likely the old kernel wasn't enabling power saving features in the hardware (similar to what we had on the Pandora after release, and what moving to the 3.2 kernel improved). It's certainly not optimization of the software itself.
 
They used GCC 4.4?! Wow, that's old (especially for the A15).
Yeah, GCC 4.4 doesn't even know the A15 exists, and compiles for soft float only. I guess they compiled for the default ARM target, which is ARM9 with soft float, or something. Pretty lame.
Oh... that explains quite a bit ;)

One A15 core at about 300MHz should have roughly the power of an A8 core running at 1.2GHz.
This just can't be true, unless you have floating point or memory bandwidth in mind.
Well, not sure what that was based on, but somewhere on ARM's site there was information on how much better the performance per MHz of the A9 is compared to the A8, and of the A15 compared to the A8.

I just used that as the basis for calculating the final result.

No idea whether that was just marketing talk, or what exactly they compared.

I remember my old netbook had a battery life of 2 hours with Linux, and then after a kernel upgrade it went up to 4 hours.

The hardware itself surely didn't get more power efficient, but the software was more optimized.
Not really; more likely the old kernel wasn't enabling power saving features in the hardware (similar to what we had on the Pandora after release, and what moving to the 3.2 kernel improved). It's certainly not optimization of the software itself.
Well, could it also have been that some Atom features weren't properly implemented in the compiler back then?

I remember it was one of the very first Atoms out there, so the compiler probably wasn't well optimized for Atom either.
 
It's annoying that the actual paper that article summarizes, the one by "the team from the University of Wisconsin", does not seem to be available yet. Supposedly it is not yet published. Thanks to Google, not the author of the article, I've managed to track down an older version of the paper (the one without the A15 and without the Bobcat and Loongson); it is over here: http://www.cs.wisc.edu/vertical/papers/2013/hpca13-isa-power-struggles.pdf

I really don't like it when people present things without giving a decent pointer to the source material.

According to http://en.wikipedia.org/wiki/Comparison_of_ARMv7-A_cores, a Cortex-A15 performs 3.5 to 4 DMIPS/MHz (per core) while a Cortex-A8 does 2.0 and a Cortex-A9 does 2.5. This is for plain integer stuff, no floating point (where A15 should be much better).

So roughly, a 1GHz A8 should correspond to an 800MHz A9 and to a 500-570MHz A15.
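As a quick sanity check of that arithmetic (same DMIPS/MHz figures as above, integer Dhrystone only):

Code:
# Equivalent clocks from the DMIPS/MHz figures quoted above
# (A8: 2.0, A9: 2.5, A15: 3.5-4.0).
DMIPS_PER_MHZ = {"A8": 2.0, "A9": 2.5, "A15-low": 3.5, "A15-high": 4.0}

def equivalent_mhz(target, ref="A8", ref_mhz=1000):
    """Clock at which `target` matches `ref`'s Dhrystone throughput."""
    return DMIPS_PER_MHZ[ref] * ref_mhz / DMIPS_PER_MHZ[target]

print(equivalent_mhz("A9"))        # 800.0
print(equivalent_mhz("A15-high"))  # 500.0
print(equivalent_mhz("A15-low"))   # ~571.4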

Obviously an A15 at 1.66 GHz consumes way more power than an A8 at 600MHz. I can't find any solid numbers, but according to the graph (Fig. 8) it would consume between 4 and 7 times more power (while doing about 5.5 times more work, or in the case of floating point stuff, about 9 times more work).
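Reading those rough ranges back as perf/W ratios (eyeballed from Fig. 8, so treat them as estimates, not measurements):

Code:
# A15@1.66GHz vs A8@600MHz: 4x-7x the power for ~5.5x the integer
# work (~9x for floating point). Ratios > 1 favour the A15.
for power_ratio in (4.0, 7.0):
    print("int perf/W ratio:", round(5.5 / power_ratio, 2))  # 1.38 ... 0.79
    print("fp  perf/W ratio:", round(9.0 / power_ratio, 2))  # 2.25 ... 1.29

So depending on where in that power range the A15 actually sits, its perf/W goes from slightly worse than the A8 to clearly better.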

However, those are just two data points. On the Pandora we run the A8 at anything between 500 and 1200 MHz, and we have two different SoCs with slightly different power characteristics. On the Pyra I would expect us to also have a broad range of clock speeds.

For the A8 @65nm (like the ones in the CC and ReBirth units), 500MHz is pretty much the optimal point in terms of performance per energy: if you clock it higher, the extra consumption is larger than the extra performance; if you clock it lower, the improved consumption does not compensate for the loss in performance. So if you just need to get a task done (e.g. start a browser, render a page), your battery will lose the least amount of juice if you do it at 500MHz. For the A8 @45nm (like the one in the 1GHz units), the optimal point is around 600MHz, but it also performs well in the 1-1.2GHz range.

So I'm interested in what the perf/W of the A15 is at other clock speeds.
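For anyone who wants to play with that: a sketch of how such an optimal point falls out of a power curve. The polynomial below is an invented stand-in shaped like typical DVFS data, not measured OMAP numbers:

Code:
# Energy to finish a fixed task: E(f) = P(f) * t(f), with t(f) = work / f.
# The power curve is a made-up placeholder (superlinear at high clocks),
# NOT measured OMAP3/OMAP5 data.

def power_w(mhz):
    return 0.1 + 0.0004 * mhz + 0.00000033 * mhz ** 2  # placeholder curve

def energy_per_task(mhz, work_mcycles=1000.0):
    return power_w(mhz) * (work_mcycles / mhz)

best = min(range(250, 1250, 50), key=energy_per_task)
print(best, round(energy_per_task(best), 3))  # 550 MHz minimizes energy here

Below the optimum the fixed/leakage term dominates (the task just takes longer); above it, the superlinear term does.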
 
_wb_ is right: perf/W is not a linear curve, and on some SoCs, like the Exynos parts with A15s, the voltage is increased a lot to get the last few hundred MHz. Dynamic power consumption scales quadratically with voltage, so this is a big deal.
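That's the standard first-order model: dynamic power goes as P ~ C * V^2 * f, so a voltage bump needed for the last few hundred MHz hurts twice. A quick illustration (the voltage/clock pairs are invented examples, not actual Exynos operating points):

Code:
# Dynamic power scales as P ~ C * V^2 * f. Voltage/clock pairs below
# are invented, not real Exynos 5250 operating points.
def rel_dynamic_power(volts, mhz, ref_volts=1.0, ref_mhz=1000):
    return (volts / ref_volts) ** 2 * (mhz / ref_mhz)

print(rel_dynamic_power(1.0, 1600))  # 1.6x baseline
print(rel_dynamic_power(1.2, 2000))  # 2.88x: 25% more clock, 80% more power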

I've always thought that this paper had some interesting raw data (like branch misprediction rates, cache misses, etc.) but drew some terrible conclusions from a scientific point of view. You can't say that ISA has zero impact by looking at totally different CPUs with about a million different variables that aren't controlled. Maybe if you were testing thousands of samples you could try to deduce some kind of trend filtered from the noise, but that's not the case here. You could draw the realistic conclusion that ISA doesn't dominate performance and efficiency and of course isn't the only factor, but who was ever saying that? Just because you can get a superior CPU doesn't mean the ISAs are equal; it could be that with a superior ISA your CPU could be even better. If there were, say, a systematic 10% efficiency disadvantage in using x86 over ARM for a design where everything else is held equal, there's no way this study would be able to reveal it.

And like others said, still using GCC 4.4 in a 2014 paper is a mortal sin, especially knowing that it handicaps ARM more than x86; even worse, they disabled any kind of SIMD utilization.

One A15 core at about 300MHz should have roughly the power of an A8 core running at 1.2GHz.
How did you come up with that? D: I think the difference in perf/MHz will be closer to 2x, not 4x. This reflects my experiences testing on Pandora vs the Exynos 5250 Chromebook.

Some code like recompiler output from an emulator will have a higher than average improvement because it tends to be poorly scheduled for Cortex-A8, and is more sensitive to instruction side prefetch and branch prediction.

Other code like aggressively scheduled integer NEON (like in PCSX-reARMed, DraStic) will have a smaller than average improvement.
 
One A15 core at about 300MHz should have roughly the power of an A8 core running at 1.2GHz.
How did you come up with that? D: I think the difference in perf/MHz will be closer to 2x, not 4x. This reflects my experiences testing on Pandora vs the Exynos 5250 Chromebook.


Some code like recompiler output from an emulator will have a higher than average improvement because it tends to be poorly scheduled for Cortex-A8, and is more sensitive to instruction side prefetch and branch prediction.


Other code like aggressively scheduled integer NEON (like in PCSX-reARMed, DraStic) will have a smaller than average improvement.
Obviously the exact factor depends on the mix of instructions. For floating point stuff, the improvement in perf/MHz could be a factor of 4. For integer stuff, probably a factor of 2. You can probably construct artificial examples where it gets worse perf/MHz, and other artificial examples where it gets a factor of 10 improvement.
 
Obviously the exact factor depends on the mix of instructions. For floating point stuff, the improvement in perf/MHz could be a factor of 4. For integer stuff, probably a factor of 2. You can probably construct artificial examples where it gets worse perf/MHz, and other artificial examples where it gets a factor of 10 improvement.
Yeah, so I listed a couple of real-world examples that I know influence at least a few emulators. You might get this huge benefit on code that is dominated by VFP, but how much do you think such things are run on the Pandora?
 