ARM vs. X86. Let's vote

To make things clear now: Which Architecture do you want for a Pandora Successor (assuming both are


  • Total voters: 104

Exophase said that Intel said that 90% of ARM-specific Android code would work. There are so many ways to interpret that statement, and considering its source I wouldn't automatically assume the most favourable one.
That's 90% of ARM-only NDK code, which is very different from "90% of Android software." A lot of Android software either uses NDK code that already targets x86 or doesn't use NDK code at all. That number is only going to get lower over time: unless you have ARM assembly, there's no strongly compelling reason not to include x86 (or MIPS, for that matter) in an NDK build. For what it's worth, x86 has been an official part of the NDK for about two years.
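For anyone curious what that looks like in practice, adding x86 to an ndk-build project is normally a one-line change in Application.mk. This is a minimal sketch assuming a stock ndk-build setup; everything else about the project layout is whatever the app already has:

    # Application.mk - minimal sketch; only APP_ABI is shown here.
    # Build the same NDK module for several ABIs; the right .so is picked at install time.
    APP_ABI := armeabi-v7a x86
    # (newer NDKs also accept "APP_ABI := all" to cover every supported ABI, MIPS included)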
 
So again, real-life benchmarks are still missing.
What -exactly- would satisfy that request? Please show me, for an ARM platform, exactly what you're after.

Can you show materials for any modern ARM SoC equivalent to the publicly available information and benchmarks already out there for the Z3770?

Z3770 Processor fact sheet.  No NDA needed to view.

http://www.intel.com/content/www/us/en/processors/atom/atom-z36xxx-z37xxx-datasheet-vol-1.html

Benchmarks:

http://us.hardware.info/reviews/4792/intel-atom-z3770-lbay-trailr-review-strong-competition-for-arm

Price: $37 per unit listed tray price

http://ark.intel.com/products/76760/Intel-Atom-Processor-Z3770-2M-Cache-up-to-2_39-GHz

Thermal design envelope:  2W

http://ark.intel.com/products/76760/Intel-Atom-Processor-Z3770-2M-Cache-up-to-2_39-GHz

Please provide equivalent information for any ARM platform that you consider to be competitive.
 
Breaking backwards compatibility is not so nice, but it's not the end of the world. Also, if internal storage is big enough, we should be able to install most of the software as regular packages from whatever distro we use, and just use that packaging system instead of PND for a lot of the software (e.g. web browsers, LibreOffice, etc). We can of course still use the PND system where it makes sense (e.g. for games), and either make a new repo for the P2, or adapt the PND spec to support multiple architectures.
 Isn't avoiding writes to the NAND one of the foundations of the PND system, and one of the reasons we did not use a repo? I don't think we will just be able to use an existing repo with the Pandora 2.

I don't know why X86 compatibility is so important for you just because of closed source stuff.
It's not. Exophase sees x86 as a viable option and wants to make sure we will actually consider it. Grench and Monstercameron are the ones who want their closed source software so badly.
-God Ginrai
 
Breaking backwards compatibility is not so nice, but it's not the end of the world. Also, if internal storage is big enough, we should be able to install most of the software as regular packages from whatever distro we use, and just use that packaging system instead of PND for a lot of the software (e.g. web browsers, LibreOffice, etc). We can of course still use the PND system where it makes sense (e.g. for games), and either make a new repo for the P2, or adapt the PND spec to support multiple architectures.
 
Isn't avoiding writes to the NAND one of the foundations of the PND system, and one of the reasons we did not use a repo? I don't think we will just be able to use an existing repo with the Pandora 2.
Yes, that and the convenience of "plug-and-play" PND detection. We can keep the PND system, but for "standard" software like what you find in typical GNU/Linux distros, it's probably more convenient to use existing package managers, that keep track of dependencies etc. But NAND size and wear is probably going to be less of an issue anyway.

Anyway, all that is orthogonal to the x86 vs ARM discussion.
 
Breaking backwards compatibility is not so nice, but it's not the end of the world. Also, if internal storage is big enough, we should be able to install most of the software as regular packages from whatever distro we use, and just use that packaging system instead of PND for a lot of the software (e.g. web browsers, LibreOffice, etc). We can of course still use the PND system where it makes sense (e.g. for games), and either make a new repo for the P2, or adapt the PND spec to support multiple architectures.
 
Isn't avoiding writes to the NAND one of the foundations of the PND system, and one of the reasons we did not use a repo? I don't think we will just be able to use an existing repo with the Pandora 2.
Yes, that and the convenience of "plug-and-play" PND detection. We can keep the PND system, but for "standard" software like what you find in typical GNU/Linux distros, it's probably more convenient to use existing package managers, that keep track of dependencies etc. But NAND size and wear is probably going to be less of an issue anyway.

Anyway, all that is orthogonal to the x86 vs ARM discussion.
The Pandora's NAND is so small that there simply isn't room to run a standard distribution, let alone allow people to load to NAND using standard repositories.

Modern NAND sizes are 4GB, 8GB, 16GB, 32GB, 64GB and even 128GB.  If I understand it correctly, better process and wear leveling takes care of that concern.

If it's equipped with a 32GB eMMC NAND module, that would be plenty for any major Linux distribution plus a good pile of software.

Getting 64GB or 128GB would likely be overkill - nice dream, but even I'll admit that they're 'out there'.  BUT - if 64GB costs less than 2X as much as 32GB, then why not?  Those are decisions for further down the road.

Regardless of the SoC, I think we're going to want a lot more NAND - or to do away with it altogether and put in an internal SD or microSD slot.

I don't see it as being a real driver in the ARM vs X86 thing though - it may be a reason to disqualify an SoC if it can't handle >=16GB NAND - but I doubt we'll find a modern SoC that isn't designed to handle it.
 
I don't know why X86 compatibility is so important for you just because of closed source stuff.
It's not. Exophase sees x86 as a viable option and wants to make sure we will actually consider it. Grench and Monstercameron are the ones who want their closed source software so badly.
-God Ginrai
Actually, I'm with Exophase on wanting honest consideration of this option.

I see -adding freedom- to the platform, for people who want to run closed source software or even an alternate OS if they're inclined to, as a bonus.  I have been asked to provide examples - and have.  The 198 (and growing) native Linux x86 Steam games and the native Linux x86 version of Neverwinter Nights 1 would both be good examples.

Honestly, I don't see what the fuss is about.

Linux users have complained for years that commercial game companies only release their stuff for Windows.  That isn't the case now - why wouldn't we want to be open to including them?  They're giving the Linux community exactly what we've been begging for for 10+ years.

Intel has been badgered by the Open Source community for years over their business practices.  There was much ado about the FSF and RYF certification - and we found an SoC that might have the highest credentials possible.  The fact that it comes from Intel and has an x86 instruction set seems to make it taboo.  The behemoth 'evil' company (Intel) gave you/us -exactly- what we've all been asking for in the Z3770.  It's open, it's fast, it's low power draw, it's cheap.  Hate their past all you want, but consider this:  If this product line is successful - do you think they may be inclined to do more this way?

Change is scary - embrace change.

Hypothesis 1:  Intel, in the case of the Z3770 SoC, has a platform that is faster, more open, and more readily available than competing ARM SoCs.

Hypothesis 2:  Commercial games for Linux is a -good- thing and -enhances- freedom.

Hypothesis 3:  The Z3770 is a viable SoC for a handheld gamer-oriented general computing device.

Can you prove any of the above hypotheses wrong?
 
Hypothesis 3 is the one of most interest, and the only way to prove/disprove it is with more test data. If only every manufacturer actually produced a perf/watt graph.
 
Hypothesis 2:  Commercial games for Linux is a -good- thing and -enhances- freedom.
Commercial and Indie are two different things. Most of the games on Steam for Linux are Indie games.

EDIT:

Hypothesis 3 is the one of most interest, and the only way to prove/disprove it is with more test data. If only every manufacturer actually produced a perf/watt graph.
Hypothesis 1 can't be proven for the same reasons. Need more data from both sides. (ARM/Intel benchmarks)

-God Ginrai
 
I have nothing a priori against Bay Trail, but I need to see more information (on power consumption) before I can form an opinion. So far I've seen nothing more detailed than bullshit claims like "3W for web browsing, 2W for 1080p movie playback" which basically says nothing. That and Intel claiming it is very low-power.
http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested/2


With one thread pegged @ 2.4GHz "SoC power" is 800mW to 1.2W. With four threads pegged (and a scaling in performance that's nearly 4x) "SoC power" is around 2.5W. This was using Intel's measurement systems, but unless they're ignoring significant power rails there's probably not that much room to cheat. It should be isolating power consumed by other things. This means that the per-core power consumption at peak is ~1W at most, but in reality could be ~0.5-0.6W, which is phenomenal. It also does very well in all of the CPU benches, best in class. Anandtech doesn't show Geekbench, but they do well there too (maybe not quite as well, but about the same as the fastest ARMs out).


Anandtech's writing is pretty slanted towards Intel and Apple, but I think it's hard to see this as anything that isn't competitive, and certainly makes the claims that it must have terrible efficiency because it's x86 look silly.
~1W per core is better than what we have now: a Pandora at 1GHz uses ~1.3W for its single core at 100% load. So in terms of perf/W, this looks very good - if the measurement was done in a fair and correct way.
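Just to spell the arithmetic out, here's a rough back-of-the-envelope sketch using the figures quoted above. All inputs are approximate, and the "uncore draw stays constant" step is an assumption, not a measurement:

    # Back-of-the-envelope per-core power from the quoted Anandtech/Intel numbers.
    soc_power_1_thread = 1.0   # W, roughly the middle of the quoted 0.8-1.2 W range
    soc_power_4_threads = 2.5  # W, with all four cores pegged

    # Assume the non-core ("uncore") part of the SoC draws about the same in both
    # cases; then the extra three threads account for the difference.
    per_core_estimate = (soc_power_4_threads - soc_power_1_thread) / 3
    print(f"~{per_core_estimate:.2f} W per additional core")  # ~0.50 W

    # Compare against the quoted Pandora figure of ~1.3 W for one core at 1 GHz.
    pandora_core = 1.3
    print(f"Pandora single core: ~{pandora_core} W")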
 
There have been lots of benchmarks so far. The state of mobile benchmarking isn't good, but I wouldn't consider it completely meaningless. Particularly instructive is the delta from Saltwell (the previous Atom CPU) to Silvermont - because you know it's running the same x86 code, or at least code generated by the same compiler. There have been a lot of benchmarks of previous Atoms on Phoronix. Silvermont will probably show up on Phoronix as soon as there's a publicly available netbook or even a Windows tablet that uses it - so within a few weeks.

Everything I've seen so far is more or less in line with my expectations. The perf/MHz is lower than Cortex-A15's, which is what you'd expect since it's a narrower design without as much reordering depth. But it makes up for that by being able to reach a higher clock speed using much less power. And despite that clock speed it has a low L2 cache latency and a fairly low branch mispredict penalty compared to A15.
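As a purely illustrative aside, the trade-off reads roughly like throughput ≈ perf/MHz × clock, which is why a narrower core at a higher clock can land in the same place. The relative perf/MHz numbers below are made-up placeholders, not measured IPC:

    # Toy illustration only: throughput ~ perf/MHz x clock.
    # The perf_per_mhz values are invented placeholders, NOT measurements.
    designs = {
        "wide core, lower clock (A15-like)":           {"perf_per_mhz": 1.00, "mhz": 1800},
        "narrow core, higher clock (Silvermont-like)": {"perf_per_mhz": 0.80, "mhz": 2400},
    }
    for name, d in designs.items():
        rel = d["perf_per_mhz"] * d["mhz"] / (1.00 * 1800)
        print(f"{name}: relative throughput ~{rel:.2f}")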

~1W per core is better than what we have now: a Pandora at 1GHz uses ~1.3W for its single core at 100% load. So in terms of perf/W, this looks very good - if the measurement was done in a fair and correct way.
Well, I think the measurement is actually a direct measurement of some power rails, maybe a summation of all the ones into the SoC (they do break it down into CPU vs GPU rails later), possibly missing some things. AFAIK yours was a power draw at the battery, so you have to account for some amount of regulation inefficiency, and you also must account for the contribution of RAM, which will somewhat scale with clock speed as well. The other part of it is that a NEON stress test is going to be really demanding and probably not a realistic representation of the load a normal program (even a NEON-optimized one) will put on the CPU at full load. I'd be very curious to see what numbers your test gives if you run a loop of DraStic (you can do this in the latest version by setting the speed override to something really high and turning off frameskip; it'll use all of your CPU time this way - don't use fast-forward, because that forces a big frameskip) - that'll be pushing it for a NEON-optimized program.

All of that said, even if we were talking about the same manufacturing process, Cortex-A8 was never really king of perf/W; even the numbers for Cortex-A9 look significantly better. It still does pretty well in the Pandora, though.
 
I have nothing a priori against Bay Trail, but I need to see more information (on power consumption) before I can form an opinion. So far I've seen nothing more detailed than bullshit claims like "3W for web browsing, 2W for 1080p movie playback" which basically says nothing. That and Intel claiming it is very low-power.
http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested/2


With one thread pegged @ 2.4GHz "SoC power" is 800mW to 1.2W. With four threads pegged (and a scaling in performance that's nearly 4x) "SoC power" is around 2.5W. This was using Intel's measurement systems, but unless they're ignoring significant power rails there's probably not that much room to cheat. It should be isolating power consumed by other things. This means that the per-core power consumption at peak is ~1W at most, but in reality could be ~0.5-0.6W, which is phenomenal. It also does very well in all of the CPU benches, best in class. Anandtech doesn't show Geekbench, but they do well there too (maybe not quite as well, but about the same as the fastest ARMs out).


Anandtech's writing is pretty slanted towards Intel and Apple, but I think it's hard to see this as anything that isn't competitive, and certainly makes the claims that it must have terrible efficiency because it's x86 look silly.
~1W per core is better than what we have now: a Pandora at 1GHz uses ~1.3W for its single core at 100% load. So in terms of perf/W, this looks very good - if the measurement was done in a fair and correct way.
I don't know how fairly it was done - but even if we assume everything is unfair and off the mark - by a lot - like 25%...

From the few benchmarks that are out, single-core processing power looks to be 5 to 17.6 times faster than the Pandora's.  Take the smaller estimate, remove 25%, and you still get ~4x faster.

From that 1W-per-core estimate - add 25% and call it 1.25W, roughly the same as the existing Pandora.

In this 'worst case' scenario, single-core performance per watt is about 4 times better than the Pandora's.  It has 4 cores, and power usage scales sub-linearly as cores are added, in the SoC's favor.

I'd say that looks very very good.
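For anyone who wants to check that 'worst case' maths, here it is spelled out with the same rough inputs (the 25% fudge factors are arbitrary by design, and every number here is an estimate from this thread, not a measurement):

    # "Worst case" sanity check of the numbers above.
    speedup_low = 5.0                    # low end of the quoted 5x-17.6x single-core range
    speedup_worst = speedup_low * 0.75   # remove 25%  -> ~3.75x
    core_power_worst = 1.0 * 1.25        # add 25% to the ~1 W/core estimate
    pandora_power = 1.3                  # W, current Pandora core at 100% load

    perf_per_watt_gain = speedup_worst / (core_power_worst / pandora_power)
    print(f"single-core speedup (worst case): ~{speedup_worst:.1f}x")
    print(f"perf/W gain vs Pandora (worst case): ~{perf_per_watt_gain:.1f}x")  # ~3.9x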

Exophase is right though - the first tablets to come out with this will tell us a very interesting story.  I hope they don't royally mess them up in a race to be 'cheaper than X'.
 
I want a Motorola 68000 (that wasn't discussed).
While we're bringing up old processors, I would love to see an RCA COSMAC 1802.

Or, being more practical, the PowerPC architecture
 
Let's say for the sake of argument that Intel's Z3770 is superior in all aspects to anything any ARM vendor could provide.

A ) Is there actually any precedent that Intel would sell OpenPandora GmbH modern Z-series Atom processors, and give the associated support?

This seems to be assumed as a given, but I've seen nothing brought forward to support it.  The SKUs we're talking about aren't exactly something you order off Newegg.

B ) Is there any reason to presume that if they would sell OpenPandora GmbH a modern Z-series Atom processor, and provide the associated support, that they'd sell the top binned processor in that line aka the Z3770 that is being tossed around?

I know people like to obsess over the top bin and benchmarks, but the top bin isn't the only SKU in a product line.

C ) Are there any issues to worry about in sourcing the additional components the board would need to support it, for things like firmware and chipset assets?  Yes, it's an SoC, but so is an OMAP3, and there are additional supporting parts on the current board for various reasons.
 
Those questions can (and should) be asked about anything that is considered. All anyone wants right now is for ED to contact Intel and find out. I would say there's at least some precedent, simply because Intel has always been better situated for this sort of distribution and because we've seen announcements (if not releases) of similar-scale devices using Oak Trail.
 
Fair enough.

If I was ED I'd appreciate two things associated with that.  First would be any insights the community has on _the_ person to contact about that at Intel, given the general line doesn't always yield the best results, and we're talking about a harder sell than most seem to want to admit.  Second would involve the package maintainers, like yourself, giving an assessment of what we're really looking at in terms of adapting the software ecosystem, what would actually be lost, and how big of a hit we're looking at with such an effort.  He is overworked making little boxes of wonder in addition to the work that puts food on his table, after all, and not exactly in the position to have all the time in the world to check on these things himself.

At this point the issue seems to be less one of finding the best SoC, and more one of getting a foot in the door to be able to get hands on a reasonably modern SoC that has significant benefits versus the current OMAP3 without issues with support or other complications that will make it a decision that is regretted later.

Personally I'm of the opinion that processing power isn't as much of an issue as improvements in unit cost without loss of capability, given price remains a barrier even for those sold on the concept.  Even if the most ideal view of the potential increase in processing power is taken, I question how much additional capability in practice we're really looking at versus the OMAP3, or between the different options that involve modern designs.  LibreOffice runs, as do most programs practical to use on a handheld already, and the jump is not enough to move the emulated consoles up a generation.  Mainly it would seem that DOS and Windows games would see a jump in their compatibility list, and the processor wouldn't have to work as hard to handle the existing loads.

So again, real-life benchmarks are still missing.
To the extent we have them, read Anandtech: the Apple A7 SoC in the iPhone 5S is highly competitive with, or tends to beat, Intel's Z3770 in a 10" tablet, despite the A7 having half the cores, being clocked lower, being on a 28nm versus 22nm process, and facing platform constraints (given where Anandtech appears to have gotten their reference data, the Z3770 can exploit Turbo).  I don't see a reason to believe there's anything particularly special about Apple's adaptation of the A57 reference design versus what we'll see as the rest of the market shifts that way, and hence the exuberance about Intel seems odd beyond it being that time in the cycle again.

It's noteworthy that what seems to be being put out there for the Z3770 focuses on the high-usage profile, where it would be competing with the Cortex-A57/A15 class of cores, and ignores the Cortex-A53/A7 part of the equation - which fits with how I'd expect them to play things, and fits my predictions regarding a deep-sleep-state/race-to-execute strategy.  The thing is, the Cortex-A53/A7 parts play right into where a lot of Pandora loads actually operate.

I'd also note that, unless I'm reading this wrong, Turbo on Bay Trail can spike up to 90 degrees Celsius.  You'd probably want a heat spreader to help avoid warping the plastic if you wanted to use one in the Pandora.  Based on that it looks like it does have a lot of nice features, but at the very least a wireless chip will be needed beyond it.
 
Fair enough.

If I was ED I'd appreciate two things associated with that.  First would be any insights the community has on _the_ person to contact about that at Intel, given the general line doesn't always yield the best results, and we're talking about a harder sell than most seem to want to admit.  Second would involve the package maintainers, like yourself, giving an assessment of what we're really looking at in terms of adapting the software ecosystem, what would actually be lost, and how big of a hit we're looking at with such an effort.  He is overworked making little boxes of wonder in addition to the work that puts food on his table, after all, and not exactly in the position to have all the time in the world to check on these things himself.

At this point the issue seems to be less one of finding the best SoC, and more one of getting a foot in the door to be able to get hands on a reasonably modern SoC that has significant benefits versus the current OMAP3 without issues with support or other complications that will make it a decision that is regretted later.

Personally I'm of the opinion that processing power isn't as much of an issue as improvements in unit cost without loss of capability, given price remains a barrier even for those sold on the concept.  Even if the most ideal view of the potential increase in processing power is taken, I question how much additional capability in practice we're really looking at versus the OMAP3, or between the different options that involve modern designs.  LibreOffice runs, as do most programs practical to use on a handheld already, and the jump is not enough to move the emulated consoles up a generation.  Mainly it would seem that DOS and Windows games would see a jump in their compatibility list, and the processor wouldn't have to work as hard to handle the existing loads.

So again, real-life benchmarks are still missing.
To the extent we have them, read Anandtech: the Apple A7 SoC in the iPhone 5S is highly competitive with, or tends to beat, Intel's Z3770 in a 10" tablet, despite the A7 having half the cores, being clocked lower, being on a 28nm versus 22nm process, and facing platform constraints (given where Anandtech appears to have gotten their reference data, the Z3770 can exploit Turbo).  I don't see a reason to believe there's anything particularly special about Apple's adaptation of the A57 reference design versus what we'll see as the rest of the market shifts that way, and hence the exuberance about Intel seems odd beyond it being that time in the cycle again.

It's noteworthy that what seems to be being put out there for the Z3770 focuses on the high-usage profile, where it would be competing with the Cortex-A57/A15 class of cores, and ignores the Cortex-A53/A7 part of the equation - which fits with how I'd expect them to play things, and fits my predictions regarding a deep-sleep-state/race-to-execute strategy.  The thing is, the Cortex-A53/A7 parts play right into where a lot of Pandora loads actually operate.

I'd also note that, unless I'm reading this wrong, Turbo on Bay Trail can spike up to 90 degrees Celsius.  You'd probably want a heat spreader to help avoid warping the plastic if you wanted to use one in the Pandora.  Based on that it looks like it does have a lot of nice features, but at the very least a wireless chip will be needed beyond it.
That data sheet is 291 pages long - do you have a page reference for the 90C?

I suspect that with any current-generation (modern) SoC there will need to be some consideration given to a heat spreader, and maybe even a passive radiator on an outside edge of the case, if the intent is for the unit to be capable of running 'flat out' for extended periods.
 
Fair enough.


If I was ED I'd appreciate two things associated with that.  First would be any insights the community has on _the_ person to contact about that at Intel, given the general line doesn't always yield the best results, and we're talking about a harder sell than most seem to want to admit.  Second would involve the package maintainers, like yourself, giving an assessment of what we're really looking at in terms of adapting the software ecosystem, what would actually be lost, and how big of a hit we're looking at with such an effort.  He is overworked making little boxes of wonder in addition to the work that puts food on his table, after all, and not exactly in the position to have all the time in the world to check on these things himself.
ED has contacted a lot of other companies; I figure he'd have some idea of this - more than I would, anyway.

I don't really know what would go down with the repo - I for one would prefer that we used a proper apt repository added to some standard distribution to begin with. That kind of changes things. It could be suitable to add fat binaries to PND. I don't really know a lot about how PND works; I think skeezix could give some insight on this. Whatever the case, I don't think this is something ED himself would have to invest a lot of time in.
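Just to make "fat binaries in a PND" concrete: the idea would be to ship one binary per architecture inside the package and have a small launcher pick the right one at runtime. The sketch below is hypothetical - the directory layout, file names, and the choice of Python are all invented for illustration; a real PND run script would likely be shell, and the PND spec would need to define the actual convention:

    # Hypothetical launcher logic for a multi-arch ("fat") PND.
    # Layout assumed for illustration only:
    #   ./bin/armv7/mygame
    #   ./bin/x86/mygame
    import os
    import platform
    import sys

    ARCH_DIRS = {
        "armv7l": "armv7",
        "i686": "x86",
        "x86_64": "x86",
    }

    def pick_binary(name="mygame"):
        machine = platform.machine()
        subdir = ARCH_DIRS.get(machine)
        if subdir is None:
            sys.exit(f"no binary shipped for architecture {machine!r}")
        return os.path.join("bin", subdir, name)

    if __name__ == "__main__":
        binary = pick_binary()
        # Replace the launcher process with the architecture-specific binary.
        os.execv(binary, [binary] + sys.argv[1:])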

To the extent we have them, read Anandtech: the Apple A7 SoC in the iPhone 5S is highly competitive with, or tends to beat, Intel's Z3770 in a 10" tablet, despite the A7 having half the cores, being clocked lower, being on a 28nm versus 22nm process, and facing platform constraints (given where Anandtech appears to have gotten their reference data, the Z3770 can exploit Turbo).  I don't see a reason to believe there's anything particularly special about Apple's adaptation of the A57 reference design versus what we'll see as the rest of the market shifts that way, and hence the exuberance about Intel seems odd beyond it being that time in the cycle again.

It's noteworthy that what seems to be being put out there for the Z3770 focuses on the high-usage profile, where it would be competing with the Cortex-A57/A15 class of cores, and ignores the Cortex-A53/A7 part of the equation - which fits with how I'd expect them to play things, and fits my predictions regarding a deep-sleep-state/race-to-execute strategy.  The thing is, the Cortex-A53/A7 parts play right into where a lot of Pandora loads actually operate.

I'd also note that, unless I'm reading this wrong, Turbo on Bay Trail can spike up to 90 degrees Celsius.  You'd probably want a heat spreader to help avoid warping the plastic if you wanted to use one in the Pandora.  Based on that it looks like it does have a lot of nice features, but at the very least a wireless chip will be needed beyond it.
The processor the A7 is using has nothing to do with Cortex-A57. Apple and Qualcomm (and Marvell, and soon nVidia) don't modify ARM CPU cores; theirs are fully designed in-house, so there's very much something special about it. And sure, it performs well despite having half the cores... in tests that don't use more than two cores. This is a pretty old story - none of the CPU tests AT performed are multithreaded at all. Furthermore, they're all highly browser-dependent and they're using a totally different browser, so they're pretty meaningless.

They do include Geekbench numbers but for some reason only for the Apple products. This could be because they didn't actually get to properly review the BayTrail reference tablet, instead only running what Intel allowed them to run at a convention.

But yeah, A7 is very nice - there's just no point daydreaming about getting anything resembling it in an available SoC. Before even talking about A57 (+ A53 big.LITTLE) there needs to be an SoC; so far nothing has even been announced - I'd be surprised if we saw anything available before mid-2014. That's a really long wait.
 
They do include Geekbench numbers but for some reason only for the Apple products. This could be because they didn't actually get to properly review the BayTrail reference tablet, instead only running what Intel allowed them to run at a convention.
iPhone Geekbench 3 multithread score: 2564
from http://anandtech.com/show/7335/the-iphone-5s-review/6

Z3740 Geekbench 3 multithread score: 2620
Z3770 Geekbench 3 multithread score: 3093
from http://www.tabtech.de/windows/dell-venue-11-pro-5130-mit-bay-trail-z3770-durch-benchmark-geleakt

Determine relevance as you see fit.
 
The complete scores aren't really useful because the integer parts are so skewed by crypto acceleration. Really need the subtest scores instead.
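To illustrate why a hardware-accelerated crypto subtest drags the composite around, here's a toy example. The subtest names and scores below are made up, and a simple geometric-mean composite is assumed purely for illustration (Geekbench's real weighting may differ):

    from math import prod

    # Made-up subtest scores, only to show how one accelerated subtest
    # (e.g. AES via hardware crypto) can inflate a composite score.
    subtests = {"AES": 9000, "Dijkstra": 2500, "Lua": 2400, "PNG compress": 2600}

    def geomean(values):
        return prod(values) ** (1 / len(values))

    all_scores = list(subtests.values())
    no_crypto = [v for k, v in subtests.items() if k != "AES"]

    print("composite with crypto:   ", round(geomean(all_scores)))   # ~3440
    print("composite without crypto:", round(geomean(no_crypto)))    # ~2500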
 