possible SoC updates.


^ I don't know; the context in which he makes the statement is directly in regard to SoCs. I guess we will see in about 3 months' time when they start hitting the market.
I don't think you understand: going forward, all Atom products will be SoCs. That still says nothing about the phone and/or tablet platforms.

I don't think they're realistically going to be in a device in three months. Maybe 5-6.
 
Even if it's 'only 32-bit', that doesn't mean it's 'worse' than the ARM products.

I'm betting that they have one platform and will disable chunks of it for different applications.  64-bit gets cut for tablets/phones.  Mobile connectivity gets cut for blade servers.

The theoretical beast configuration of 64-bit + mobile all turned on and active probably won't exist until they've exhausted the revenue stream of 32-bit mobile devices.
 
No one said being 32-bit makes it worse than ARM. I for one don't think 64-bit will be that critical for the next Pandora, even if it's nice to have. It is a little more beneficial on x86 since it enables extra registers, but that's not exactly critical either.

Silvermont looks like a very capable core that could easily give ARM SoCs available at the time a run for their money in CPU performance, on top of enabling x86 compatibility. It'd also make a lot of ARM-specific optimizations we made for Pandora software worthless, so I guess that'll have to be weighed in with everything else.

On that note, I do want to offer a word of caution: I doubt Silvermont will offer quite the blowout Intel says it will, at least not for software that is really well optimized for both. Intel has been exploiting the hell out of stronger compilers on the x86 side... a great example is AnTuTu, where they somehow got the developers to use ICC to compile it (probably with $$$, and by doing the work for them) while the ARM build is totally untuned GCC with no vectorization at all. Of course, the caveat in all this is that no one else uses ICC on Android. The mobile benchmark scene right now is full of crap like this; everyone's doing it. nVidia has also released some really astoundingly unrealistic numbers comparing Tegra 4 and Snapdragon S600.
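To illustrate the kind of thing that skews these numbers, here's a hypothetical benchmark-style inner loop (just a sketch, not the actual AnTuTu code): whether it gets auto-vectorized depends almost entirely on the compiler and flags, so the same source can produce very different scores.

    /* Hypothetical hot loop: a trivial integer array sum.  ICC at its
     * defaults will typically auto-vectorize this with SSE, while an
     * untuned gcc -O2 build of that era (and an ARM build without NEON
     * enabled) leaves it scalar. */
    #include <stddef.h>

    int sum_array(const int *a, size_t n)
    {
        int sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }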
 
No one said being 32-bit makes it worse than ARM. I for one don't think 64-bit will be that critical for the next Pandora, even if it's nice to have. It is a little more beneficial on x86 since it enables extra registers, but that's not exactly critical either.
Agreed.  Although it would be hilarious to have a 64-bit multi-core x86 computer with 64GB of RAM and blah-blah-blah specs, it would be rather pointless.

A 32-bit x86 with 2-3 GB of (video-shared) RAM on a tightly integrated SoC, with 2GHz Core 2 Duo (E4400) or better performance, inside 2-3 times a current ARM SoC's envelope - I could see that happening.  The E4400's TDP is 65W.  The Atom S-160's max TDP is 8.5W and it performs at ~90% of E4400 levels.  I haven't found any reference to the power/dissipation requirements of the new 22nm SoC.

It is a long distance from that kind of power drain to the existing Pandora though.  I have a 700mA micro-USB charger from an old Blackberry that can push enough DC power into my Pandora to run and charge it - so the whole Pandora runs in a smaller window than 1/10th the last generation of Atom chip's TDP requirement.

Does anyone know the TDP for the new 22nm atom SoC?
 
A 32-bit x86 with 2-3 GB of (video-shared) RAM on a tightly integrated SoC, with 2GHz Core 2 Duo (E4400) or better performance, inside 2-3 times a current ARM SoC's envelope - I could see that happening.  The E4400's TDP is 65W.  The Atom S-160's max TDP is 8.5W and it performs at ~90% of E4400 levels.  I haven't found any reference to the power/dissipation requirements of the new 22nm SoC.
3GB of RAM probably doesn't make sense no matter how you slice it, so you'd likely go with 2GB or 4GB.

I think you meant Atom S1260. Not sure where you read that it performs within 90% of a 2GHz Core 2 Duo but I'm confident that there will be few real world situations where that's true. And it'll only even come closest when four threads are maxed out. With two threads on both it'd be lucky to even hit half the performance.

Likewise, I don't see tablet or phone Silvermont SoCs performing at 2GHz Core 2 Duo levels on a core-per-core basis either, unless they're clocking beyond the 2.4GHz number I've seen floating for Bay Trail-T. With 4 maxed threads the story is different since BayTrail-T parts will come in quad-core flavors.

8.5W is actually pretty high for that level of performance for Atom. CloverTrail+ chips probably use only a fraction of that, even with both cores loaded. I'm not actually sure where S1260 is even used..

It is a long distance from that kind of power drain to the existing Pandora though.  I have a 700mA micro-USB charger from an old Blackberry that can push enough DC power into my Pandora to run and charge it - so the whole Pandora runs in a smaller window than 1/10th the last generation of Atom chip's TDP requirement.
You do know 700mA over USB is 3.5W right?
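(Assuming the nominal 5V USB bus voltage: P = V × I = 5 V × 0.7 A = 3.5 W, which is roughly 40% of the S1260's 8.5W TDP.)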

Does anyone know the TDP for the new 22nm atom SoC?
I've seen 2W "SDP" numbers for BayTrail-T. Intel is being kind of dodgy with power numbers lately. But ARM SoC vendors don't tend to give TDP anyway.
 
A 32-bit x86 with 2-3 GB of (video-shared) RAM on a tightly integrated SoC, with 2GHz Core 2 Duo (E4400) or better performance, inside 2-3 times a current ARM SoC's envelope - I could see that happening.  The E4400's TDP is 65W.  The Atom S-160's max TDP is 8.5W and it performs at ~90% of E4400 levels.  I haven't found any reference to the power/dissipation requirements of the new 22nm SoC.
3GB of RAM probably doesn't make sense no matter how you slice it, so you'd likely go with 2GB or 4GB.


I think you meant Atom S1260. Not sure where you read that it performs within 90% of a 2GHz Core 2 Duo but I'm confident that there will be few real world situations where that's true. And it'll only even come closest when four threads are maxed out. With two threads on both it'd be lucky to even hit half the performance.


Likewise, I don't see tablet or phone Silvermont SoCs performing at 2GHz Core 2 Duo levels on a core-per-core basis either, unless they're clocking beyond the 2.4GHz number I've seen floating for Bay Trail-T. With 4 maxed threads the story is different since BayTrail-T parts will come in quad-core flavors.


8.5W is actually pretty high for that level of performance for Atom. CloverTrail+ chips probably use only a fraction of that, even with both cores loaded. I'm not actually sure where S1260 is even used..

It is a long distance from that kind of power drain to the existing Pandora though.  I have a 700mA micro-USB charger from an old Blackberry that can push enough DC power into my Pandora to run and charge it - so the whole Pandora runs in a smaller window than 1/10th the last generation of Atom chip's TDP requirement.
You do know 700mA over USB is 3.5W right?

Does anyone know the TDP for the new 22nm atom SoC?
I've seen 2W "SDP" numbers for BayTrail-T. Intel is being kind of dodgy with power numbers lately. But ARM SoC vendors don't tend to give TDP anyway.
3GB RAM works fine with 3-way interleaving found on some Xeon systems.  See also HP Z800 workstations.  There is precedent for Intel doing that.

You're right on the 3.5W.  I must have slipped a decimal in my brain when I posted that the first time.  So instead of 1/10th of the power envelope it's 1/3.

I wish there were more knowns than unknowns in these 'soon to be' SoCs.

Somewhere someone is going to make an X86 handheld out of them though - and they're going to completely mess it up like the OQO.

If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
 
In any case, the "P2" will have to compensate for the power draw of the newer, more powerful SoCs; a 15Wh battery won't power an S800, T4, Exynos 5 Octa, i.MX, or Exynos 4 for more than 4 hours fully pegged.
 
3GB RAM works fine with 3-way interleaving found on some Xeon systems.  See also HP Z800 workstations.  There is precedent for Intel doing that.

You're right on the 3.5W.  I must have slipped a decimal in my brain when I posted that the first time.  So instead of 1/10th of the power envelope it's 1/3.

I wish there were more knowns than unknowns in these 'soon to be' SoCs.

Somewhere someone is going to make an X86 handheld out of them though - and they're going to completely mess it up like the OQO.

If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
To some, no, but to the majority I believe it would be; it's the community that makes it what it is... or at least that is what I hope.

Aside: you know what else has a TDP of around 1/2 - 1/3 of that Atom's?
 
3GB RAM works fine with 3-way interleaving found on some Xeon systems.  See also HP Z800 workstations.  There is precedent for Intel doing that.
That's true, but this isn't a Xeon and it won't have socketed memory or a strange bus width - more than likely it'll use PoP memory, which won't come in exotic sizes.

Somewhere someone is going to make an X86 handheld out of them though - and they're going to completely mess it up like the OQO.
An x86 handheld with what, exactly? Gaming controls? Perhaps, although the one Oak Trail device announced was never released, and the segment is gaining a bit more traction with the advent of some gaming tablets and Shield. With a keyboard? Probably, since it's happened already. With gaming controls and a keyboard? I'm still waiting for someone outside OPT to make one using anything, let alone x86. I wouldn't hold my breath waiting for someone else to do it.

If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
I would say yes; however, I do think it loses a bit of Pandora-ness by not being able to run Pandora programs out of the box.
 
If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
I would say yes; however, I do think it loses a bit of Pandora-ness by not being able to run Pandora programs out of the box.
Brings up a funny thought.  If an X86 Pandora 2 were to happen, one of the first things that would need to be built is a Pandora I ARM emulator.  I find that amusing.

Switching to any SoC from where we're at now is going to require some level of re-tweaking of the Pandora's tweaked code.  The x86 case would likely be at the most extreme end of that spectrum, requiring the most rework.

Is there any software that we currently have for the Pandora that would not be able to be re-compiled for the X86?

I think there is a fair case for x86 software with non-public source code that doesn't exist for ARM (Steam Linux games, NWN, etc.).

So - more work to implement.  A layer of difficulty to run existing PNDs.  Opens up software and sources that simply aren't available to ARM.

The whole question is murky.
 
Some of our finest emulators (PCSX ReARMed, DraStic, gpSP) and some libraries (SDL, ALSA) are specifically optimized using ARM NEON asm code. I assume that there's usually a fallback C implementation for all the assembly bits, and in some cases (e.g. SDL) there is x86-optimized asm code already available, but in general, it could be a lot of work to get those emulators running as efficiently on x86. Especially when the emulator is doing dynamic recompilation, a complete rewrite may be needed since things may have to be done in a completely different way if the target architecture is that different. On the other hand it will be much easier to port existing x86 emulators (assuming they're not optimizing those for x86-64). No doubt Exophase can comment more on this.
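As a rough illustration of that pattern (a hypothetical function, not actual code from SDL, PCSX ReARMed, or anything else mentioned here): the hand-written NEON path only exists on ARM, with a portable C loop as the fallback, so on x86 you either live with the C version or write an SSE equivalent.

    #include <stdint.h>
    #include <stddef.h>
    #if defined(__ARM_NEON__)
    #include <arm_neon.h>
    #endif

    /* Saturating mix of two 16-bit audio buffers - the kind of small
     * inner loop that gets hand-optimized per architecture. */
    void mix_s16(int16_t *dst, const int16_t *src, size_t n)
    {
        size_t i = 0;
    #if defined(__ARM_NEON__)
        /* NEON path: 8 samples per iteration, saturating add. */
        for (; i + 8 <= n; i += 8) {
            int16x8_t a = vld1q_s16(dst + i);
            int16x8_t b = vld1q_s16(src + i);
            vst1q_s16(dst + i, vqaddq_s16(a, b));
        }
    #endif
        /* Portable C fallback (also handles the tail on the NEON path). */
        for (; i < n; i++) {
            int32_t s = (int32_t)dst[i] + src[i];
            if (s > 32767) s = 32767;
            if (s < -32768) s = -32768;
            dst[i] = (int16_t)s;
        }
    }

Recompiling for x86 "just works" through the C path; matching the NEON speed means rewriting that block with SSE intrinsics, and a dynamic recompiler usually has to be rewritten wholesale rather than tweaked.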

Anyway, I don't care that much about the architecture, as long as it has good performance, good performance per watt, and decent Linux support. I don't care about support for closed-source stuff.

In my opinion, the Snapdragon 800 is currently the best candidate SoC.
 
Brings up a funny thought.  If an X86 Pandora 2 were to happen, one of the first things that would need to be built is a Pandora I ARM emulator.  I find that amusing.
That sounds like a bad idea. In few cases would that be worth it performance-wise, and it'd be a terrible crutch for compatibility.. don't give people something that works but is slow/more battery draining from day one. IMO the goal to make it use no more power while running the same kind of software trumps the ability to run the same binaries.

Switching to any SoC from where we're at now is going to require some level of re-tweaking of the Pandora's tweaked code.  The x86 case would likely be at the most extreme end of that spectrum, requiring the most rework.
Not really. Any newer ARMv7a processor will run the same code. Some of us have optimized it somewhat for Cortex-A8 specifically but that sort of optimization will tend to run well on better ARM processors.

Is there any software that we currently have for the Pandora that would not be able to be re-compiled for the X86?
Everything I've done has C code but it's less efficient. DraStic has an x86 recompiler too but it's not as good as the ARM one.

In my opinion, the Snapdragon 800 is currently the best candidate SoC.
That may well be true, but it doesn't matter if Qualcomm won't sell it to you and all indications are that there's no way they'll do low volume sales.
 
If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
Yes, but even if the battery life stays high an arch change does have significant impact on the software side. I for one don't regard the work that's been done as simply expendable in exchange for what may be a sea of x86 alternatives.
 
If the device is a Pandora in every other way that counts, is it still a Pandora if there is an X86 inside?
Yes, but even if the battery life stays high an arch change does have significant impact on the software side. I for one don't regard the work that's been done as simply expendable in exchange for what may be a sea of x86 alternatives.
That's all debatable; I still hold that many of the packages can be built to run on x86 without much problem.
 
Well, according to one of TI's white papers http://www.ti.com/pdfs/wtbu/ti_mid_whitepaper.pdf (page 6, notes at the bottom), the OMAP 3 is designed to max out at 0.75W, with sleep power of around 7mW. My perspective is that so far, in that regime, all Intel has had to offer are single-core, in-order processors at around 1GHz. Setting up a dual-core, out-of-order design with everything else being equal at 22nm would be impressive.

Doing that while delivering core-for-core, clock-for-clock performance that beats the Cortex A15 at 28nm (Phoronix's D525 vs. Cortex A15 tests were heavily in the Cortex A15's favor in those terms), and managing better performance per watt than an A15 paired with a Cortex A7, is a stretch. We'll see once Silvermont actually launches, but Haswell ended up over its indicated TDP budget and Intel is being even more dodgy.

There's also the issue that an Intel solution will tend to cost more, and not offer as much flexibility in selecting what suits a project like this versus the ARM sphere. Given that the only upside to going with Intel would be a larger Wine compatibility library, and even Neverwinter Nights 1 is liable to be something of a stretch given where the clock rate would likely land, I'm not sure I see the value.

Now, yes, you can go with active cooling like the Shield and do away with all notions of pocketability, battery life, and just about everything else that drew people to this project, but I fail to see the point, particularly as that would make a product that's already having issues with being too expensive even more so.
 
Well, according to one of TI's white papers http://www.ti.com/pdfs/wtbu/ti_mid_whitepaper.pdf (page 6, notes at the bottom), the OMAP 3 is designed to max out at 0.75W, with sleep power of around 7mW. My perspective is that so far, in that regime, all Intel has had to offer are single-core, in-order processors at around 1GHz. Setting up a dual-core, out-of-order design with everything else being equal at 22nm would be impressive.

Doing that while delivering core-for-core, clock-for-clock performance that beats the Cortex A15 at 28nm (Phoronix's D525 vs. Cortex A15 tests were heavily in the Cortex A15's favor in those terms), and managing better performance per watt than an A15 paired with a Cortex A7, is a stretch. We'll see once Silvermont actually launches, but Haswell ended up over its indicated TDP budget and Intel is being even more dodgy.

There's also the issue that an Intel solution will tend to cost more, and not offer as much flexibility in selecting what suits a project like this versus the ARM sphere. Given that the only upside to going with Intel would be a larger Wine compatibility library, and even Neverwinter Nights 1 is liable to be something of a stretch given where the clock rate would likely land, I'm not sure I see the value.

Now, yes, you can go with active cooling like the Shield and do away with all notions of pocketability, battery life, and just about everything else that drew people to this project, but I fail to see the point, particularly as that would make a product that's already having issues with being too expensive even more so.
x86 offers much, much greater software potential; the only people who can take advantage of the open-source software out there are people who understand Linux and who can manage to cross-compile for ARM... The battery life will be compromised with any of the available options, be it Snapdragon, Exynos, i.MX6, Atom, [trollface :p ] Temash, or some relatively cheap Chinese SoCs.

If one wants 2008 battery life, you have to stick with close to 2008 performance. Also, while battery tech hasn't improved much, I believe good design can alleviate the shorter battery life... maybe less weight and volume for the case and more for the battery (although that's easier said than done).

An example of this is the iPhone 5. The A6 SoC is pretty beefy: dual Swift cores and a tri-core PowerVR GPU. It has a 5.5Wh battery and it takes ~2 hours to drain it, so with 3x the capacity, matching the OP at ~15Wh, it would only take 6 hours to drain, not close to the 8-10 hours the OMAP3 gets.
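Sanity-checking that with the numbers above (taking them at face value):

    average draw ≈ 5.5 Wh / 2 h ≈ 2.75 W
    runtime on a 15 Wh pack ≈ 15 Wh / 2.75 W ≈ 5.5 h

so call it 5-6 hours.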

Also, the x86 solution might cost more, but the high price of the Pandora isn't necessarily due to the components... so even a 100-300% markup on the SoC won't drive prices up that much if the production costs aren't too high... I would assume.
 
^ x86 is really boring for us software nerds; got to take that into account.
 
...the battery life will be compromised with any of the available options...
Are we talking max power consumption here, or power per unit of computation?

My impression is that modern ARM SoCs can use a comparable amount of energy to our OMAP3 when doing the same job. I'm more worried by idling power than running-flat-out power.

The faster it runs, the sooner it can go back to sleep (or something).
 
...the battery life will be compromised with any of the available options...
Are we talking max power consumption here, or power per unit of computation?

My impression is that modern ARM SoCs can use a comparable amount of energy to our OMAP3 when doing the same job. I'm more worried by idling power than running-flat-out power.

The faster it runs, the sooner it can go back to sleep (or something).
Yes, but that would mean running the chip below its rated speeds (undervolting/underclocking), which any chip can do really... As for idling, Atom chips have ARM-like idle states, while Temash has a higher active idle.

Running maxed out, like I assume a heavy emulator would run the CPU, it would start eating the battery much faster than the OMAP3 any way you slice it.
 
monstercameron:

Your goals are contradictory.  Switching to x86 means all existing OS images and precompiled binary packages are completely and utterly incompatible.  New bounties would need to be issued, accepted, and awarded simply to begin to restore the software library.

If one wants 2008 battery life, you have to stick with close to 2008 performance.
To quote ARMH's website regarding the Cortex A7:

"A single Cortex-A7 processor can deliver 5x energy-efficiency, 50% greater performance and is one fifth the size of the ARM Cortex-A8 processor, which powers many of today's most popular smartphones."

Given that most of the Pandora's workload is well within the capabilities of the existing processor and a Cortex A7, with its low idle states while a larger core is gated off, your statement is poorly founded.  For that matter, my understanding is that most of the current testing indicates there isn't really a processing-power problem so much as a code-quality problem, within the regime we're talking about at least, which brings us back to the issue of binary package compatibility and the bounties you're arguing should be thrown away.

What emulated systems are you thinking would be picked up beyond the current selection that'd justify having an associated BIG core running full out?  Note: assume that the emulator only uses what it needs, rather than trying to go for hundreds of frames per second.

As for price, the Shield represents something the Pandora will probably never be able to do: a relatively high power point for its time. Even then, people were not happy with the 350 USD price point, and nVidia is apparently intending to reduce it to 300 USD as a result.  Related to that, the vendor and support cost differences are quite significant.
 