Overview of SoC options


Could you explain a little more as to why, in your opinion, a passively cooled SoC would work fine in a tablet/phone but not in a handheld mini computer?
Shouldn't it be the other way around? A "processor in the base" handles heat accumulation better than a "processor under the screen". Intel has stated that the only reason they are introducing SDP for their Y-series chips is that the screen heats up the SoC in tablets. In other words, tablets are more prone to overheating than clamshell computers. However, SDP is just Intel's standard; most ARM chip providers are still trying to beef up their maximum performance (Tegra 4 as a prime example, with all those cores and its ugliness) without paying much attention to efficiency. It's hard to say at this point whether SDP Intel SoCs are going to be any good.

My main concern is backwards compatibility. If we use that TI chip, the P2 would be a solid product right out of the box, so it would be favoured by journalists a bit more. On the other hand, if we ditch backwards compatibility, we have to start all over. That being said, the OMAP 5's donkey-slow LPDDR2 is bothersome. It doesn't look nice on paper.

Don't be that harsh on Notaz. Intel's so-called marginal advantage has yet to be proven; even if it's 20% better, the Pandora community is able to keep such a dated machine going strong. Jumping on that bandwagon may not be so cool after all.
 
That being said, the OMAP 5's donkey-slow LPDDR2 is bothersome. It doesn't look nice on paper.
Still easily 4 times faster than what's in your current Pandora. Sounds enticing, IMHO. But please don't spend that speed increase just on double-buffering a screen with 4 times more pixels to blit. That would put us right back where we are...
 
...

Don't be that harsh on Notaz

...
Not sure what you mean by this; perhaps something has been 'lost in translation'. My reply to Notaz was in no way intended to be harsh, quite the opposite in fact. @ Notaz: Please accept my apologies if my reply came across as harsh to you.
 
Not sure what you mean by this; perhaps something has been 'lost in translation'. My reply to Notaz was in no way intended to be harsh, quite the opposite in fact. @ Notaz: Please accept my apologies if my reply came across as harsh to you.
I did misunderstand you. However, you suggested that people are leaning towards ARM mostly due to Notaz's zealotry for ARM; that sounds like an accusation.

You were trying to communicate too much information at once, without separating what the facts are, what your view on the subject is, and what is wishful thinking.

A better way to put your thoughts on paper would be: the talents of Notaz, Exophase and others are best used on an ARM platform; even if Intel's SoCs were somehow better, our community could more than make up the difference. Notaz should not feel obligated to us; instead, he should keep doing what he enjoys the most, and what he's best at.
 
...

However, you suggested that people are leaning towards ARM mostly due to Notaz's zealotry for ARM

...
I do not see the stated fact that Notaz prefers working with ARM over x86 as an accusation of zealotry in any way, shape, or form. It was not my intention to imply this. As clearly stated, what someone does with their free time is of nobody's concern but their own.
 
I do not see the stated fact that Notaz prefers working with ARM over x86 as an accusation of zealotry in any way, shape, or form. It was not my intention to imply this. As clearly stated, what someone does with their free time is of nobody's concern but their own.
I'm not trying to prove you wrong or anything like that. But the way you stated it feels more like "we can do this without you" than "spending your free time on us is deeply appreciated, but I don't want you to feel dragged down by us". It all sounds very negative and has a sarcastic undertone.

I get your point, but you could phrase it better. Notaz is tough as nails, unlike me; most likely he would not mind it. My apologies if my comments left a bad taste in your mouth; I did not mean them that way. :(   So forgive me.
 
No worries xiongxioi, no offence taken whatsoever. I'm just a little worried that an obviously intelligent person such as yourself could so badly misinterpret what I wrote. I assure you there was no sarcasm whatsoever; everything was meant to be interpreted exactly as it was written.

Guess we need to hear from the man himself though :)

@ Shenmue, apologies for the derail, good call. I'm just rather mortified at the thought that I was in any way dissing Notaz and his quite simply amazing contributions to the Pandora project.
 
Good thread. It would be interesting to also list power consumption data (at least whatever is known).

I have no problem with a TI chip TODAY. But if the P2 is going to be powered by the OMAP 5, ED should hurry to get it to the streets, or the machine will be outdated very soon (just as happened with the Pandora due to the huge delays).

Too bad we have no access to the Snapdragon 800; it would make a killer P2.
 
No worries xiongxioi, no offence taken whatsoever.
Loon, I'm so glad we have such a great community. A half-eaten cookie for you.

I wonder, is there any way to increase the Pandora's exposure?

If the situation goes on like this, we will have to stay with either small niche chips, products from a newcomer that may or may not know what it's doing, or severely dated junk. It would be really good to have more options.

Sure, we could say goodbye to Linux and go full Android or Windows, but that would be total suicide.

Can anyone give a reasonable analysis of the drawbacks of not having a FOSS GPU?

It would also be appreciated if ED could come out and give us some sort of time frame for starting the P2. (Should we wait until all the Craigix madness has blown over? Or does ED need more money to kickstart it?)
 
Can anyone give a reasonable analysis of the drawbacks of not having a FOSS GPU?
The main drawbacks of not having a GPU with FOSS drivers are:

  • if there are bugs in the driver, they typically cannot be fixed (so you'd better hope that the closed blob is bug-free)
  • you typically only get support for one specific version of X and Linux, so we may have to keep using old versions of other software just to keep the GPU driver working
  • it's hard to use the hardware to its full potential when there's a closed black-box blob between you and the hardware
  • if anyone would like to improve the driver (e.g. to make it faster or to add features), that is impossible and/or illegal with a closed-source driver
  • the device becomes less attractive to FOSS sympathisers and FOSS purists - which is IMHO an important segment (maybe the most important segment) of our current and potential future community
 
Of the options listed, I think Bay Trail is a great possibility.  It's as if Intel looked at the Pandora and built the perfect chip for a successor, just to see if we noticed.  They gave us everything we ever wanted in an SoC (bleeding edge, solid graphics, FOSS, 64-bit, low power consumption, high performance per watt, cheap to buy at $37 each) except that it is x86.  Could they move us off of ARM?

No, I don't think that Intel cared a bit about the Pandora or its community.  But wow - they pretty much nailed the need.
...  Perfect, huh?  Let's analyze this claim.

The density of aluminum is 2.7 grams per cubic centimeter.  A single cubic centimeter's worth of aluminum would actually represent a rather significant heat spreader for a compact, passive system like we're talking about.

The specific heat of aluminum is 0.91 J/(g·°C).  If we assume an initial temperature of a comfortable 24 °C and a final temperature of a uniform, blistering 70 °C, that requires 113 joules per cubic centimeter of aluminum.  Ignoring cooling, it would take 57 seconds at Intel's 2 W SDP to heat the heat spreader to this temperature.
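For anyone who wants to sanity-check the arithmetic, here's the same calculation as a quick Python sketch (all values are the ones quoted above, nothing else assumed):

Code:
# Energy needed to warm 1 cc of aluminum from 24 degC to 70 degC,
# and the time to deposit that energy at Intel's 2 W SDP.
density = 2.7          # g/cm^3 (aluminum)
specific_heat = 0.91   # J/(g*degC) (aluminum)
volume = 1.0           # cm^3
delta_t = 70.0 - 24.0  # degC

energy_j = density * volume * specific_heat * delta_t
seconds = energy_j / 2.0  # at 2 W SDP
print(f"{energy_j:.0f} J, {seconds:.0f} s")  # -> 113 J, 57 s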

So let's account for cooling, making some generous assumptions.  Let's assume that this cubic centimeter of aluminum is set up as a thin, flat 5 cm square, with only the outward surface radiating heat in a meaningful way, and that it radiates heat perfectly as a black body.  Let's also ignore the efficiency loss due to re-radiation from the environment for the moment.  Under these unrealistically idealized conditions, the heat spreader would stabilize at around 72 °C with 2 watts.
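That ~72 °C figure falls straight out of the Stefan-Boltzmann law; a minimal sketch under the same idealized assumptions (one perfectly black 5 cm × 5 cm face, no re-radiation from the environment):

Code:
# Radiative equilibrium temperature of an ideal black-body plate
# dumping 2 W from a single 5 cm x 5 cm face.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2*K^4)
area = 0.05 * 0.05  # m^2, outward face only
power = 2.0         # W

t_eq_k = (power / (SIGMA * area)) ** 0.25
print(f"{t_eq_k - 273.15:.0f} degC")  # -> ~72 degC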

In reality it would stabilize much higher, due to the local environment becoming a heat trap and causing massive efficiency loss to re-radiation; never mind that the heat sink isn't an idealized black body, and that the area is on the excessively improbable side for a heat spreader, as opposed to an actively convection-cooled finned heat sink à la the Shield.  Naturally, an enclosed and effectively sealed box doesn't lend itself to convection, and case conduction isn't something you want to bet on within the constraints of the normal thermal envelope, rather than at its extremes, as the SDP is not the TDP.

In short, the Z3770 should cook itself in around a minute in a Pandora form factor, and is almost, but not quite, perfectly unsuited.  Bay Trail-T is designed for tablets, not handhelds.  If you want to advocate Intel, the Merrifield platform is what you should be talking about - which is set to arrive in 2014, and thus is not something Intel is currently in a position to offer anyone.
You seem to have a handle on the thermodynamics involved. Please completely ignore the x86 vs ARM thing for the following:

Most of the SoC options being discussed run in a ~2 W TDP range.  This is a topic that has to be addressed regardless of the SoC chosen.

How would the above change if we were to

1.  Use the existing Pandora's overall dimensions as a starting point: 140.3 mm × 83.5 mm × 29.25 mm.

2.  Use two 24 ga (0.51 mm thick) sheets of aluminum as a heat spreader (they join to form one sheet at the radiator in this example).

3.  Recess it from the left, right and back edges by 2 mm.  The spreader itself is then 136 mm × 77 mm × 1 mm, for a volume of just over 10 cc.

4.  Bring it out the front, then fold it into a corrugated radiator recessed 15 mm in from both the left and right leading edges.  Front-facing radiator dimensions: 110 mm wide, 20 mm tall.  Corrugations 2 mm tall and 1 mm wide, with 1 mm spacing, yield 10 'fins' and a total metal-to-air surface of 10 × 2 × 2 (two-sided fins) + 9 (backing between the fins) = 49 mm of unfolded surface height.  At 110 mm width, that's 49 × 110 = 5,390 mm² of radiant metal to ambient air.

This part is a bit hard to explain.  Picture a rectangular sheet of 0.5 mm thick aluminum.  Corrugate the center.  Fold the outer edges back to back to form a single flat double thickness with a dual-connected radiator (the corrugations).

We can't assume the radiator is 100% efficient.  So, 75% maybe?

How many watts of system heat could that type of spreader with the front-facing finned horizontal radiator dissipate?
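To give this a starting point, here's a rough radiation-only sketch in Python of the geometry described above.  The 70 °C fin surface, 24 °C ambient and 75% derating are assumptions taken from this thread; radiation alone is strictly a lower bound, since natural convection across the fins would add to it:

Code:
# Fin area from the corrugation description in point 4 above.
fins, fin_h_mm, gap_mm, width_mm = 10, 2.0, 1.0, 110.0
unfolded_mm = fins * fin_h_mm * 2 + (fins - 1) * gap_mm  # 49 mm
area_m2 = unfolded_mm * width_mm * 1e-6                  # 5390 mm^2 -> m^2

# Radiation-only dissipation at an assumed 70 degC fin surface in
# 24 degC ambient, derated by the suggested 75% efficiency.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)
t_fin, t_amb = 70.0 + 273.15, 24.0 + 273.15
watts = 0.75 * SIGMA * area_m2 * (t_fin**4 - t_amb**4)
print(f"{watts:.1f} W")  # -> ~1.4 W radiated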
 
Can anyone give a reasonable analysis of the drawbacks of not having a FOSS GPU?
 
The main drawbacks of not having a GPU with FOSS drivers are:

  • if there are bugs in the driver, they typically cannot be fixed (so you'd better hope that the closed blob is bug-free)
  • it's hard to use the hardware to its full potential when there's a closed black-box blob between you and the hardware
  • if anyone would like to improve the driver (e.g. to make it faster or to add features), that is impossible and/or illegal with a closed-source driver
I don't really agree with that. Those drivers are extremely complicated, so complicated that you can't really fix anything there. You also won't have the documentation, or what you have will be incomplete. Even for our current SGX, the open kernel part is not insignificant: they have MMU, power, and queue management code there, which would be enough to fix a good chunk of the problems the SGX drivers on the Pandora currently have. Yet this is not going to happen; it's just too difficult to make any sense of that code.


The only real benefits are the ones I've left out of the above quote, that is, compatibility with future Linux graphics stacks, and pleasing FOSS lovers with the illusion that they have code they can actually do something with (other than compiling it).
 
Can anyone give a reasonable analysis of the drawbacks of not having a FOSS GPU?
 
The main drawbacks of not having a GPU with FOSS drivers are:

  • if there are bugs in the driver, they typically cannot be fixed (so you'd better hope that the closed blob is bug-free)
  • it's hard to use the hardware to its full potential when there's a closed black-box blob between you and the hardware
  • if anyone would like to improve the driver (e.g. to make it faster or to add features), that is impossible and/or illegal with a closed-source driver
I don't really agree with that. Those drivers are extremely complicated, so complicated that you can't really fix anything there. You also won't have the documentation, or what you have will be incomplete. Even for our current SGX, the open kernel part is not insignificant: they have MMU, power, and queue management code there, which would be enough to fix a good chunk of the problems the SGX drivers on the Pandora currently have. Yet this is not going to happen; it's just too difficult to make any sense of that code.


The only real benefits are the ones I've left out of the above quote, that is, compatibility with future Linux graphics stacks, and pleasing FOSS lovers with the illusion that they have code they can actually do something with (other than compiling it).
Are all GPU drivers so complicated that nobody can hope to understand them anyway? Or does it depend? I would assume that well-designed GPUs with well-documented drivers are something completely different from poorly designed GPUs with badly coded drivers, even if that bad code is made available.

Anyway, the future-proofing argument is the important one to me.
 
Very nice thread :)

I think we are forgetting the MediaTek CPUs (I don't know the specs...)
 
I think Intel's drivers actually have pretty good support. The documentation is fairly extensive: https://01.org/linuxgraphics/documentation/driver-documentation-prms And the drivers do get commits from people who don't work at Intel; you can get an idea from the e-mail addresses: http://cgit.freedesktop.org/mesa/mesa/log/ None of the last 10 people to commit have Intel e-mail addresses. One of them is from AMD, though!
That gave me pause for another consideration...

Since the Intel part (Z3770) is just one SoC of a larger processor family with a common graphics core and driver base, we're no longer talking about the Pandora community being the only ones running full Linux on the SoC family we're using.

Wouldn't it then be part of a larger ecosystem of Linux-hacked Bay Trail tablets, Merrifield phones, and Silvermont-based desktop computers, all leveraging what is essentially the same Intel HD Graphics FOSS driver code base?
 
That gave me pause for another consideration...

Since the Intel part (Z3770) is just one SoC of a larger processor family with a common graphics core and driver base, we're no longer talking about the Pandora community being the only ones running full Linux on the SoC family we're using.

Wouldn't it then be part of a larger ecosystem of Linux-hacked Bay Trail tablets, Merrifield phones, and Silvermont-based desktop computers, all leveraging what is essentially the same Intel HD Graphics FOSS driver code base?
What you're saying is only kind of sort of true. If you look at the last generation of 32 nm Atom parts using the Saltwell CPU core, you got distinctly different dies with CedarTrail, Medfield, CloverTrail, and CloverTrail+. Only one of those, CedarTrail, was ever intended to be used in actual PCs. The others are different enough that you can't expect support for one to automatically translate into good support for another. And I hesitate to even conclude that CedarTrail got good Linux support - it was released when netbook sales were in steep decline (in fact, Intel even said they postponed its release due to lack of interest), so the number of people interested in running Linux on it wouldn't have been that great. The GPU was probably never really supported, but that was an IMG one.

When it comes to Linux support of this stuff, it's not the CPU core that really matters but all the peripherals and such. And with different dies, those things change at least somewhat. A collection of hacked devices doesn't constitute an "ecosystem", and I don't know why you'd think there'll be people running Linux on Bay Trail tablets or Merrifield phones.

In the case of BayTrail, chances are good that the netbook, laptop, and desktop variants, called BayTrail-M and BayTrail-D, are different SKUs of the same BayTrail-T die that the Z3770 uses - unlike Merrifield, which will use a PowerVR Rogue GPU. And no matter what, BayTrail-T's GPU should be compatible with the Intel Gen 7 drivers used on Ivy Bridge and Haswell, because it's the same GPU architecture (as far as we know).
 
Mmm - I'm not looking at the previous generations of Atom processors at all.  I'm looking just within the Silvermont architecture.

http://en.wikipedia.org/wiki/Silvermont

All of the Silvermont SoCs except embedded and automotive parts have the same Intel HD Graphics (4 EU) built in.

Intel has split Silvermont into several product lines:

Merrifield for smartphones.

Bay Trail for tablets, netbooks and hybrid devices.

Avoton for micro servers and storage devices (NAS?).

They're listed as Atom, Pentium and Celeron parts, but aren't they all essentially different binnings and applications of the same 22 nm Silvermont SoC/CPU with the same Intel HD Graphics?

http://en.wikipedia.org/wiki/Intel_HD_Graphics

There are people installing and running Linux on nearly every computing device made in the last 10 years.  You can even put Ubuntu on a phone these days.  Why wouldn't there be people installing Linux on every device that a Silvermont-based SoC gets built into?

I must be over-simplifying it far too much.  I figured that "Silvermont with Intel HD Graphics" was the level the graphics drivers would be written for - not a device-by-device level.  Is it really that differentiated?
 
I used the old generation to illustrate my point and the precedent with Intel. What I'm trying to say is that Merrifield and BayTrail (and Avoton) are totally different chips; they're not just different binnings. There's no such thing as a Silvermont SoC. I don't think it's really known whether BayTrail-M and BayTrail-D are the same or not. You'd assume so based on the names, but by that reasoning you might think CloverTrail and CloverTrail+ are the same chip too, and they aren't.

You can chroot-jail something like Ubuntu on an Android phone, but that's not the same as running it natively like you would on a Pandora. In the old days, putting Linux on just about anything was easier, because devices weren't as complex and unknown. Do you actually know anyone who has put Linux on a Medfield phone natively? If not, don't assume it'll be true of every Silvermont device.

And yes, "Silvermont CPU with Intel HD Graphics" isn't true for every SoC with Silvermont in it. It's not true for Merrifield.
 