possible SoC updates.


Are there any devices available for purchase that use CT+ as the SOC?

I can't find any.

I'm also highly sceptical of Intel benchmarks & studies due to past sins but I'm not sure how any real judgements relating to battery life / CPU / GPU power can be made until CT+ equipped hardware is available for testing & independent review.
 
Lenovo K900 and Galaxy Tab 3 10.1 are the most prominent, along with a few other devices.
 
To the extent the ABI "study" did evaluate things, it has been reviewed and found to be nonsense, just like Microsoft's similar drive-by study on the costs of Munich's adoption of GNU/Linux. Even if "studies" that are shy about revealing their raw data and methods, and serve no purpose beyond drive-by journalism headlines, weren't garbage as a rule, there's still the issue of binning, which is part of why Intel's ULV processors cost so much.

Clovertrail+ covers the Z2520 (Turbo 1.2GHz), Z2560 (Turbo 1.6GHz), & Z2580 (Turbo 2GHz), which based on the specs likely plays out to the average, better, and best bins. Ergo, if there had been anything to the "study", it'd be more a statement on binning than anything else. Given Intel has had production issues with Saltwell Atom processors despite using a mature node to address this space, the situation is liable to be even more extreme given Intel's demands on the fab side of the business.

What we do know is that clock for clock and core for core the Cortex A15 in general simply crushes Saltwell. The best-case scenario would be to focus on the highs, where Silvermont might be able to swing decent perf/W with a race-to-deep-sleep strategy, like the one Haswell was designed around, to try to keep power in check. However, I seriously doubt they can beat the Cortex A7/53's optimizations in terms of idles and lows, and there's a good bit of room before you hit the mids, where the Cortex A15/57 is leaned on for race-to-execute. I really don't see Intel winning in terms of idles, lows, or mids, which is what dictates normal battery life. Highs, à la gaming battery life, might be more competitive with Silvermont, but let's be honest here: Intel's relative position against ARM has been getting worse, not better, and ARM is the one dictating the terms of the game.
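To make the race-to-sleep reasoning concrete, here's a minimal back-of-the-envelope sketch; all the wattages and durations are made-up illustrative numbers, not measurements of any of these chips:

```python
# Race-to-sleep in one line of arithmetic: energy = power * time.
# A faster core that burns more power but finishes sooner, then drops into a
# deep sleep state, can still win on total energy. All numbers are made up.

def task_energy(active_w, active_s, idle_w, window_s):
    """Energy (joules) over a fixed window: active burst plus idle remainder."""
    return active_w * active_s + idle_w * (window_s - active_s)

WINDOW_S = 1.0  # look at a 1-second window containing one burst of work

slow_core = task_energy(active_w=1.0, active_s=0.8, idle_w=0.05, window_s=WINDOW_S)
fast_core = task_energy(active_w=2.0, active_s=0.3, idle_w=0.05, window_s=WINDOW_S)

print(f"slow core: {slow_core:.3f} J, fast core: {fast_core:.3f} J")
# slow: 1.0*0.8 + 0.05*0.2 = 0.81 J; fast: 2.0*0.3 + 0.05*0.7 = 0.635 J
```

The point is only that energy is power times time over the whole window, so a burstier, faster core can come out ahead at the highs while still losing at idle.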

Remember that Intel became Intel as much as anything by becoming the "consumer" brand and ISA, and thus easy access to software. If you're talking about iOS (Objective-C/C++) and high-performance Android NDK games (C++), they're compiled to ARM binaries, much like a lot of important OpenPandora software. ARM has become the "consumer" brand of the tablet and smartphone market, with more chip movement than the PC "consumer" brand. The ARM ISA also has some niceties that lend themselves more to compiler optimizations of things like the superscalar engine, and you can't exactly run the normal Wintel application library effectively on those platforms.
 
Reading a few reviews for the Lenovo K900 and Galaxy Tab 3 10.1 makes me think that an x86 based SOC is certainly worthy of serious consideration for P2. Especially as it is proving very difficult to source a current gen ARM SOC for P2. Intel may of course have zero interest in providing SOCs for such a niche device, or they may be looking for anyone to fly the flag for x86 SOCs and would fall over backwards to provide help and support for a cool 'indie' project like P2.

It certainly seems to me that the benefits in terms of software availability / range of OS options of an x86 SOC would outweigh the loss of backwards compatibility with P1, others may of course feel very different about this.
 
OS options may be limited by the PVR GPU IP, but Bay Trail may be different.
Interesting, here I was thinking that Intel would offer decent driver support for Linux & Windows. I can't really see the logic of them not doing so, as from my understanding Clovertrail+ would be a valid SOC choice for a netbook.
 
They don't make the GPU in CloverTrail+ but license it from PowerVR. I doubt IMG would let them release an open source driver, especially if Intel played little role in developing it.

As for providing a binary blob driver for Linux I guess there may not be enough incentive to do so. AFAIK CT+ doesn't support Windows at all so that's completely out.
 
How about we design the P2 around the Tegra 3 and chances are the T4 won't be too different? 
 
The Tegra 3 is bad, and yes, the 4 isn't any better. Much like the 2 was lackluster, the 4 brings in problems of its own.

Being limited to battery power, with a finite-resolution screen, pretty much points the effort at something other than CPU/GPU performance. Pushing a limited set of pixels of live-rendered graphics is achievable today regardless. And you won't get the power to add any more emulation targets anyway.
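To put rough numbers on the "limited set of pixels" point, here's a quick sketch; the overdraw factor and the fill-rate budget are assumed placeholders, not specs of any particular GPU:

```python
# Rough fill-rate arithmetic: how many pixels per second a given screen
# actually demands versus an assumed GPU pixel budget. Illustrative only.

def pixels_per_second(width, height, fps=60, overdraw=2.5):
    """Pixels the GPU must shade per second, assuming a fixed overdraw factor."""
    return width * height * fps * overdraw

# Example screens (the Pandora's panel is 800x480; the rest are common sizes).
screens = {
    "Pandora-class 800x480": (800, 480),
    "720p phone": (1280, 720),
    "1080p tablet": (1920, 1080),
}

# Placeholder pixel budget for a mid-range mobile GPU, in pixels per second.
# This is an assumption for illustration, not a quoted spec.
ASSUMED_FILL_RATE = 1.0e9

for name, (w, h) in screens.items():
    need = pixels_per_second(w, h)
    share = need / ASSUMED_FILL_RATE
    print(f"{name}: ~{need / 1e6:.0f} Mpix/s needed ({share:.0%} of the assumed budget)")
```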

Free drivers are more important, because that will be paramount down the road. When a 15% difference is negligible, one will soon realize the real difference lies in freedom. Too many consoles were plenty powerful on hardware and still sucked because they were consoles with a console mindset. Let's be innovative and improve with software instead.

In that regard: es2gears on the Lima Mesa driver: http://www.youtube.com/watch?v=4WOILEYAxWE
 
FOSS drivers are definitely something I would like to have; however, I believe you underestimate the amount of extra computing power (and therefore the ability to reach better emulation targets and more powerful homebrew) we can get from stronger chips. I don't think we should sacrifice either blindly because one is more important than the other. Instead, when we actually know our options, we should weigh the pros and cons of what each one allows us (FOSS drivers obviously being a pro, just like more power is a pro). For example, when the Pandora was being developed, we had the option of the OMAP or an i.MX chip, and we chose the OMAP because it gave us the extra power which would open up the possibility of playable N64 emulation (which we got) and full-speed PSX emulation.

-God Ginrai
 
The above video shows the Exynos 5410, which uses a PowerVR GPU. The CPU is not really tested here, and I don't know to what extent RAM is tested. But I guess it's safe to assume that the GPU is stressed with this benchmark.

So now we know that the PowerVR SGX544MP3 uses about 2 W under full load, and about 150 mW when idle. We don't learn anything about the CPU from this video because it's not tested.

For comparison: the current Pandora uses about 100 mW for the entire unit when idle, and about 2 W for the entire unit under full load.

So in terms of power consumption, this GPU is eating too much for my taste. Also, it only has a binary blob driver. In terms of performance: as far as I can tell (and I'm no expert by far), Mali beats PowerVR. According to Wikipedia, the Mali-T628 MP6 in the Exynos 5420 does 115.2 GFLOPS, while the PowerVR SGX544MP3 can only do 51.8 GFLOPS. I have no idea about Mali's power consumption though.
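As a sanity check on those figures, here's a small sketch that turns them into GFLOPS-per-watt and rough battery-life numbers; the battery capacity is an assumed example value, and the Mali power draw is left unknown as noted above:

```python
# Perf-per-watt and rough battery-life arithmetic using the figures quoted above.
# The battery capacity is an assumed example value, not a P2 spec.

BATTERY_WH = 10.0  # assumed battery capacity in watt-hours (illustrative only)

gpus = {
    # name: (GFLOPS, full-load power in W, or None where unknown)
    "PowerVR SGX544MP3": (51.8, 2.0),
    "Mali-T628 MP6": (115.2, None),
}

for name, (gflops, watts) in gpus.items():
    if watts is None:
        print(f"{name}: {gflops} GFLOPS, power unknown -> perf/W not computable")
    else:
        print(f"{name}: {gflops} GFLOPS at {watts} W -> {gflops / watts:.1f} GFLOPS/W")

# Battery life at the power figures quoted above (GPU alone vs. whole Pandora).
for label, watts in [("SGX544MP3 at full load", 2.0),
                     ("SGX544MP3 idle", 0.15),
                     ("whole Pandora at full load", 2.0),
                     ("whole Pandora idle", 0.1)]:
    print(f"{label}: ~{BATTERY_WH / watts:.0f} h on a {BATTERY_WH} Wh battery")
```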
 
The quad Adreno 320 is the fastest GPU and the Exynos has the fastest CPU. It's not like the difference makes a world of difference. The only reason I didn't buy an XU board was the PowerVR graphics.
 
It's possible the GPU is being used for compositing, which would skew the comparison since this is driving what's most likely a much higher resolution display than Pandora's. You can see around 1:03 when he stops moving the mouse around that the GPU consumption drops to 0.08W and may have kept falling.
 
But they aren't measuring before the PSU; that's like a crank measurement for the engine rather than at the wheels if you are trying to figure out if the car is good. Obviously we are staring blindly at numbers here and not asking what the use case is. There are a few components left out. Also, they are calling it Voltage, Ampere, Power, when it should be Voltage, Current, Power.
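For illustration, here's a rough sketch of why measuring at a rail (P = V × I at that point) differs from what the battery actually supplies; the voltage, current, and regulator efficiency below are made-up placeholders, not measured values for this board:

```python
# Rail-side vs. battery-side power: P = V * I at the measurement point, but the
# battery also has to supply the regulator losses on top of that. The example
# voltage, current and efficiency below are made-up illustrative numbers.

def rail_power(voltage_v, current_a):
    """Power measured at the rail: P = V * I."""
    return voltage_v * current_a

def battery_power(rail_w, regulator_efficiency=0.85):
    """Power drawn from the battery, assuming a given regulator efficiency."""
    return rail_w / regulator_efficiency

p_rail = rail_power(1.0, 2.0)    # 2.0 W measured at a hypothetical 1.0 V rail
p_batt = battery_power(p_rail)   # ~2.35 W actually pulled from the battery

print(f"rail: {p_rail:.2f} W, battery: {p_batt:.2f} W "
      f"({p_batt - p_rail:.2f} W lost in regulation)")
```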

Also, you can bench any number up against another, and it doesn't tell the whole story. You have to test it, and this synthetic bench (that's probably optimized), without even showing numbers, tells me almost nothing. It shows what the TDP of the GPU might be, were it to run at full, given a semi-decent video presentation of graphics flowing on a screen...

The thermal envelope of most SoCs doesn't allow for continuous use of the CPU. If you tax all cores at once over time, you end up needing active cooling (which that system has), in turn drawing more power. The handheld equivalent would be the Nvidia Shield.

If you don't have hardware acceleration of video on the GPU, it doesn't matter whether you have a power-efficient CPU for the gaming you do; you will have used up the advantage in battery life on watching video.
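A rough illustration of that trade-off; every wattage and the battery capacity here are assumed placeholders rather than measurements of any specific SoC:

```python
# Rough energy budget: hardware vs. software video decode.
# Every wattage and the battery capacity below are illustrative assumptions.

BATTERY_WH = 10.0      # assumed battery capacity, watt-hours
HW_DECODE_W = 0.5      # assumed platform draw with a hardware decode block
SW_DECODE_W = 2.0      # assumed platform draw decoding the same video on the CPU
HOURS_VIDEO = 2.0      # hypothetical amount of video watched per charge

hw_energy = HW_DECODE_W * HOURS_VIDEO   # Wh spent with hardware decode
sw_energy = SW_DECODE_W * HOURS_VIDEO   # Wh spent with software decode
extra = sw_energy - hw_energy

print(f"hardware decode: {hw_energy:.1f} Wh, software decode: {sw_energy:.1f} Wh")
print(f"extra battery spent on video: {extra:.1f} Wh "
      f"({extra / BATTERY_WH:.0%} of the assumed battery)")
```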
 
If we wait long enough we can get an x86 SoC that can run ARM stuff as well and is more efficient in every respect. :p or we could wait until they make isolinear chips so we can store a few teraquads of emulators on the OP2.
 
No one's announced any plans to make a CPU that natively does both x86 and ARM. Intel would need a truly humiliating lead over their competitors to offer an x86 chip which emulates ARM better than others run it natively. Initial reports of Silvermont look impressive, but not that impressive. They could widen the gap some with 14nm, but I don't think it's going to be anything like that: they're going to need over 2x the peak single-threaded performance at the same power consumption to pull off what you're saying.
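To spell out the 2x arithmetic: if binary translation runs ARM code at some fraction of native speed, the x86 chip has to make up that factor in raw single-threaded performance at the same power. The overhead factor below is an illustrative assumption, not a benchmark result:

```python
# If emulating ARM on x86 runs translated code at 1/F of native speed, then
# matching a native ARM chip's single-threaded performance P needs native x86
# performance of at least F * P at the same power budget.
# F = 2.0 is an illustrative assumption, not a benchmark result.

def required_x86_perf(native_arm_perf, emulation_overhead=2.0):
    """Native x86 performance needed so emulated ARM code keeps up."""
    return native_arm_perf * emulation_overhead

arm_perf = 1.0  # normalise the ARM chip's single-threaded performance to 1.0
print(f"x86 perf needed: {required_x86_perf(arm_perf):.1f}x the ARM chip's, "
      "at the same power, just to break even under emulation")
```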
 