Able to upgrade not just the speed but also the manufacturer of the CPU?


Because we are talking about using a RISC-V core inside a GPU? Moreover, Nvidia has spent a lot of money on producing ARM-compatible cores.
 
Ask yourself: what criteria would Nvidia want it to satisfy, and how does the architectural philosophy fit into them? Saving money on license fees doesn't mean you'll automagically make a profit with it; it has to be competitive as well. The whole "keep it simple and avoid over-architecting" approach makes it useful as a one-size-fits-all base for custom solutions, but it's far from ready to be used as a general-purpose solution as-is. It'll take tons of working hours to turn it into something you can actually sell, not to mention that you'll have to optimize it for your target manufacturing process first, whereas ARM will provide all sorts of pre-made stuff if you buy a license.

Nvidia simply had a few hard-to-match criteria for their Falcon successor: for example, it had to be tiny (<0.1 mm² on 16nm FinFET) but still able to handle the full 64-bit address range.
 
Well, I'm not sure how much can be gleaned from promises in a blog post, but:
http://www.adapteva.com/andreas-blog/why-i-will-be-using-the-risc-v-in-my-next-chip/

This is one of the guys behind the Parallella boards, so maybe one of their new SBCs will be RISC-V...

But I can also imagine it is fairly difficult to build a chip, even if you have an open-source architecture, so it may be a while.
I really support the idea behind RISC-V and I can't wait to get my hands on a CPU, but the way I see it the major problem is the GPU. Please correct me if I'm wrong, because in this case I'd love to be, but I don't think any current RISC-V chip has graphics support.

It's going to be hard to design a competitive GPU from scratch without major development backing. Perhaps the involved manufacturers will adopt one of the open GPU architectures and improve on it. The only other option I see is to interface with an existing proprietary design.

This is why I only see RISC-V as a solution for non-graphical systems for the moment.
 

The actual shader cluster is a relatively small part of an efficient graphics pipeline; stuff like texture sampling, geometry processing, rasterisation etc. all works *much* better as fixed-function hardware (or semi-programmable hardware, certainly more 'fixed' than a general-purpose CPU core running code that implements the same thing).

Plus, having a large number of independent CPU cores all running the 'same' shader is a big waste of resources, which is why most graphics architectures group a set of shader units into a wave/cluster (or whatever name they give their implementation) that shares things like the instruction stream, as that's again *much* more efficient for such workloads.

So while you could make a decent shader ISA based on the RISC-V ISA, to make it anywhere close to efficient you'd probably end up with a hardware design that shares little with the 'general-purpose CPU' design.

When companies like NVidia say they're likely going to use a RISC-V core, I assume they mean as a secondary 'control' microprocessor, not as the actual unit running graphics shaders.
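
To make the wave/cluster point a bit more concrete, here's a toy sketch in Python (the lane width and the 'shader' are invented for illustration; this is obviously not how real hardware is built). The only point is that one shared instruction stream amortises the fetch/decode cost over many lanes, which a pile of independent CPU cores can't do.

```python
# Toy illustration of why GPUs group shader lanes into waves:
# one instruction stream is fetched/decoded once and applied to many lanes,
# instead of every lane paying the fetch/decode cost independently.

WAVE_WIDTH = 32  # hypothetical number of lanes sharing an instruction stream

def shade(pixel):
    """A stand-in 'shader': same program, different data per lane."""
    r, g, b = pixel
    return (min(255, int(r * 1.2)), g, b)

def run_independent_cores(pixels):
    # Each 'core' fetches and decodes the program for its own pixel.
    fetch_decode_cost = len(pixels)                    # one fetch/decode per pixel
    return [shade(p) for p in pixels], fetch_decode_cost

def run_simt_wave(pixels):
    # One fetch/decode per wave of WAVE_WIDTH lanes executing in lockstep.
    fetch_decode_cost = -(-len(pixels) // WAVE_WIDTH)  # ceiling division
    return [shade(p) for p in pixels], fetch_decode_cost

pixels = [(100, 50, 25)] * 1024
_, cost_cpu = run_independent_cores(pixels)
_, cost_gpu = run_simt_wave(pixels)
print(f"fetch/decode operations: independent cores = {cost_cpu}, SIMT waves = {cost_gpu}")
# -> 1024 vs 32: the shared instruction stream amortises the front-end cost.
```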
 
I honestly don't see where you're going with this. Are you trying to counter or support my argument that RISC-V needs to be paired with a suitable GPU to make a competitive portable desktop SoC? Also, I had no idea NVIDIA had considered RISC-V, thanks for the info.
 
You probably got that one wrong. They only use it for their Falcon successor, and that's what he was getting at; reread the last few pages.
No, I meant I didn't know NVIDIA had considered RISC-V for anything. At the time of writing I was pretty tired, so I didn't really care and postponed checking where and how until I had gotten some sleep. But thanks for clarifying anyway.
 
I know there are a lot of folks in this community who despise the thought of an x86 processor and using Windows, but I think an x86 offering would be a fantastic upgrade option for the Pyra. To my knowledge there has not been a UMPC-type device that has even attempted to be scalable like the Pyra. I think it could be a large boon for the Pyra as a whole and would instantly attract many new folks. Just look at the interest in the GPD Win... it is also a niche device, and its campaign seems to have been successful.
 
Unfortunately, current x86 SoCs have various problems when it comes to battery life and overheating, and despite all the marketing promises from those manufacturers in recent years, this hasn't changed.
 
Yes, I'll be interested to see reviews of the GPD Win. It's a shame they had to remove a speaker and add a fan, which might preclude the option of straightforwardly swapping a Pyra ARM board for a Pyra x86 one, but as long as it's still loud enough with only one speaker, doesn't cook itself, and doesn't drain its battery excessively, I'll be impressed.
 
I honestly don't see where you're going with this. Are you trying to counter or support my argument that RISC-V needs to be paired with a suitable GPU to make a competitive portable desktop SoC? Also, I had no idea NVIDIA had considered RISC-V, thanks for the info.

I think I internally conflated someone saying 'NVidia using RISC-V' (without the caveat of 'just for their onboard firmware microcontroller') with 'someone should make a RISC-V chip with a GPU', and ended up with 'someone should make a GPU out of the RISC-V project', instead of it just being a separate OSS GPU unit in a SoC with a RISC-V CPU.
 
Well, at one point there was some intrigue about using the Parallella board's Epiphany coprocessor as a GPU, but that's probably naive about how much work it would be...
 
What about whatever CPU the Asus Zenfone 2 has? My boyfriend has one and it just gets warm.
 
Too much emphasis gets placed on how much heat a processor can generate at maximum performance.

A more appropriate measurement might be, "How much work can the processor do when constrained to our thermal envelope?"

If one of the Intel Atoms (as an example) were held within the same thermal envelope that the Pyra uses for the OMAP5, how much more 'work' (insert benchmark here) could it get done?

In a handheld device, computing efficiency (battery and heat per performance unit) becomes very important. The thermal envelope for the Pyra's SoC daughter board is pretty much fixed at this point. Any replacement SoC board would need to be constrained to that thermal profile. How it performs within that constraint is far more important than how it performs on a lab bench with a monster heat sink on it.
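
To put rough numbers on that framing (all figures made up, just to show the shape of the argument): sustained throughput inside a fixed enclosure is basically work-per-joule times the watts the case can dissipate, so efficiency is the whole game.

```python
# Toy model: sustained performance inside a fixed thermal envelope is roughly
# efficiency (work per joule) times the power the case can dissipate.
# All numbers below are invented for illustration, not benchmarks.

THERMAL_ENVELOPE_W = 3.0  # hypothetical sustained power budget of the SoC board

chips = {
    "chip_A (older node, less efficient)": 1.0,   # relative work per joule
    "chip_B (newer node, more efficient)": 2.5,   # relative work per joule
}

for name, work_per_joule in chips.items():
    sustained_perf = work_per_joule * THERMAL_ENVELOPE_W
    print(f"{name}: relative sustained throughput = {sustained_perf:.1f}")

# Peak clocks on a lab bench with a big heatsink never enter into it;
# only perf-per-watt inside the fixed power budget matters for the handheld.
```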
 
Unfortunately current x86 SoCs have various problems when it comes to battery life and overheating and in spite of all the marketing promises of those manufacturers in the last years, this hasn't changed.

No argument from me here. The x86 architecture is not power-efficient, which has the unfortunate side effect of heat, which in turn makes the chips even less efficient, and so on (a vicious cycle). Regardless, there are many x86 chips that can be passively cooled if they are not running in "turbo mode" all the time. Unfortunately, many folks expect these chips to run at those higher rates all the time, especially while playing games. This is also evident in the GPD Win case and is one of the reasons they had to add active cooling... otherwise the chip would automatically throttle itself back to non-turbo speeds when it gets too hot. The other options to mitigate heat issues would be to disable turbo mode (no one really wants to do this) or just let the chip auto-throttle itself (which will appear to the user as a sudden and 'random' system slowdown, which it is).

I can only hope that a future x86 chip will be more power efficient. Until then, an x86 option would need to be in the potato-spec range to stay passively cooled.
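
For anyone wondering what that auto-throttling looks like in practice, here's a crude simulation (the thresholds, clocks and thermal model are all invented, purely for illustration): the chip sprints at turbo, hits its temperature limit, and drops back until it cools off, which is exactly the 'sudden and random' slowdown the user sees.

```python
# Crude simulation of turbo boost vs. thermal throttling.
# Thresholds, clocks and the thermal behaviour are made up for illustration.

TURBO_MHZ, BASE_MHZ = 2400, 1600
THROTTLE_TEMP_C, RESUME_TEMP_C = 90, 75

temp_c = 45.0
freq_mhz = TURBO_MHZ

for second in range(20):
    # Heat up while running at turbo clocks, cool down at base clocks.
    temp_c += 6.0 if freq_mhz == TURBO_MHZ else -3.0
    if freq_mhz == TURBO_MHZ and temp_c >= THROTTLE_TEMP_C:
        freq_mhz = BASE_MHZ          # the "sudden and random" slowdown
    elif freq_mhz == BASE_MHZ and temp_c <= RESUME_TEMP_C:
        freq_mhz = TURBO_MHZ         # turbo resumes once the chip has cooled off
    print(f"t={second:2d}s  temp={temp_c:5.1f}C  clock={freq_mhz} MHz")
```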
 
Too much emphasis gets placed on how much heat a processor can generate at maximum performance.

A more appropriate measurement might be, "How much work can the processor do when constrained to our thermal envelope?"

If one of the Intel Atoms (as an example) were held within the same thermal envelope that the Pyra uses for the OMAP5, how much more 'work' (insert benchmark here) could it get done?

In a handheld device, computing efficiency (battery and heat per performance unit) becomes very important. The thermal envelope for the Pyra's SoC daughter board is pretty much fixed at this point. Any replacement SoC board would need to be constrained to that thermal profile. How it performs within that constraint is far more important than how it performs on a lab bench with a monster heat sink on it.

This is an important point and I'm glad to see it being addressed. The relationship between performance and power consumption isn't obvious beyond "more performance needs more power". Some mobile SoCs in the last few years have gone to extremes to provide their highest few clock-speed settings, needing to bump up the voltage tremendously. Power scales roughly with the square of that voltage, so it can get pretty dire.
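
To put a number on that: dynamic power scales roughly as C·V²·f, and because the top clock bins also need a voltage bump, the last few hundred MHz get disproportionately expensive. A quick back-of-the-envelope (the operating points below are invented, not taken from any SoC datasheet):

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
# The operating points below are invented to show the shape of the curve,
# not taken from any real SoC datasheet.

operating_points = [
    # (frequency in MHz, core voltage in V)
    (1000, 0.90),
    (1500, 1.00),
    (2000, 1.25),   # top bin needs a big voltage bump
]

base_f, base_v = operating_points[0]
for f, v in operating_points:
    rel_power = (v / base_v) ** 2 * (f / base_f)
    rel_perf = f / base_f
    print(f"{f} MHz @ {v:.2f} V: {rel_perf:.1f}x perf for {rel_power:.2f}x power")

# The 2x-performance point costs ~3.9x the power: perf/watt collapses at the top.
```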

I'd love to see some deep dive power consumption vs performance comparisons between Pyra and GPD Win. If not a performance dive at least clock speed vs power consumption (listing voltage would also be nice). The Win has come under some heat (no pun intended...) for needing to move to active cooling, but I'd be really amazed if its Airmont CPU cores were not significantly more power efficient than Pyra's Cortex-A15s. They were made on a much more advanced manufacturing node, they were more optimized for mobile applications, they have extensive hardware power management, and they have a more conservative design. The x86 part is a negative but probably not a very big one.

The GPU may be a different story, but it's hard to draw comparisons when the Pyra's GPU is so far behind. Its SGX543MP2 is similar to what the iPad 2 had. The Atom x7-8750 scores about 33 FPS in GFXBench 2.7 T-Rex offscreen (http://www.notebookcheck.net/Intel-HD-Graphics-Cherry-Trail-Benchmarks.140902.0.html), while the iPad 2 scores only 3.4 FPS (https://gfxbench.com/compare.jsp?benchmark=gfx40&did1=2623&os1=iOS&api1=gl&hwtype1=GPU&hwname1=Imagination+Technologies+PowerVR+SGX+543MP2+(dual+core)). Even if the Pyra's GPU can clock higher, I doubt it'll change the difference by that much, especially if it's saddled with less optimized drivers than Apple's. So I have a good feeling that if you restrained the x7-8750's GPU to clocks low enough to match the Pyra's performance, it would no longer need active cooling.
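
Just to show how big that gap is even under generous assumptions (the 2x clock uplift for the Pyra's GPU below is a pure guess on my part, only there for illustration):

```python
# Rough comparison of the quoted GFXBench T-Rex offscreen scores.
# The assumed Pyra GPU clock uplift is a guess purely for illustration.

atom_x7_fps = 33.0   # Atom x7-8750 (Cherry Trail HD Graphics), as quoted above
ipad2_fps = 3.4      # SGX543MP2 in the iPad 2, as quoted above

assumed_pyra_clock_uplift = 2.0   # hypothetical: Pyra clocks its SGX543MP2 2x higher
pyra_estimate_fps = ipad2_fps * assumed_pyra_clock_uplift

print(f"Atom x7 vs iPad 2: {atom_x7_fps / ipad2_fps:.1f}x")
print(f"Atom x7 vs 2x-clocked SGX543MP2 estimate: {atom_x7_fps / pyra_estimate_fps:.1f}x")

# Even granting a generous 2x clock advantage, the gap is still around 5x,
# so the Cherry Trail GPU could be clocked well down and still match it.
```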
 
So I have a good feeling that if you restrained the x7-8750's GPU to clocks low enough to match the Pyra's performance, it would no longer need active cooling.

I would be more interested to see how much more it could do if it were clocked to match the Pyra's thermal envelope. Two ways to consider the same perf/watt question.

I agree that, watt for watt, the x7-8750 should run circles around the OMAP5, based on nothing else than the 14nm process the 8750 is built on. The GPD Win has set the expectation that the SoC should run flat out continuously, regardless of the compromises required to get there.

We're likely 2-3+ years out from an upgrade board for the yet-to-be-released Pyra. Looking at alternate SoC options today only leads to pre-emptive buyer's remorse over a hypothetical board that doesn't exist. In 2-3 years there will undoubtedly be something faster and cooler to consider than anything available today.
 
Clearly the current Pyra needs to get out there and at least turn enough profit to fund the next iteration. I am hopeful that the modular design will allow the system to be upgraded at a smaller cost and with a shorter development time. I still think an x86 option in the future would be a good choice that would open many new possibilities for the Pyra. I know this option would not come anytime soon. I am hopeful for more power-efficient chips, as that is the direction technology is headed.
 
The Win was going to use an 8500, which likely wouldn't have needed the active cooling. Whatever the 8350 equivalent is in 2-3 years would be a good candidate for an x86 CPU upgrade. At that point there will hopefully be a good RISC-based SoC to use instead.
 