Any chance of talking to anyone at Valve about the Pyra?


In any case... to my knowledge SteamOS is just a vanilla Debian distribution with a custom installer for Steam for Linux. A Bay Trail/Cherry Trail device with a fitting EFI/coreboot implementation shouldn't have a problem running it.
 
But size matters quite a lot in most cases.
Size of what? What are you on about this time?
In portable devices, which can't afford a "dual CPU" even if they ever need it.

Well, Loongson has a limited x86 instruction translation unit (it supports ~200 instructions, including x87, MMX, SSE1 and SSE2) that is supposed to offer about 70% of native performance when used with QEMU. However, from the little I've heard about it, it wasn't really that awesome.
Exactly what I meant.
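For context, the overhead such a translation unit attacks is the per-instruction software dispatch that a pure emulator performs. A toy sketch in C, using an invented three-opcode ISA (nothing like Loongson's actual design), of the loop whose cost hardware assistance tries to remove:

```c
/* Toy fetch-decode-execute loop -- an invented opcode set, NOT
 * Loongson's actual mechanism. It only illustrates the per-guest-
 * instruction overhead (decode branch, operand fetch) that a hardware
 * translation unit or QEMU's dynamic translator tries to cut down. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT, OP_LOADI, OP_ADD };   /* hypothetical 3-op ISA */

static int64_t run(const uint8_t *code)
{
    int64_t reg[4] = {0};
    size_t pc = 0;

    for (;;) {
        switch (code[pc]) {           /* decode: a branch per instruction */
        case OP_LOADI:                /* LOADI reg, imm8 */
            reg[code[pc + 1]] = (int8_t)code[pc + 2];
            pc += 3;
            break;
        case OP_ADD:                  /* ADD reg_dst, reg_src */
            reg[code[pc + 1]] += reg[code[pc + 2]];
            pc += 3;
            break;
        default:                      /* OP_HALT */
            return reg[0];
        }
    }
}

int main(void)
{
    const uint8_t prog[] = { OP_LOADI, 0, 40,
                             OP_LOADI, 1, 2,
                             OP_ADD,   0, 1,
                             OP_HALT };
    printf("%lld\n", (long long)run(prog));   /* prints 42 */
    return 0;
}
```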
 
If only you could explain yourself better the first time you post things.
 
Yeah, I find that I don't speak to people so much, and I got so used to the "RAW" format I have there in my mind... I started to forget languages.
 
In portable devices, which cant afford "dual CPU" if they ever need it.
Most SoCs these days have two CPUs; many (especially the latest) have four, some even have eight. For all the junk in a SoC, the CPU probably takes up very little space on the die. I am 100% certain that adding some alternate processor to produce a heterogeneous system would add negligible space.
 
I think you are mistaking dual-core, quad-core and eight-core SoCs for having 2, 4 and 8 CPUs. Cores and CPUs are not the same thing at all, and an 8-core processor does not take much more space than a single-core processor. It's not like putting 8 CPUs on the die.
 
That's true, fair enough, multi-core is not the same as multi-CPU. I shouldn't have drawn that comparison, sorry.

With that in mind, even ignoring multi-core, SoCs are still multi-CPU. Even the Pandora technically has 4 different "CPUs" in it, although only two of them are general purpose, so the others may not count. The OMAP5 has two obviously distinct user-accessible CPUs; what if the M4s were "simply" replaced by an x86? Obviously not simple, but you get my meaning. It'd take up more space on the die, as an x86 is more complex than an M4, but I'm not sure the size difference would be substantial enough to serve as a killer argument against it, at least not compared to the alternative of designing an ARM CPU with many x86 "emulation" instructions.
 
It's unlikely that TI can be convinced to put an x86-compatible CPU IP (which they don't have ready, for which they don't have a license, and for which there is hardly any use case except emulation) into their OMAP line. But you could stuff two different SoCs onto the board. Without any coherency you need a full complement of memory and flash for the second one. Just some home-grown interconnect would need to be devised. Possibly from here: http://cdn.opencores.org/downloads/soc_bus_comparison.pdf (an APB bus implementation could be possible, page 9)

For offloading emulation, an FPGA would be the most logical choice, as it's generally reconfigurable to whatever needs your current emulator has.
 
It's unlikely that TI can be convinced to put an x86-compatible CPU IP
But it's somehow more likely to convince them to design an entirely new CPU based on ARM that also has x86 emulation instructions?
 
Most SoCs these days have two CPUs; many (especially the latest) have four, some even have eight. For all the junk in a SoC, the CPU probably takes up very little space on the die. I am 100% certain that adding some alternate processor to produce a heterogeneous system would add negligible space.
About the space: I meant that placing two dies, and especially the power circuitry etc. to make them work, will take up MUCH space. So all of this should be integrated into a single SoC.

That's true, fair enough, multi-core is not the same as multi-CPU. I shouldn't have drawn that comparison, sorry.

The mad PHP machine says I can't like any more posts today, so I came here to say it to you.
It's unlikely that TI can be convinced to put an x86-compatible CPU IP (which they don't have ready, for which they don't have a license, and for which there is hardly any use case except emulation) into their OMAP line. But you could stuff two different SoCs onto the board. Without any coherency you need a full complement of memory and flash for the second one. Just some home-grown interconnect would need to be devised. Possibly from here: http://cdn.opencores.org/downloads/soc_bus_comparison.pdf (an APB bus implementation could be possible, page 9)
For offloading emulation, an FPGA would be the most logical choice, as it's generally reconfigurable to whatever needs your current emulator has.
x86 isn't that hard to develop. The x86 patents are over 20 years old and already invalid in most countries. Licensing? (It can still run 16-bit code natively, and supports only 1 MB of RAM at start-up... I want to run away to MIPS :p )

x86 emulation is a HEAVY deal if they don't want to lose the entire ARM server market to AMD.

And no, no one expects it to be embedded into a line meant for portable devices.

But when it comes to heavy gaming, emulation of games, or servers, HW-assisted x86 emulation is a serious advantage.

Also, it may help developers port their heavy applications without the usual pain in the ass. I wonder if one day we will see KolibriOS utilizing such a feature on ARM.

I gave my opinion on putting more than a single SoC in portable devices. It's just a no in devices of the Pyra's expected size.

But I notice that the Intel Galileo may give us some chances, though it's not always usable. Maybe you could place it in the Pyra, but not much else.

Offloading x86 to an FPGA is nice, but doesn't feel like enough. A proper facility to execute both x86 and ARM code at the same time should be implemented.

This may give us "multi-arch" in the near future: a CPU which practically executes both instruction sets natively, even if it isn't 100% optimized for the second one.

A cross-CPU... *to hear the part where I just go purely crazy about the idea, please insert a euro into the screen*

ZetaNeta got +9000 to memories of what Transmeta had been doing...

 
About the space: I meant that placing two dies, and especially the power circuitry etc. to make them work, will take up MUCH space. So all of this should be integrated into a single SoC.
Undoubtedly it does. It still might be possible, however. The flip side of the board might still have enough space, and if the second SoC is unused it could be kept in deep sleep, drawing next to no current, or even be unpowered entirely, so it wouldn't affect general runtime.

Powered on, it's likely that the offloading removes stress from the ARM, so its current draw lessens. If both are at max load, the current draw on the battery could be dangerously high, that's right... dunno how to deal with that.

x86 emulation is a HEAVY deal if they don't want to lose the entire ARM server market to AMD.
To date this market is pretty small, but it will probably grow very quickly if the newly released ARM Opterons are power-efficient enough to justify a switch. 25 W TDP for an octa-core A57 sounds promising :) And no, there is no need for any emulation at all: Windows won't make the switch, and Linux server software will simply be recompiled.
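As a rough illustration of that "just recompiled" claim (a generic sketch, not tied to any real package): portable C builds unchanged for ARM, and only ISA-specific corners like inline assembly need a guarded fallback:

```c
/* Sketch: the same source compiles for x86 and ARM alike; only the
 * optional x86 fast path is fenced off. Everything else is plain C. */
#include <stdint.h>
#include <stdio.h>

static inline uint32_t rotl32(uint32_t x, unsigned r)
{
#if defined(__x86_64__) || defined(__i386__)
    /* x86-only micro-optimization: the rol instruction. */
    __asm__("roll %%cl, %0" : "+r"(x) : "c"(r));
    return x;
#else
    /* Portable fallback, used when recompiling for ARM and others. */
    return (x << (r & 31)) | (x >> ((32 - r) & 31));
#endif
}

int main(void)
{
    printf("%08x\n", rotl32(0x12345678u, 8));   /* 34567812 on any arch */
    return 0;
}
```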

Offloading x86 to an FPGA is nice, but doesn't feel like enough. A proper facility to execute both x86 and ARM code at the same time should be implemented. This may give us "multi-arch" in the near future: a CPU which practically executes both instruction sets natively, even if it isn't 100% optimized for the second one.

I agree that that would cover everything major, as in running the best from both worlds and leveraging both mobile (Android) and desktop (PC) software. However, an FPGA could still be used as a remedy for those awful-to-emulate custom chips like the RSP in the N64 (remember things like the Factor 5 games), or possibly for cycle-exact audio/blitter emulation of the Amiga. It could also be a first venture into a deeper look at currently hard-to-emulate systems like the PS2.

I still wonder, btw, if the upcoming 64-bit support in ARM/Intel chips could benefit emulation as well... the N64's CPU, for instance, was a 64-bit part.

ZetaNeta got +9000 to memories of what Transmeta had been doing...
Transmeta was far ahead of its time. Too bad they started in a market tightly controlled by Intel, which left very little room for innovative products. They might have been better off cooperating with IBM and integrating their custom IP into the PowerPC lines.

But it's somehow more likely to convince them to design an entirely new CPU based on ARM that also has x86 emulation instructions?
I don't believe that they even considered such compatibility. Not my proposal anyway :)
 
I actually tested the Crusoe modules at IBM way back when. That was when I ran the testers instead of writing the test program code for them like I do today. Transmeta never could get the speed they wanted out of the parts, but yes, they were ahead of their time.
 
I don't believe that they even considered such compatibility. Not my proposal anyway
I don't believe that either. This entire conversation is entirely within the context that ZetaNeta thinks CPU architects should add x86 emulation instructions to their ARM processors. My counter is that that is stupid, that if they WERE to do anything they'd be better off including an x86 CPU to produce a fully heterogeneous SoC.
 
Yeah... sounds like a solution, but remember that the Pandora's PCB didn't allow many components on it.


And... who cares about the battery? Batteries are fun! I don't think the draw will reach some extreme level.


The market will grow, of course, but the thing is that some people will still benefit from having x86 run in such a way.


First: proprietary software. Proprietary software never changes.


Second: proprietary software. Proprietary software never changes, no matter what number you put in front of it.


One does not simply take Oracle and run it on ARM. One can hardly even get Oracle onto FreeBSD.


Did you also think about ARM workstations? Their market is growing much faster than Wingdows gets older. (Reference to xbill)


And having Wine utilize that feature would be quite a win.


And quite a lot of Linux games are proprietary: KAG, Steam, X-Plane, OGAT... Can you feel the pain of running them under QEMU?


x86 is an old zombie hooker... it won't even take a lifetime to get rid of it.


Yeah... Transmeta failed at choosing its way. Even Linus didn't help them... what a genius idea we lost for all those years?!?!



I don't believe that either. This entire conversation is entirely within the context that ZetaNeta thinks CPU architects should add x86 emulation instructions to their ARM processors. My counter is that that is stupid, that if they WERE to do anything they'd be better off including an x86 CPU to produce a fully heterogeneous SoC.
What made you think that our conversation is strictly about emulating? x86 in a SoC would be good, and no one said anything bad about it.


And I agree that this is how it should be done. My idea was just including x86 in ARM SoCs in general, no matter how it happens.
 
What made you think that our conversation is strictly about emulating?
Because first you said
A SoC with Hardware-assisted Visualization is preferred. (x86 of course)
To which I tried to correct you that virtualization has nothing to do with x86. You then said

But even if it is emulation, hardware-assisted emulation of x86 on ARM still seems like a useful feature.
To which I said it is more likely (and probably better overall) to get a heterogeneous SoC with both an ARM and an x86 on it than it is to re-engineer the ARM processor to somehow have x86 emulation instructions. You disagreed for whatever reasons, something about size restrictions I guess. Somehow sticking an entire secondary SoC on the board requires less space than just an extra CPU in the existing SoC, I guess? I don't understand your arguments at all.
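A side note on that virtualization point: hardware-assisted virtualization (KVM on Linux, for example) only runs guests of the host's own architecture, so it can never make an ARM chip execute x86. A minimal probe, assuming a Linux host that exposes /dev/kvm:

```c
/* Minimal KVM probe (Linux-only sketch; assumes /dev/kvm exists).
 * The point: KVM virtualizes the host's OWN architecture. On an ARM
 * host it runs ARM guests; it cannot conjure up x86 execution. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");  /* no virtualization support exposed */
        return 1;
    }

    /* The stable API version has been 12 for years on every architecture. */
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    close(kvm);
    return 0;
}
```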
 
I don't think proprietary stuff is going away quickly, but its influence has lessened quite a bit in the last few years, e.g. people using more free and less unfree software. That said, I think the need for emulation/virtualization is huge. Look at the WP mess... Android/WP dual boots just to give WP users some bits of the Android software base (which is of course huge) :Z

x86 is pretty much an oldtimer, but I don't think its success factor was being good. I dare to suggest that the inventions of the BIOS, Baby AT, ATX, IBM compatibles & USB mattered much more, and x86 was just good enough. Modular and interchangeable concepts (like Google's Ara) seem the way to go: they bring different vendors and manufacturers to the same table and start cooperation and competition between them.

Specifically on x86: it still shines when it comes to code density. I haven't had the chance to take a deeper look at the x32 arch (mostly because I don't know of any adopters as of yet), but it seems promising.
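For anyone curious about the x32 density point, a tiny probe makes it concrete (assuming a gcc or clang x86-64 toolchain and a kernel built with x32 support):

```c
/* ABI probe for the code-density point. With gcc on x86-64, compare:
 *   gcc -m64  probe.c   ->  8-byte pointers/longs (LP64)
 *   gcc -mx32 probe.c   ->  4-byte pointers/longs (ILP32), while the
 *                           generated code still uses 64-bit registers.
 * Assumes a toolchain and kernel built with x32 support. */
#include <stdio.h>

int main(void)
{
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    return 0;
}
```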
 
For me, re-engineering the ARM and just adding an x86 are not really far from each other. I said that putting another CPU on the board separately would be bad.

I don't understand your arguments at all.
Nothing here but post count inflation. 
For a sec I thought that you quoted that from another post...

The feature is easy to implement, and it "sounds good" in marketing... we are just wondering why no one has made it?
 
Because it wasn't necessary: the de facto standard is and was Android, which everything in the ARM world was benchmarked against. With the current surge into the server business, more is needed, like SBSA (https://lwn.net/Articles/584123/). Standardization, i.e. having interchangeable parts, is much more important than the opportunistic "runs Android... everything is fine" (kudos to whoever finds the parallel in the past PC market).

VIA tried some stuff... bringing their WonderMedia chip line (pretty low-end ARMs) into a sub-ATX form factor (dubbed Neo-ITX), but they didn't make a significant splash: they may have offered an easily recognized form factor, but still no basic "replaceable parts like a PC" idea. Most importantly, their own line was as customized as every other ARM vendor's, with no sign of a common denominator. (VIA christened that the APC, btw.)

So did we agree that a second SoC would be a viable solution to span more compatibility and/or to have the possibility to grow the software base as needed? Did we agree that modularization is a key virtue?
 
The Pyra can't possibly run any of the big, complex 3D games, even if they were ported to ARM Linux, so it's not really that interesting for many games/devs.

And a handheld for streaming probably doesn't need a high-end SoC. Plus, Valve's streaming thing is called "in-home streaming" (i.e. streaming within a LAN), and you don't really need a handheld (with a huge battery) if you're staying indoors anyway. And if you try to stream over a WAN, the latency will be too high, unless you have really good internet, which is usually not the case if you're using public hotspots or 3G...

Overall, while I do think that Valve may find the Pyra an interesting idea, I don't think that they have any use for it .—.
What about the Shield? That can stream, and it's a handheld device. There must be SOME market for it.
 