In any case... to my knowledge SteamOS is just a vanilla Debian distribution with a custom installer for Steam/Linux. BayTrail/CherryTrail with a fitting EFI / coreboot implementation shouldn't have a problem running it.
Well, Loongson got a limited x86 instruction translation unit (it supports ~200 instructions, including x87, MMX, SSE1 and SSE2) that is supposed to offer about 70% of native performance when used with QEMU. However, from the little I've heard about it, it wasn't really that awesome.

Exactly what I meant.

But size matters quite a bit in most cases.

Size of what? What are you on about this time?

In portable devices, which can't afford a "dual CPU" even if they ever need it.
Only if you could explain yourself better the first time you post things.
Yeah, I find that I didn't talk to people so much, and got so used to the "RAW" format there in my mind... I started to forget languages.
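To make the Loongson approach above concrete: such a translation unit does in hardware roughly what QEMU does in software, namely decode guest instructions and run equivalent host operations. Below is a deliberately tiny sketch of that idea in C; the two-instruction "guest ISA", its encoding, and the host helpers are all invented for illustration and bear no relation to Loongson's or QEMU's actual mechanisms.

```c
/* Toy binary translation: a made-up 3-byte guest encoding is decoded
 * once into host function pointers, then executed natively. Real
 * translators (QEMU, Loongson's hardware assist) are far more complex. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct { int32_t regs[4]; } Cpu;

/* Host operations standing in for "translated" guest instructions. */
static void op_addi(Cpu *c, uint8_t r, int8_t imm)  { c->regs[r] += imm; }
static void op_print(Cpu *c, uint8_t r, int8_t imm) { (void)imm; printf("r%u = %d\n", r, c->regs[r]); }

typedef void (*HostOp)(Cpu *, uint8_t, int8_t);
typedef struct { HostOp fn; uint8_t reg; int8_t imm; } Translated;

int main(void) {
    /* Guest program, 3 bytes per instruction: opcode, register, immediate. */
    const uint8_t guest[] = { 0x01, 0, 40,    /* ADDI  r0, 40 */
                              0x01, 0, 2,     /* ADDI  r0, 2  */
                              0x02, 0, 0 };   /* PRINT r0     */
    Translated cache[16];
    size_t n = 0;

    /* Translation pass: decode each guest instruction exactly once. */
    for (size_t pc = 0; pc + 3 <= sizeof guest; pc += 3, n++) {
        cache[n].fn  = (guest[pc] == 0x01) ? op_addi : op_print;
        cache[n].reg = guest[pc + 1];
        cache[n].imm = (int8_t)guest[pc + 2];
    }

    /* Execution pass: the cached host operations now run at native speed. */
    Cpu cpu = {{0}};
    for (size_t i = 0; i < n; i++)
        cache[i].fn(&cpu, cache[i].reg, cache[i].imm);
    return 0;
}
```

The "70% of native" figure quoted above is about how much of this work a hardware assist can skip; a pure software loop like this one typically lands well below that.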
> In portable devices, which can't afford a "dual CPU" even if they ever need it.

Most SoCs these days have two CPUs, many (especially the latest) have four, some even have eight. For all the junk in a SoC, the CPU probably takes up very little space on the die. I am 100% certain that adding some alternate processor to produce a heterogeneous system would add negligible space.
I think you are mistaking dual-core, quad-core and eight-core SoCs for having 2, 4, and 8 CPUs. Cores and CPUs are not the same thing at all, and an 8-core processor does not take much more space than a single-core processor. It's not like putting 8 CPUs on the die.
That's true, fair enough, multi-core is not the same as multi-CPU. I shouldn't have drawn that comparison, sorry.

With that in mind, even ignoring multi-core, SoCs are still multi-CPU. Even the Pandora technically has 4 different "CPUs" in it, although only two of them are general purpose, so they may not all count. The OMAP5 has two obviously distinct user-accessible CPUs; what if the M4s were "simply" replaced by an x86? Obviously not simple, but you get my meaning. It would take up more space on the die, as an x86 is more complex than an M4, but I'm not sure the size difference would be substantial enough to be a killer argument against it, at least not compared to the alternative of designing an ARM CPU with a pile of x86 "emulation" instructions.

Unlikely that TI can be convinced to put an x86-compatible CPU IP (which they don't have ready, for which they don't have a license, and for which there is hardly any use case except emulation) into their OMAP line. But you could stuff 2 different SoCs on the board. Without any coherency you need a full complement of memory and flash for the second one. Some home-grown interconnect would have to be devised; possibly from here: http://cdn.opencores.org/downloads/soc_bus_comparison.pdf (an APB bus implementation could be possible, page 9).
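To put a little flesh on the "home-grown interconnect" idea from the post above: the simplest link two loosely coupled chips can share is a memory-mapped mailbox. The sketch below shows the polling handshake such a link needs; the base address, register layout, and bit meanings are entirely hypothetical, not any real APB peripheral.

```c
/* Hypothetical mailbox between two SoCs over a simple register window
 * (e.g. something APB-attached). Addresses and layout are invented. */
#include <stdint.h>

#define MBOX_BASE   0x4A000000u            /* hypothetical window address */
#define MBOX_STATUS (MBOX_BASE + 0x0u)     /* bit 0: 1 = empty, 0 = full  */
#define MBOX_DATA   (MBOX_BASE + 0x4u)     /* 32-bit payload register     */

static inline uint32_t reg_read(uintptr_t a)              { return *(volatile uint32_t *)a; }
static inline void     reg_write(uintptr_t a, uint32_t v) { *(volatile uint32_t *)a = v; }

/* Sender side: post one word to the peer SoC. */
void mbox_send(uint32_t word)
{
    while ((reg_read(MBOX_STATUS) & 1u) == 0)
        ;                                  /* spin until the peer drained it */
    reg_write(MBOX_DATA, word);            /* payload first...               */
    reg_write(MBOX_STATUS, 0u);            /* ...then mark the mailbox full  */
}

/* Receiver side (running on the other SoC): fetch one word. */
uint32_t mbox_recv(void)
{
    while ((reg_read(MBOX_STATUS) & 1u) == 1u)
        ;                                  /* spin until something arrives   */
    uint32_t word = reg_read(MBOX_DATA);
    reg_write(MBOX_STATUS, 1u);            /* mark empty so sender may post  */
    return word;
}
```

Without cache coherency (the situation described above), every such access would also need uncached mappings or the appropriate barriers; that is exactly the cost of bolting two SoCs together instead of integrating one.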
> Unlikely that TI can be convinced to put an x86-compatible CPU IP into their OMAP line.

But it's somehow more likely to convince them to design an entirely new CPU based on ARM that also has x86 emulation instructions?
> Adding some alternate processor to produce a heterogeneous system would add negligible space.

About the space: I meant that placing 2 dies, and especially the power circuitry etc. to make them work, will take MUCH space. So all of this should be integrated into a single SoC.
The mad PHP machine says I can't like any more posts today, so I came here to say it to you.
x86 isn't that hard to develop. The x86 patents are over 20 years old and already invalid in most countries, so licensing? (It can still run 16-bit code natively, and supports only 1 MB of RAM at start... I wanna run away to MIPS.)
For offloading emulation, an FPGA would be the most logical choice, as it's generally reconfigurable to whatever needs your current emulator has.
> So all of this should be integrated into a single SoC.

Undoubtedly it does take space. It still might be possible, however: the flip side of the board might still have enough room, and if the second SoC is unused it could be kept in deep sleep, using next to no current, or even be powered off entirely, so it wouldn't affect general runtime.
> x86 emulation is a HEAVY deal if they don't wanna lose the entire ARM server market to AMD.

To date this market is pretty small, but it will probably grow very quickly if the newly released ARM Opterons are power-efficient enough to justify a switch. 25 W TDP for an octa-core A57 sounds promising. And no, there's no need for any emulation at all: Windows won't make the switch, and Linux server software will simply be recompiled.
Offloading x86 to an FPGA is nice, but it doesn't feel like enough. A proper facility to execute both x86 and ARM code at the same time should be implemented. This may give us a "multi-arch" CPU in the near future: a CPU which practically executes both instruction sets natively, even if it isn't 100% optimized for the second one.
> ZetaNeta got +9000 to memories about what Transmeta was doing...

Transmeta was far ahead of its time. Too bad they started out in a market tightly controlled by Intel, which left very little room for innovative products. They might have done better to cooperate with IBM and integrate their custom IP into IBM's PowerPC lines.
> But it's somehow more likely to convince them to design an entirely new CPU based on ARM that also has x86 emulation instructions?

I don't believe they even considered such compatibility. Not my proposal anyway.
I actually tested the Crusoe modules at IBM way back when; that was when I ran the testers instead of writing the test program code for them like I do today. Transmeta never could get the speed they wanted out of the parts, but yes, they were ahead of their time.
I don't believe that either. This entire conversation is within the context that ZetaNeta thinks CPU architects should add x86 emulation instructions to their ARM processors. My counter is that that is stupid, and that if they were to do anything, they'd be better off including an x86 CPU to produce a fully heterogeneous SoC.
Yeah... that sounds like a solution, but remember that the Pandora's PCB didn't allow many components on it.
When powered on, it's likely that the offloading removes stress from the ARM, so its current draw lessens. But if both are at max load, the draw on the battery could be dangerously high, that's right... I don't know how to deal with that.
> A proper facility to execute both x86 and ARM code at the same time should be implemented.
I agree that that would cover everything major that's necessary, as in running the best from both worlds and leveraging both mobile (Android) and desktop (PC) software. However, an FPGA could still serve as a remedy for those custom chips that are awful to emulate, like the RSP in the N64 (remember things like the Factor 5 games), or possibly for cycle-exact audio/blitter emulation of the Amiga. It could also be a first venture into a deeper look at currently unemulatable systems like the PS2.
I still wonder, btw, if the looming 64-bit support in upcoming ARM/Intel chips could benefit emulation as well... the RSP, for instance, was a 64-bit part.
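For context on why "cycle exact" is so expensive in software (and why an FPGA is tempting): the emulator has to advance every simulated chip one clock at a time so their bus interactions line up exactly. Here is a toy version of such a main loop, with stand-in components that are not a real Amiga or N64 model:

```c
/* Toy cycle-stepped main loop: each component advances one clock per
 * iteration, so cross-component timing stays exact. The components are
 * placeholders; a real machine model has many more, each far costlier. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint64_t ticks; } Cpu;
typedef struct { uint64_t ticks; } Blitter;

static void cpu_step(Cpu *c)         { c->ticks++; /* one CPU clock       */ }
static void blitter_step(Blitter *b) { b->ticks++; /* move at most 1 word */ }

int main(void)
{
    enum { CYCLES_PER_FRAME = 100000 };   /* made-up frame length */
    Cpu cpu = {0};
    Blitter blit = {0};

    /* One frame: a blitter write on cycle N is visible to the CPU on
     * cycle N+1, just as on real hardware; no batching allowed. */
    for (uint32_t cycle = 0; cycle < CYCLES_PER_FRAME; ++cycle) {
        cpu_step(&cpu);
        blitter_step(&blit);
    }
    printf("ran %llu cycles\n", (unsigned long long)cpu.ticks);
    return 0;
}
```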
> My counter is that if they were to do anything, they'd be better off including an x86 CPU to produce a fully heterogeneous SoC.

What made you think that our conversation is strictly about emulating? x86 in a SoC would be good, and no one said anything bad about it.
Because first you said:

> A SoC with hardware-assisted virtualization is preferred. (x86 of course)

To which I tried to correct you that virtualization has nothing to do with x86. You then said:

> But even if it is emulation, hardware-assisted emulation of x86 on ARM still seems like a useful feature.

To which I said it is more likely (and probably better overall) to get a heterogeneous SoC with both an ARM and an x86 on it than it is to re-engineer the ARM processor to somehow have x86 emulation instructions. You disagreed, for whatever reasons, something about size restrictions, I guess. Somehow sticking an entire secondary SoC on the board requires less space than just an extra CPU in the existing SoC? I don't understand your arguments at all.
> I don't understand your arguments at all.

Nothing here but post count inflation.
For me, re-engineering the ARM and just putting an x86 next to it are not really far away from each other. I said that putting another CPU on the board separately would be bad.
For a sec I thought that you had quoted that from another post...
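As an aside on the virtualization-versus-emulation point argued above: hardware-assisted virtualization (VT-x/AMD-V, exposed on Linux through the real /dev/kvm interface) only runs guests of the host's own architecture, so x86-on-ARM remains emulation regardless. A minimal sketch that does nothing more than probe KVM for availability:

```c
/* Probe the Linux KVM API (hardware-assisted virtualization). This only
 * checks availability; actually booting a guest takes many more ioctls. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) {
        perror("open /dev/kvm");      /* no kvm module or no hw support */
        return 1;
    }
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    /* Any guest created from here would execute the HOST ISA natively;
     * KVM never translates between architectures. */
    close(kvm);
    return 0;
}
```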
A feature that is easy to implement and "sounds good" in the marketing... we are just wondering why no one has made it.

I don't think proprietary stuff is going away quickly, but its influence has lessened quite a bit in the last few years, e.g. more free and less unfree software being used by people. That said, I think the need for emulation/virtualization is huge. Look at the WP mess... Android/WP dual boots exist just to give WP users some bits of the Android software base (which is, of course, huge) :Z
x86 is pretty much an oldtimer, but I don't think its success factor was being good. I dare to suggest that the inventions of the BIOS, Baby AT, ATX, IBM compatibles & USB mattered much more, and x86 was just good enough. Modular and interchangeable concepts (like Google's Project Ara) seem to be the way to go: they bring different vendors and manufacturers to the same table and start cooperation and competition between them.
Specifically on x86: it still shines when it comes to code density. I haven't had the chance to take a deeper look at the x32 ABI (mostly because I don't know of any adopters as of yet), but it seems promising.
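The x32 point in one concrete example: x32 keeps the full x86-64 register set and instructions but shrinks pointers (and long) to 32 bits, which is where the density win comes from. GCC's -m64 and -mx32 flags are real; the -mx32 build does require an x32-capable toolchain and kernel:

```c
/* Shows what the x32 ABI changes: pointer and long width.
 *   gcc -m64  sizes.c && ./a.out   ->  sizeof(void*)=8, sizeof(long)=8
 *   gcc -mx32 sizes.c && ./a.out   ->  sizeof(void*)=4, sizeof(long)=4
 */
#include <stdio.h>

int main(void)
{
    printf("sizeof(void*)=%zu, sizeof(long)=%zu\n",
           sizeof(void *), sizeof(long));
    return 0;
}
```

Smaller pointers mean denser data structures and better cache usage, the same kind of advantage x86's variable-length encoding gives on the instruction side.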
Because it wasn't necessary as the defacto standard is and was "Android" which everything was benchmarked for in the ARM world. With the current surge into the server business more stuff is necessary like SBSA https://lwn.net/Articles/584123/ standardization is much more needed than the opportunistic "runs android... everything is fine" (cudoz who finds any parallelism in the past PC market) but having interchangable parts.A feature is easy to implement, and "sounds good" in the marketing... We are just wondering why no one had made it?
> The Pyra can't possibly run any of the big, complex 3D games, even if they were ported to ARM Linux, so it's not really that interesting for many game devs.

What about the Shield? That can stream, and it's a handheld device. There must be SOME market for it.
And a handheld for streaming probably doesn't need a high-end SoC. Plus, Valve's streaming thing is called "in-home streaming" (i.e. streaming within a LAN), and you don't really need a handheld (with a huge battery) if you're staying indoors anyway. And if you try to stream over a WAN, the latency will be too high, unless you have really good internet – which is usually not the case if you're using public hotspots or 3G...
Overall, while I do think that Valve may find the Pyra an interesting idea, I don't think they have any use for it.