The Open Pandora 2 must use an AMD APU!


I used to think this too.. a long time ago, before I started programming emulators. I remember looking at a Wonderswan emulator and wondering "why did they emulate this x86 compatible CPU when they could have run the code directly?" In reality there are a lot of problems with trying to apply virtualization techniques to emulation, and that's why stuff like gpSP on ARM devices still uses a dynamic recompiler like it does on other devices. In fact, I don't think I know of any emulators that use virtualization techniques and can claim high compatibility. The closest I can think of are high-level "real-time" emulators like JPCSP, but they still at least rely on translated code, which gives them some added protection.


If you're talking about doing only CPU emulation with virtualization then you need to trap and emulate all hardware accesses. Right now very little is known about the low level operation of XBox hardware. There is an immense amount of RE that'd have to be done to even get off the ground, and it extends far beyond just the GPU. XBox really isn't the bog-standard PC hardware people make it out to be.


Lots of people have come and gone over the years to work on emulators like cxbx. If anything has held back XBox emulation it would be that from the start people were married to the WINE approach and have pursued it because it gave a few very nice results very early. It's a more extreme version of what happened with N64 emulation.
 
What about ginge then? It seems to do the HW virtualization approach quite well between ARM platforms.


But this might have been easier as the consoles it virtualizes are open source platforms and a lot is known about them.
 
ginge works well not because more is known about the platforms it's emulating but because it's starting from Linux. This gives you a lot of flexibility that you don't get starting with any old code, and it's a big factor in why user-mode emulation in qemu is so much faster than system-mode emulation. For instance, Linux apps generally don't care that much about where their code, data, stack, heap, etc. are allocated; they just care that the memory is valid and there's enough of it. So you're walled off a lot by the high-level information present in the executable and the syscalls to the OS itself (and shared libraries, potentially). The emulated code is always in user mode and is fairly safe.
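
To make that "wall" concrete, here's a rough sketch of what the boundary looks like - this is not code from qemu or ginge, the structure and names are just illustrative. Everything the emulated program wants from the outside world funnels through one syscall dispatcher, and guest pointers can be handed straight to the host kernel because guest memory is simply mmap'd into the emulator's own address space (assuming it all fits in the low 4GB):

```c
/* Rough sketch only -- not code from qemu or ginge.  The point: in user
 * mode emulation the only "hardware" you have to handle is the syscall
 * boundary; guest pointers are valid host pointers because guest memory
 * is simply mmap'd into the emulator's own address space. */
#include <stdint.h>
#include <unistd.h>

typedef struct {
    uint32_t r[16];             /* emulated ARM general-purpose registers */
} guest_cpu;

/* ARM EABI syscall numbers for the two calls handled below. */
#define GUEST_NR_exit  1
#define GUEST_NR_write 4

void do_guest_syscall(guest_cpu *cpu)
{
    switch (cpu->r[7]) {        /* ARM EABI passes the syscall number in r7 */
    case GUEST_NR_write:
        cpu->r[0] = (uint32_t)write((int)cpu->r[0],
                                    (const void *)(uintptr_t)cpu->r[1],
                                    cpu->r[2]);
        break;
    case GUEST_NR_exit:
        _exit((int)cpu->r[0]);
    default:
        cpu->r[0] = (uint32_t)-38;   /* -ENOSYS: syscall not handled yet */
    }
}
```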


ginge does do some hardware emulation but it's mostly pretty constrained; GP2X and Wiz stuff tends to write some hardware registers to set up video modes, then gets a pointer to a framebuffer. Nothing like feeding big command lists to graphics accelerators. And even then it's still gated by access to /dev/mem, so you only have to trap memory supplied by /dev/mem instead of trying to trap any old memory access. And the programs it has to worry about are more constrained.. there are at most a few dozen emulators worth supporting, and the games tend to be ported from other platforms.
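
As a toy illustration of that gate (a hypothetical sketch, not how ginge is actually put together): preload a wrapper so that when the program opens /dev/mem it gets an anonymous file instead, and the later mmap() of that fd can then be intercepted the same way and given PROT_NONE so register accesses fault into an emulation handler. Note memfd_create needs a reasonably recent kernel/glibc.

```c
/* Hypothetical LD_PRELOAD-style sketch of the /dev/mem "gate" -- not how
 * ginge actually works.  Build as a shared object and preload it; the
 * program thinks it opened physical memory but really got an anonymous
 * file whose mapping the emulator controls. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int open(const char *path, int flags, ...)
{
    int (*real_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    if (strcmp(path, "/dev/mem") == 0) {
        /* Hand back a plain anonymous file covering the register window
         * (the size here is just illustrative).  mmap() of this fd can
         * then be intercepted too and given PROT_NONE so every register
         * access faults into an emulation handler. */
        int fd = memfd_create("fake-dev-mem", 0);
        ftruncate(fd, 1 << 20);
        return fd;
    }

    mode_t mode = 0;
    if (flags & O_CREAT) {              /* the mode argument only exists here */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}
```

Something like gcc -shared -fPIC fakemem.c -o fakemem.so -ldl and LD_PRELOAD=./fakemem.so ./program would wire it in (names made up); a real emulator obviously has to do a lot more than this.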


Let's look at something like GBA, since it's fairly simple, and since this came up a lot before (remember GP Advance on GP32?). I've made big posts about this before, listing problems/limitations, but I don't mind taking a fresh stab at it:


1) You need to default your emulated memory to no-execute in order to determine what's code, so you can write-protect it. Otherwise you get cache coherency problems with code that's created or modified in RAM. This is a problem for older ARMs that don't have execute protection in their MMU; that's something that was only recently added. (There's a rough code sketch of points 1-3 after this list.)


2) You need to conservatively mark pages as write-protected if they contain code, and detect modification with SIGSEGV so you can clear caches. But you end up with false positives where data is modified in the same page as code. On systems with a lot of RAM this might not come up much, but with something like GBA, where there's 32KB of fast RAM, it probably happens all the time (with 4KB pages). Even if you do use SIGSEGVs to catch SMC in a dynarec you're still better off there than with virtualization, because the dynarec can track down to single-instruction granularity what's code and what's data. Therefore it can determine in the handler whether you really modified code, while a VM has to assume you did and therefore must flush that part of icache/dcache/etc (which requires a syscall on ARM = relatively expensive). On the dynarec side, if a write is hitting code pages a lot you can patch it, but on the VM you have more limited ability to do this (more on this later).


3) You can run into problems providing some of the emulated address space. At least some of the emulated code will be working with physical addresses that have to be at fixed locations (and if the emulated machine has/uses virtual addresses, those could be anywhere as well, depending on how the OS sets them up). Some addresses are going to simply be unavailable in your emulated space because the OS needs to put the kernel, stack, heap, code, shared libraries, etc. there. You can mitigate this a bit by messing with the program's link script, but there are some things you can't do, or that could require using a different OS kernel/kernel options.


4) You have to put the VM code somewhere, preferably where the emulated game CAN'T see it. Pulling off the latter is tricky unless you leverage hardware virtualization (which not everything does, does Medfield even have it?), because if the VM is running in user mode you can't simultaneously protect it and make it runnable.


5) You could have problems patching code to try to work around performance issues.. for instance, on ARM, branches have a +/-32MB range limit. So let's say you did find a good hole in the emulated address space to put your VM code in - if you want to put stubs there that patched code calls, that hole needs to be within +/-32MB of the code you're patching. That's why you have less ability to patch code than you do with recompiled code, where you have full control over where you put that code and the stubs. (A small encoding sketch for this follows the list too.)


6) You need enough hardware virtualization to trap things like returns from interrupts so you can emulate interrupt handlers, and the game needs to think it's in system mode when it isn't really (so you need to trap access to various registers).


7) You have to emulate the game real-time which has several of its own limitations, risks, and implementation headaches that I won't get into just now.
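
To make 1-3 a bit more concrete, here's a bare-bones sketch of the mechanism. The addresses, the 4KB page assumption, the invalidate_code() stub and the use of MAP_FIXED_NOREPLACE (which needs a fairly new kernel) are all just illustrative:

```c
/* Bare-bones sketch of points 1-3, not production emulator code. */
#define _GNU_SOURCE
#include <signal.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define GUEST_IWRAM_BASE ((void *)0x03000000u)   /* GBA's 32KB of fast RAM */
#define GUEST_IWRAM_SIZE 0x8000
#define PAGE_SIZE_4K     0x1000

/* Stub: a real emulator would throw away translations and flush the
 * instruction cache for this page here (e.g. __builtin___clear_cache on
 * ARM, which costs a syscall). */
static void invalidate_code(void *page) { (void)page; }

static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(PAGE_SIZE_4K - 1));

    /* A write hit a page we had marked read-only because it holds code.
     * At page granularity we can't tell whether code or just nearby data
     * changed (point 2), so we must assume code changed: invalidate, then
     * make the page writable again so the faulting store can retry. */
    invalidate_code(page);
    mprotect(page, PAGE_SIZE_4K, PROT_READ | PROT_WRITE);
}

int main(void)
{
    /* Point 3: the guest expects this RAM at a fixed address, and we just
     * have to hope nothing else (stack, heap, libraries...) already lives
     * there -- MAP_FIXED_NOREPLACE fails instead of clobbering if it does. */
    void *iwram = mmap(GUEST_IWRAM_BASE, GUEST_IWRAM_SIZE,
                       PROT_READ | PROT_WRITE,   /* point 1: no PROT_EXEC yet */
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                       -1, 0);
    if (iwram == MAP_FAILED)
        return 1;                                /* the "hole" problem */

    struct sigaction sa = { 0 };
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* Point 2: once code has been discovered in a page, write-protect it
     * so any later store lands in segv_handler above. */
    mprotect(iwram, PAGE_SIZE_4K, PROT_READ);

    /* ...run/translate guest code... */
    return 0;
}
```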
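And for 5, the +/-32MB figure just falls out of the instruction encoding: an ARM (A32) branch only has a signed 24-bit word offset relative to PC+8. A tiny illustrative helper, sketch only:

```c
/* Illustration for point 5: an ARM (A32) unconditional branch encodes a
 * signed 24-bit *word* offset relative to PC+8, which is where the
 * +/-32MB patching limit comes from.  Sketch only. */
#include <stdint.h>

/* Returns the encoded B instruction, or 0 if the stub is out of range
 * (or misaligned) and the patch can't be done this way. */
uint32_t encode_arm_branch(uint32_t patch_site, uint32_t stub)
{
    int64_t offset = (int64_t)stub - ((int64_t)patch_site + 8);

    if (offset < -0x2000000 || offset > 0x1fffffc || (offset & 3))
        return 0;                       /* outside +/-32MB */

    return 0xEA000000u |                /* cond=AL, opcode for B */
           (((uint32_t)offset >> 2) & 0x00ffffffu);
}
```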
 
tl;dr ;)


Actually, Exo, that was amazingly informative. I figured that you'd need to trap the RAM/memory-mapped I/O writes and reads (which in itself can be slow, depending on the approach used), but yeah, I totally forgot about the emulated address space holes and hiding your own code elsewhere.


Very interesting read.
 
It would be cool to have an x64 processor, but I vote NO to x86. It's dying, and x64 is backwards compatible with it.
 

What you call x64 is actually x86_64; it's an x86 processor with 64-bit extensions. The AMD Z-60 has 64-bit extensions.. so it's x86_64 as well.
 


With the Z-60's IGFX we COULD play TF2! WOOOOOOOOO! It would be nice to have an external GPU though (maybe Optimus for linux or some clone?)


with x64 we could run windows 8! JK


I would buy the P2 w/ Z-60 in a heartbeat, so long as they keep the Clamshell design.
 

Just don't expect the same 10-12 hours of actual battery life as the original Pandora; this processor alone draws about 4 times more power than the entire Pandora. They claim it gets 8 hours on a battery slightly over twice the size of the Pandora's 4200mAh battery, and this estimate may be just idle and not actual gaming use.


Which is why most people in the thread are not that hot on adopting this processor, let alone the fact that switching from an ARM-based processor breaks all compatibility with prior Pandora software and such..
 
It would be cool to have an x64 processor, but I vote NO to x86. It's dying, and x64 is backwards compatible with it.

Unfortunately, this is what you get with low power Medfield and Clover Trail SoCs from Intel. And it's rumored that Bay Trail-T will still not include x86-64.


IMO in-order processors like the current Atom cores (Bonnell and Saltwell) benefit greatly from x86-64's extra registers. What they need is an OS/ABI that uses 32-bit pointers with x86-64, like x32 in Linux is supposed to do.
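
For anyone curious what x32 actually buys, here's a trivial way to see it - assuming a toolchain and kernel built with x32 support, which is still far from universal (the file name is made up):

```c
/* Toy demonstration of the ABI difference, not anything Atom-specific.
 * Build the same file three ways (the last needs x32-enabled toolchain
 * and kernel support):
 *   gcc -m32  abi.c   ->  4-byte pointers,  8 general-purpose registers
 *   gcc -m64  abi.c   ->  8-byte pointers, 16 general-purpose registers
 *   gcc -mx32 abi.c   ->  4-byte pointers, 16 general-purpose registers
 */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) && defined(__ILP32__)
    puts("x32: 64-bit ISA/registers with 32-bit pointers");
#elif defined(__x86_64__)
    puts("x86-64: 64-bit ISA/registers with 64-bit pointers");
#else
    puts("ia32: 32-bit ISA/registers with 32-bit pointers");
#endif
    printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
           sizeof(void *), sizeof(long));
    return 0;
}
```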
 
I used to think this too.. a long time ago, before I started programming emulators. I remember looking at a Wonderswan emulator and wondering "why did they emulate this x86 compatible CPU when they could have run the code directly?" In reality there are a lot of problems with trying to apply virtualization techniques to emulation, and that's why stuff like gpSP on ARM devices still uses a dynamic recompiler like it does on other devices. In fact, I don't think I know of any emulators that use virtualization techniques and can claim high compatibility. The closest I can think of are high-level "real-time" emulators like JPCSP, but they still at least rely on translated code, which gives them some added protection.


If you're talking about doing only CPU emulation with virtualization then you need to trap and emulate all hardware accesses. Right now very little is known about the low level operation of XBox hardware. There is an immense amount of RE that'd have to be done to even get off the ground, and it extends far beyond just the GPU. XBox really isn't the bog-standard PC hardware people make it out to be.


Lots of people have come and gone over the years to work on emulators like cxbx. If anything has held back XBox emulation it would be that from the start people were married to the WINE approach and have pursued it because it gave a few very nice results very early. It's a more extreme version of what happened with N64 emulation.
I used to wonder why myself, but after reading this, http://byuu.org/bsnes/accuracy, I understand a bit better. I think the main question is how one justifies not emulating something. At some point the consistency of knowing what is and isn't likely to work makes up for the extra work and extra performance required. And unfortunately, some programs are designed to use known bugs, which means either emulating the thing perfectly or hoping you find all those gotchas out there.

I see your point about Wine, but I personally have mixed feelings about that. The performance is much better, at the cost of being restricted to only x86 and, probably in the future, AMD64 processors. I think they have managed to update to AMD64 now, but you do potentially lose out in the future if the architecture changes drastically, like Intel wanted to do with Merced.
 
It would be cool to have an x64 processor, but I vote NO to x86. It's dying, and x64 is backwards compatible with it.
What you call x64 is actually x86_64; it's an x86 processor with 64-bit extensions. The AMD Z-60 has 64-bit extensions.. so it's x86_64 as well.
Oftentimes, or AMD64 for the *BSD folks, even if it's slightly less informative.
 
If x64 meant 64-bit, then x86 would mean 86-bit, which it doesn't, so it doesn't.
 
OK, time for some insights: I am an owner of a RAZR i, the Intel Medfield phone. It's pretty much an x86 processor with a ported Android. So far a lot of the Google apps have been ported to the x86 arch, while everything else is running on an ARM emulator. Performance on this thing is amazing. For being on an Android platform this processor is fast, like Snapdragon fast. It may be single core, but it does have HT (Hyper-Threading) and the ability to clock up to 2GHz. Medfield along with Clover Trail are built at 22nm right now, while all the current and next year's ARM chips are built at 32/28nm, so even if Intel only does die shrinks to 14nm or even 10nm we would still get the same performance at a lower power cost than now.

I noticed all of you are complaining about the fact that it's x86 and not x64. Yes, the memory extensions do help out a bit, but it's really not enough to switch to that arch; this is a Linux device and I would not see the need for more than 2GB of RAM any day. This Intel chip has still not seen any performance outside of Android; Android is a crap OS to run complex code on if you want speed, considering the fact that it runs on a Java-type virtual machine. Nonetheless, this chip has a crappy GPU; Intel's next design, Valleyview, will have the new Intel integrated graphics and there is a lot of power behind those chips. Intel might be a good future for Pandora 2, depending on the cost :p
 
Medfield and Clover Trail are 32nm, not 22nm. Atom's dual-core Medfield (Z2580) is next in the pipeline; I don't think 22nm Silvermont mobile hardware is just around the corner.

The ability to address the extra memory isn't really the big difference in my opinion; it's the lack of registers that hurts you. It's a bigger deal on an in-order processor like Atom. Try writing some high-performance ASM for it sometime and you'll see what I mean.
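
(A quick, rough way to see it without writing asm by hand - this is just a made-up illustration, not a benchmark:)

```c
/* Rough illustration only.  With around a dozen values live across the
 * loop, a 32-bit x86 build has only ~7 usable GPRs (eax, ebx, ecx, edx,
 * esi, edi, ebp), so some of them spill to the stack; x86-64 adds
 * r8-r15, so the same code tends to stay in registers.  Compare the
 * output of "gcc -O2 -m32 -S" against "gcc -O2 -m64 -S". */
#include <stddef.h>
#include <stdint.h>

uint32_t mix(const uint32_t *p, size_t n)
{
    uint32_t a = 1, b = 2, c = 3, d = 4, e = 5, f = 6, g = 7, h = 8;

    for (size_t i = 0; i < n; i++) {
        uint32_t x = p[i];
        a += x; b ^= a; c += b; d ^= c;
        e += d; f ^= e; g += f; h ^= g;
    }
    return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
}
```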
 
Actually, Medfield and Clover Trail are 32nm; supposedly Silvermont/Valleyview/the next Atom will move to 22nm. And the x86 vs x86_64 argument is worthless unless the device that uses it (a mobile device) will use more than 4GB of RAM. Anyway, have you run into any compatibility issues with ARM emulation?
 
And another person who thinks x86 vs x86-64 is just about how much memory you have..
 
That's the most obvious difference; you are a wiz with more advanced knowledge of these chips and programming them.
Do yourself a favor and read about this one on Wikipedia, it's not exactly esoteric knowledge.

What I'd like to see these platforms adopt is so-called "x32" which uses x86-64 with only a 32-bit process space if that's all the process needs, in order to keep pointers small.
 
Actually, Medfield and Clover Trail are 32nm; supposedly Silvermont/Valleyview/the next Atom will move to 22nm. And the x86 vs x86_64 argument is worthless unless the device that uses it (a mobile device) will use more than 4GB of RAM. Anyway, have you run into any compatibility issues with ARM emulation?
So far very few; I haven't found any issues with my apps, but I don't have many anyway :p
 
That's the most obvious difference; you are a wiz with more advanced knowledge of these chips and programming them.
Do yourself a favor and read about this one on Wikipedia, it's not exactly esoteric knowledge.

What I'd like to see these platforms adopt is so-called "x32" which uses x86-64 with only a 32-bit process space if that's all the process needs, in order to keep pointers small.
I looked into it because I was feeling some shame for my ignorance; all I can see is more memory available, more registers (which are pretty much memory), and larger RAM and storage capabilities...
 