Pandora outdated?


ED, your statement about the A15 is not correct either. Samsung is launching the dual-core Exynos 5250 next quarter (Q2; dual-core A15 on a 32nm node). Tegra 4 will also be out before year's end if Nvidia stays on schedule, and that will feature A15 cores on a 28nm node. Then OMAP 5 and the NovaThor A9600 will hit next year.


It's also important to note that Qualcomm's new Krait architecture is essentially a hybrid of the A9 and A15 architectures. It can triple-issue instructions and is rumored to reach 3.2 DMIPS/MHz, compared to the A15's 3.5 DMIPS/MHz (the A9 was 2.5 DMIPS/MHz).
 
To sum it all up: people have different ideas about what constitutes "outdated" hardware, but while the Pandora SoC could be called outdated, the Pandora itself (as a system) can NOT, because it has not been succeeded by a newer palm-sized multipurpose gaming console/computer.


It seems important to me to emphasize the distinction between a "system component" and a "system". Pandora owners are finding out that their devices are in a class of their own and, like Psion owners of old, will be using them for a long, long time.
 
I think this post is wishful thinking for the most part. I have a Pandora and I just got a $56 JXD, and SNES and N64 emulation on its 1GHz ARM9 is a hell of a lot better than on the Pandora. I really didn't buy the Pandora for N64, and while these cheap handhelds don't directly compete, there is no denying that a JXD can now do a lot of emulation really, really well. So this crap hardware is cheap and not so crap any more. I'd say it's about time to consider a serious upgrade for the Pandora when a $56 handheld can compete in most areas.
 
How much does the ethical/European production angle affect the price?


Local production by people who are paid properly in Germany is a somewhat unusual aspect of the Pandora.


The Pandora project is very lucky that there are no direct competitors.


Hopefully the P2, with its interchangeable processors, will solve the "outdated" problem while maintaining the unique aspects of the P1.
 
I think this post is wishful thinking for the most part. I have a Pandora and I just got a $56 JXD, and SNES and N64 emulation on its 1GHz ARM9 is a hell of a lot better than on the Pandora. I really didn't buy the Pandora for N64, and while these cheap handhelds don't directly compete, there is no denying that a JXD can now do a lot of emulation really, really well. So this crap hardware is cheap and not so crap any more. I'd say it's about time to consider a serious upgrade for the Pandora when a $56 handheld can compete in most areas.
Can I ask which JXD you have? I'd love to have a Pandora, but it's simply too expensive. This sounds interesting.
 
I thought today's Dilbert fit some of the conversation going on here eerily well:


[Dilbert strip 152554]

Awkward after the success of the Note, Nexus etc.
 
"Emus wouldn't get a significant boost from dual-core"


What about Nintendo DS or Dreamcast emus, which would be sooo demanding in terms of power?
 
"Emus wouldn't get a significant boost from dual-core"


What about Nintendo DS or Dreamcast emus, which would be sooo demanding in terms of power?

There isn't a simple way (i.e. a way that can be ported quickly) to run the CPU emulation over two cores, and it's unlikely anyone is going to rewrite such a task to make use of dual/quad cores. Has anyone done that on Android or in a PC emulator yet?


Even spreading the emulation of multiple CPUs over multiple cores is tricky, and again it's unlikely anyone is going to spend the time doing it for no reward.


Our only realistic solution is simply much more power on a single core.
 
I have a Pandora and I just got a $56 JXD, and SNES and N64 emulation on its 1GHz ARM9 is a hell of a lot better than on the Pandora.
I can't imagine SNES being any better than on the Pandora; it seems absolutely flawless to me.


But regardless, you are correct: if you look at the Pandora strictly as a game machine (which you seem to, based on your point about SNES and N64 being much better), then you're going to be disappointed. That's not what this thread has been about. When there exists another device that I can pull out of my pocket and comfortably ssh into a server with to admin it, and that can also play games and emulators, then we'll talk about the Pandora being outdated; until then, it's the only thing we've got and therefore can't be considered outdated.
 
The second (or third, or fourth) core doesn't help emulation much at all, unless you figure out a way to parallelize emulation effectively. It is probably either impossible to do, or significantly more work than to re-implement all games for those platforms so they don't need to be emulated anymore.


More than one core is nice for multi-tasking and for things that can be parallelized effectively. Alas, most of the easy targets for parallelization can already be done by the GPU anyway.


The extra cores are nice, but in practice you're way better off with a single 2GHz core than with, say, four 1GHz cores. Of course, in terms of marketing it works nicely: people tend to assume that four 1GHz cores equal one 4GHz core. That's not how it works at all: even for workloads that are parallelizable, a linear speedup is almost never attainable.
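To put a rough number on that: by Amdahl's law, if only a fraction p of the work parallelizes over n cores, the speedup is 1 / ((1 - p) + p / n). A minimal illustration in C, with a made-up parallel fraction rather than any measured emulator workload:

Code:
#include <stdio.h>

int main(void)
{
    double p = 0.8;  /* assumed fraction of work that parallelizes */
    int n = 4;       /* number of cores */
    double speedup = 1.0 / ((1.0 - p) + p / n);
    printf("%d cores give %.2fx, not %dx\n", n, speedup, n);
    /* prints: 4 cores give 2.50x, not 4x */
    return 0;
}

Even with 80% of the work parallelizable, four cores only buy you 2.5x; the serial part dominates quickly.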
 
I think it depends. Sound emulation can be handed off to an extra core pretty easily. Graphics emulation is done by the GPU today, I think. Recompiling the game code... I'm not sure a normal recompiler can use more than one core. Executing the game code... I think that's nearly impossible to split for a single-core console.
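For the sound case, a minimal sketch of one way it could be handed off: the emulation thread pushes sound-chip writes into a single-producer/single-consumer ring buffer, and a thread on the second core drains it. The names here (sound_event_t, emulate_sound_event) are illustrative assumptions, not any real emulator's API, and overflow handling is omitted for brevity:

Code:
#include <stdatomic.h>

#define RING_SIZE 4096

typedef struct { unsigned int reg, value, timestamp; } sound_event_t;

static sound_event_t ring[RING_SIZE];
static atomic_uint head, tail;  /* head: producer index, tail: consumer index */

extern void emulate_sound_event(const sound_event_t *ev);  /* assumed sound core */

/* Called from the main emulation thread whenever the game writes a
   sound-chip register. */
void push_sound_event(sound_event_t ev)
{
    unsigned int h = atomic_load(&head);
    ring[h % RING_SIZE] = ev;
    atomic_store(&head, h + 1);
}

/* Runs on the second core (e.g. launched with pthread_create),
   draining events as they arrive. */
void *sound_thread(void *arg)
{
    (void)arg;
    for (;;) {
        unsigned int t = atomic_load(&tail);
        if (t != atomic_load(&head)) {
            emulate_sound_event(&ring[t % RING_SIZE]);
            atomic_store(&tail, t + 1);
        }
    }
}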
 
Wouldn't a second core help by handling background tasks (and giving the emu a whole core to itself)?
 
I think it depends. Sound emulation can be handed off to an extra core pretty easily. Graphics emulation is done by the GPU today, I think. Recompiling the game code... I'm not sure a normal recompiler can use more than one core. Executing the game code... I think that's nearly impossible to split for a single-core console.

It's the syncing it all up that no one wants to do, and these days it seems no one writes emulators from scratch. As we have seen with the Pandora, people are not very willing to do that much work for no benefit.


Even on Android, where there is major money to be made from good emulators, few people are willing to take on the challenge.
 
There isn't a simple way (i.e. a way that can be ported quickly) to run the CPU emulation over two cores, and it's unlikely anyone is going to rewrite such a task to make use of dual/quad cores. Has anyone done that on Android or in a PC emulator yet?

It's relatively trivial to thread software 3D rendering for something like DS emulation (which is what he asked about). Even for emulators that prefer to use hardware 3D, there's a lot of wasted CPU time in driver overhead that can be dumped on another core.
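As a minimal sketch of what threading the rasterizer can look like, splitting the screen into two scanline ranges (the fill pattern below just stands in for the real rasterizer; the point is that scanlines are independent once per-frame setup is done):

Code:
#include <pthread.h>

#define SCREEN_W 256
#define SCREEN_H 192  /* DS screen dimensions */

static unsigned short framebuffer[SCREEN_H][SCREEN_W];

/* Stand-in for the emulator's software rasterizer: real code would
   walk the frame's polygon list for each scanline. */
static void render_scanlines(int y0, int y1)
{
    for (int y = y0; y < y1; y++)
        for (int x = 0; x < SCREEN_W; x++)
            framebuffer[y][x] = (unsigned short)(x ^ y);
}

struct job { int y0, y1; };

static void *worker(void *arg)
{
    struct job *j = arg;
    render_scanlines(j->y0, j->y1);
    return NULL;
}

/* Render one frame with the two halves of the screen on two cores. */
void render_frame_threaded(void)
{
    pthread_t t;
    struct job top = { 0, SCREEN_H / 2 };
    pthread_create(&t, NULL, worker, &top);
    render_scanlines(SCREEN_H / 2, SCREEN_H);  /* bottom half on this core */
    pthread_join(t, NULL);
}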
 
Even on Android, where there is major money to be made from good emulators, few people are willing to take on the challenge.

Is there even a single emulator that's a lot more than a port with a GUI thrown in and a few optimizations?
 
Is there even a single emulator that's a lot more than a port with a GUI thrown in and a few optimizations?

Yes, there are actual emulator authors who release directly on Android, for instance VGBA, ePSXe, and FPSE. While those projects didn't start on Android, it is the actual emulator authors releasing new versions with a lot of new work for the platform.
 
Taking advantage of multiple cores for emulation becomes dramatically more difficult when the hardware you are emulating (and, by extension, the software for that platform) is itself built around a single-core design. Apart from any parallel subsystems the platform may have, such as a dedicated GPU that could be emulated separately, how is one to split such a workload?


Even with multicore x86/x64 CPUs, the software still has to be written to take advantage of multiple cores. If a piece of software runs on a single core, it isn't going to suddenly use multiple cores and run faster just because more cores have appeared.
 
Quite a lot of old systems had multiple general-purpose CPUs on board. For example, the Megadrive/Genesis had a 68000 plus a Z80 coprocessor, although I don't know how much the latter was used in practice outside of the Master System adaptor. Similarly, the DS has an ARM9 plus an ARM7 coprocessor, the latter used for GBA compatibility in the original models and also to handle sound and wi-fi for DS games, according to Wikipedia. The Saturn also contained a 68000 like the Megadrive's, although clocked a little faster and this time handling sound duties, alongside the twin SH-2 main processors, two video processors, a peripheral controller, and an SH-1, although that last was tasked with handling the CD drive only, so probably not applicable to an emulator.


Emulators have handled running sound at the same time as gameplay since pretty much forever, but emulating sound on the same processor as the game does slow the game down, as seen in a number of early emulator releases on the Pandora. It seems to me that if you can run separate emulated cores on separate processors, it's likely to make writing those emulators well, and maintaining them, simpler (although perhaps I'm underestimating what you can do with multithreading these days).


On another note, since dynamic recompilation seems to be one of the latest emulation techniques, couldn't one core handle dynamically converting the code while the other actually runs it, so the system doesn't have to continually pause to translate more code? I'd guess the core handling the recompilation would take longer than the core actually running the equivalent code, so the running core would be idle quite a lot of the time, but I'd have thought that being able to convert the next section of code while the current section is still running, as well as avoiding the cost of context switching, would make the idea worth investigating. I'm no expert though, so there's a lot of presumption in this proposal.
 
Quite a lot of old systems had multiple general-purpose CPUs on board. For example, the Megadrive/Genesis had a 68000 plus a Z80 coprocessor, although I don't know how much the latter was used in practice outside of the Master System adaptor. Similarly, the DS has an ARM9 plus an ARM7 coprocessor, the latter used for GBA compatibility in the original models and also to handle sound and wi-fi for DS games, according to Wikipedia. The Saturn also contained a 68000 like the Megadrive's, although clocked a little faster and this time handling sound duties, alongside the twin SH-2 main processors, two video processors, a peripheral controller, and an SH-1, although that last was tasked with handling the CD drive only, so probably not applicable to an emulator.


Emulators have handled running sound at the same time as gameplay since pretty much forever, but emulating sound on the same processor as the game does slow the game down, as seen in a number of early emulator releases on the Pandora. It seems to me that if you can run separate emulated cores on separate processors, it's likely to make writing those emulators well, and maintaining them, simpler (although perhaps I'm underestimating what you can do with multithreading these days).

Assigning different emulated CPUs to different threads is tricky because a lot of consoles have tight synchronization loops, where one CPU expects the other to respond pretty quickly. In the single-threaded case the emulation looks like this:



Code:
/* Single-threaded: each emulated CPU runs a block of cycles in turn. */
emulate_cpu(cpu_a, CYCLES);
emulate_cpu(cpu_b, CYCLES);



While in the threaded version each thread would look something like this:





Code:
/* Threaded: each CPU thread runs its own block of cycles, then
   busy-waits until the other thread has finished its block too. */
cpu_thread->done = 0;
emulate_cpu(cpu_thread, CYCLES);
cpu_thread->done = 1;
while (other_cpu_thread->done == 0)
    ;  /* spin until the other CPU thread catches up */


You could use condition variables or some other OS-assisted signaling instead, but the problem is that the OS scheduling may be too coarse, so both threads end up spending a lot of time waiting after they've finished executing a block of CPU time. And the signaling itself can have a lot of overhead.
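As a sketch of what the OS-assisted version might look like, here is a variant using a POSIX barrier; emulate_cpu() and CYCLES just mirror the pseudocode above, and the barrier is assumed to be initialised for two threads:

Code:
#include <pthread.h>

#define CYCLES 10000                             /* cycles per block, as above */

extern void emulate_cpu(void *cpu, int cycles);  /* as in the sketch above */

/* Initialised elsewhere with pthread_barrier_init(&sync_point, NULL, 2). */
static pthread_barrier_t sync_point;

/* Both CPU threads run this loop: emulate a block, then block in the
   barrier until the other thread arrives. Any descheduling inside
   pthread_barrier_wait() is exactly the coarse-scheduling cost
   described above. */
void *cpu_thread_loop(void *cpu)
{
    for (;;) {
        emulate_cpu(cpu, CYCLES);
        pthread_barrier_wait(&sync_point);
    }
}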


It's not that this code wouldn't still give a good benefit, making the execution time more like max(a, b) than (a + b). But if the typical execution times of a and b are very different (even if the CPUs are the same speed, their workloads may fluctuate a lot), it's something you'll want to do only if you have nothing better to do with the available CPU time, because in those cases one of the threads is going to spend a lot of time waiting on the other. And it's not very power efficient.


Most consoles so far, at least those before the current gen and excluding Saturn, really are of the type where one CPU runs a lot more code than the others.

On another note, since dynamic recompilation seems to be one of the latest emulation techniques, couldn't one core handle dynamically converting the code while the other actually runs it, so the system doesn't have to continually pause to translate more code? I'd guess the core handling the recompilation would take longer than the core actually running the equivalent code, so the running core would be idle quite a lot of the time, but I'd have thought that being able to convert the next section of code while the current section is still running, as well as avoiding the cost of context switching, would make the idea worth investigating. I'm no expert though, so there's a lot of presumption in this proposal.

In my experience, recompiling doesn't take that much time unless you're constantly flushing your entire translation cache because the games aggressively modify their own code; in those cases you should focus on finding some other strategy for dealing with it. The thing is, when the recompiler needs to be called, you don't really know which code won't be needed again soon. And if you can be aggressive about compiling as much as possible directly, some other optimization opportunities become available. But the only way to do that is to compile in one big block and not let the emulator start running pieces of the compiled code.
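For what it's worth, here's a minimal sketch of why translation cost amortizes: a translation cache keyed on the guest PC means each block is compiled once and simply re-executed on later visits. translate_block() stands in for the real recompiler, and the direct-mapped cache over 4-byte-aligned guest code is an illustrative assumption:

Code:
typedef unsigned int (*block_fn)(void);  /* runs one block, returns next guest PC */

#define CACHE_SLOTS 0x10000

static block_fn code[CACHE_SLOTS];
static unsigned int tag[CACHE_SLOTS];

extern block_fn translate_block(unsigned int pc);  /* assumed recompiler entry */

void run(unsigned int pc)
{
    for (;;) {
        unsigned int slot = (pc >> 2) % CACHE_SLOTS;  /* guest code assumed 4-byte aligned */
        if (!code[slot] || tag[slot] != pc) {
            code[slot] = translate_block(pc);  /* miss: compile once */
            tag[slot] = pc;
        }
        pc = code[slot]();                     /* hit: just execute */
    }
}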
 