I was looking at emulators and came across PocketNES, which emulates an 8-bit Nintendo on a 16.78 MHz Game Boy Advance, and started wondering how they did that.
Unlike other optimized emulators of its era, it does not use dynamic recompilation. That surprised me, because the TuxNES developers claimed that technique gave a 7x speedup over an interpreter. (See http://sourceforge.net/mailarchive/message....O413%403e8.org ) With that optimization, TuxNES ran at about one-third the speed of an equivalently clocked native CPU (i.e., it took the equivalent of a 5.37 MHz CPU just to emulate the 1.79 MHz NES CPU, before any graphics rendering).
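For anyone unfamiliar with the idea: a dynamic recompiler avoids per-instruction fetch/decode overhead by translating a run of 6502 instructions into native code once, caching it, and reusing it on later visits. A rough conceptual sketch of the outer loop, not TuxNES's actual design and with every name made up for illustration:

```c
/* Conceptual sketch of a dynamic recompiler's outer loop.
 * Not TuxNES code; every name here is hypothetical. */
#include <stdint.h>

typedef void (*translated_block)(void);        /* native code for a run of 6502 instructions */

static translated_block block_cache[0x10000];  /* one slot per possible 6502 address */

extern translated_block compile_block(uint16_t pc);  /* emits native code starting at pc */
extern uint16_t         emulated_pc(void);           /* where the emulated 6502 ended up */

void run_dynarec(uint16_t pc)
{
    for (;;) {
        translated_block fn = block_cache[pc];
        if (!fn)                                   /* first visit: translate once        */
            fn = block_cache[pc] = compile_block(pc);
        fn();                                      /* later visits run native code,      */
        pc = emulated_pc();                        /* skipping fetch/decode entirely     */
    }
}
```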
Speed like that would make NES emulation on a GBA possible, but PocketNES does not use a dynamic recompiler; it uses a plain fetch-and-execute-each-instruction interpreter. So how'd Loopy and Flubba do it?
The ARM7TDMI has a 3-stage pipeline, so the opcode table lookup plus the indirect branch only costs about 4 CPU cycles, which is roughly what a recompiler would save. (The same dispatch would cost many more cycles on something like a Cortex-A8.) Still, once you add cycle counting, flag setting, and emulating the actual instruction, you're at a minimum of about 7 cycles per emulated instruction with the interpreter, and probably more like 10 on average.
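For reference, here's roughly what one step of that kind of interpreter looks like in C. This is not PocketNES's code (its core is hand-written ARM assembly, as I understand it), and every name below is made up, but it shows where the fetch, table lookup, branch, cycle counting, and flag setting all land:

```c
/* Minimal sketch of a fetch-and-execute 6502 interpreter step.
 * NOT PocketNES's code; all names are hypothetical. */
#include <stdint.h>

static uint16_t pc;           /* emulated 6502 program counter         */
static uint8_t  a, flags;     /* accumulator and status register       */
static int      cycles_left;  /* budget for the current scanline/slice */

extern uint8_t read6502(uint16_t addr);   /* emulated memory read, assumed elsewhere */

/* Example handler: LDA immediate ($A9). Loading A and updating the N/Z
 * flags is the per-instruction flag-setting overhead mentioned above. */
static void op_lda_imm(void)
{
    a = read6502(pc++);
    flags = (uint8_t)((flags & ~0x82) | (a & 0x80) | (a == 0 ? 0x02 : 0));
}

typedef void (*op_fn)(void);

/* A real core fills in all 256 entries; only one is shown here. */
static const op_fn   op_table[256]    = { [0xA9] = op_lda_imm };
static const uint8_t cycle_table[256] = { [0xA9] = 2 };

void run_slice(int cycles)
{
    cycles_left = cycles;
    while (cycles_left > 0) {
        uint8_t opcode = read6502(pc++);     /* fetch the opcode       */
        cycles_left -= cycle_table[opcode];  /* count emulated cycles  */
        if (op_table[opcode])                /* table lookup ...       */
            op_table[opcode]();              /* ... and indirect branch */
    }
}
```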
Does anyone know what percentage of the CPU time it actually uses? It's got to be spending at least half just emulating the 6502, which doesn't leave much for the graphics.