Mupen64Plus


ari64

I have rewritten the dynamic recompiler for Mupen64plus to generate ARM code. This will run on OpenPandora, TouchBook, and BeagleBoard.

Unfortunately OpenGL acceleration does not work. It is possible to use software rendering, but it is very slow and some of the textures display incorrectly. Since several people are working on OpenGL ES libraries, I am posting this as-is for testing purposes.

Running the emulator requires approximately 100MB free memory. Large ROMs, or games that use the memory expansion, require slightly more. Be sure to close other applications first.

Because the emulator is not in a usable state, I have not made any attempt to test the controls. However, the mupen64_input.so plugin seems to work. blight_input does not. You will need to set the appropriate plugin.

Compiling mupen64plus (or at least rice_video) requires more than 256MB RAM. You will need to create and activate a swap partition on an external hard disk before compiling mupen64plus.

If you are using Angstrom, you will need to install the following packages: gcc, g++, make, pkgconfig, libsdl-ttf-dev, gtk+-dev, libgl-dev, libglu-dev, xserver-xorg-extension-glx

Some of the plugins have x86 assembly code which will not build on ARM. After building mupen64plus, do "make plugins NO_ASM=1" to build these plugins.

The mesa-7.2 package in the current version of Angstrom is broken. If you wish to use the software rasterizer, you will need to compile mesa from source, then copy swrast_dri.so into /usr/lib/dri/

The original dynamic recompiler in Mupen64 used far too much memory to run on the Pandora because it retained all of the translated code and metadata. To limit memory usage, it now allocates a 16MB buffer for code translation, and if this gets full, old blocks are removed to make space. 16MB appears to be sufficient for most games without causing excessive thrashing. This code is in r4300/new_dynarec/.
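
For anyone curious what a fixed-size translation buffer like this looks like in practice, here is a rough C sketch. It is not the actual new_dynarec code; TC_SIZE, tc_base, tc_ptr and evict_old_blocks are made-up names for illustration.
Code:
/* Rough sketch only - not the real new_dynarec code.  A fixed 16MB
 * executable buffer is reserved once, and translated blocks are handed
 * out from it with a simple bump pointer. */
#include <stddef.h>
#include <sys/mman.h>

#define TC_SIZE (16 * 1024 * 1024)    /* 16MB translation cache */

static unsigned char *tc_base;        /* start of the cache */
static unsigned char *tc_ptr;         /* next free byte */

int tc_init(void)
{
    tc_base = mmap(NULL, TC_SIZE, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (tc_base == MAP_FAILED)
        return -1;
    tc_ptr = tc_base;
    return 0;
}

/* Reserve space for a newly translated block.  When the buffer fills up,
 * older blocks have to be thrown out first to make room. */
unsigned char *tc_alloc(size_t size)
{
    if (tc_ptr + size > tc_base + TC_SIZE) {
        /* evict_old_blocks();  -- hypothetical, frees space and moves tc_ptr back */
        return NULL;
    }
    unsigned char *block = tc_ptr;
    tc_ptr += size;
    return block;
}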

http://dl.free.fr/tmkLzOpvB


Edit: Updated version here. OpenGL ES plugin here.
 
What a first post man :lol: :D
I don't know how to help, but it's nice to see that Mupen can be ported.
 
Finally someone giving N64 emulation a shot. I don't know what will come of this particular project, but if nothing else it's at least a PoC.
 
Sounds great. I can't really help, but I can say that not everybody can read French and that site is very, very slow, so I uploaded the files somewhere faster:

http://hotfile.com/dl/11090512/8190d1e/mupen64plus-arm-20090829.tar.gz.html
 
Hi Ari,

Could you give us an idea of the kind of PC-based system this needs in order to run? I'd be interested to know the sort of speeds required.

It should be possible to get the OpenGL going; Pickle might be able to give some advice there.
 
fischju2000 said:
Sounds great. I can't really help, but I can say that not everybody can read French and that site is very, very slow, so I uploaded the files somewhere faster:

http://hotfile.com/dl/11090512/8190d1e/mupen64plus-arm-20090829.tar.gz.html
I originally tried to upload to mediafire, but it was even slower.
 
Yeah, this is a job for the illustrious Pickle. We just need another virgin sacrifice to bring him out of hiding.
 
I don't suppose there's any way I could get you to post MIPS to ARM block comparisons? I love those.

If you can remove a block in isolation as opposed to flushing the entire cache on overflow then it must mean that the recompiler is not directly linking blocks, which is a pretty big red flag for performance :/
 
craigix said:
Hi Ari,

Could you give us an idea of the kind of PC-based system this needs in order to run? I'd be interested to know the sort of speeds required.

It should be possible to get the OpenGL going; Pickle might be able to give some advice there.
Compatibility should be roughly the same as the original mupen64plus; I just replaced the dynamic recompiler. It will compile and run on x86. It will work on x86-64 too, although it only generates 32-bit code. I didn't really try to optimize for x86-64 since my focus was on ARM. It only uses around 15-20% CPU time on my Core 2, so there probably isn't much to be gained from optimizing for this type of CPU anyway.
 
Exophase said:
I don't suppose there's any way I could get you to post MIPS to ARM block comparisons? I love those.

If you can remove a block in isolation as opposed to flushing the entire cache on overflow then it must mean that the recompiler is not directly linking blocks, which is a pretty big red flag for performance :/
If you define assem_debug in new_dynarec.c, you will get debugging output from the code generator - I assume that's what you want.

It is directly linking blocks, and it does flush the cache. I thought I could get away with not doing so, but the Cortex-A8 has a random replacement policy, which means you're always left with a few old cache lines.

It doesn't flush the cache every time it links a block though. If an old, unresolved branch address is present in the i-cache, then we just end up in dyna_linker again, which has code to check if this happened.
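
For reference, making freshly written or patched code visible to the instruction side is normally done with GCC's cache-clearing builtin, which turns into the cacheflush syscall on ARM Linux. A minimal sketch, not necessarily how new_dynarec wraps it:
Code:
/* Sketch only: after emitting or patching translated ARM code, flush the
 * written bytes from the data cache and invalidate the corresponding
 * instruction cache lines before jumping into them. */
static void flush_translated(void *start, void *end)
{
    __builtin___clear_cache((char *)start, (char *)end);
}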
 
I was referring to the translation cache, not the Cortex-A8's L1/L2 caches. Maybe I'm misunderstanding this line:

Ari64 said:
To limit memory usage, it now allocates a 16MB buffer for code translation, and if this gets full, old blocks are removed to make space.

That suggests that you're not flushing the entire translation cache when you run out of space, but only individual blocks. If the blocks are directly linked to each other then this is more or less impossible. I'll have to look at your code, I suppose.

I don't know what debugging output is provided, but I also can't test it myself because I don't have any hardware that can run this. If it does what I want (shows original MIPS blocks vs recompiled ARM blocks) then would it be possible for you to run it and give me some samples? This of course means that it'd have to have an ARM disassembler to be intelligible. I am curious to see what the general quality of the recompiler is, and it's much easier to discern this from examples than from looking at the code.
 
Exophase said:
If you can remove a block in isolation as opposed to flushing the entire cache on overflow then it must mean that the recompiler is not directly linking blocks, which is a pretty big red flag for performance :/
What makes you think removing a translated block implies blocks are not linked? QEMU and DynamoRIO can do that (by chaining block descriptors in the case of QEMU).
 
Laurent said:
Exophase said:
If you can remove a block in isolation as opposed to flushing the entire cache on overflow then it must mean that the recompiler is not directly linking blocks, which is a pretty big red flag for performance :/
What makes you think removing a translated block implies blocks are not linked? QEMU and DynamoRIO can do that (by chaining block descriptors in the case of QEMU).

Because if you remove a block you have to remove everything that's linked to it, recursively. Tracking such a thing is not worth it when you're going to end up removing a major chunk of your blocks that way anyway. You'll end up fragmenting memory pretty heavily this way too.

The Dynamo paper made it clear that it flushed the entire cache for these reasons. I don't know if this "RIO" is different in this regard, since I've never heard of it. Also, the description you gave of QEMU's dispatcher makes it clear that this is not happening there either.

Just to be clear, when I say "direct linking" I mean a translated direct branch from one block to another. If it goes through any kind of indirect lookup then it's not direct linking, even if there's a cached value stored somewhere to make it faster than doing a full emulated PC to translated block conversion (plus potential new block compilation). No matter how you slice it, this is going to be considerably slower than the alternative.
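
To illustrate the distinction: "direct linking" on ARM amounts to rewriting the translated branch instruction so it targets the other block's code directly, rather than jumping into a lookup routine. A minimal sketch, with invented names, of patching an ARM B/BL to a new target:
Code:
/* Sketch with invented names: patch a translated ARM B/BL instruction so
 * it branches straight to another block's code ("direct linking") instead
 * of falling back into an indirect lookup routine.  ARM encodes the branch
 * as a signed 24-bit word offset relative to PC+8. */
#include <stdint.h>

void patch_direct_branch(uint32_t *branch_insn, void *target)
{
    intptr_t offset = (uint8_t *)target - ((uint8_t *)branch_insn + 8);
    uint32_t opcode = *branch_insn & 0xff000000;   /* keep condition and B/BL bits */
    *branch_insn = opcode | (((uint32_t)offset >> 2) & 0x00ffffff);
    /* the modified instruction still has to be flushed to the i-cache */
}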
 
Exophase said:
I was referring to the translation cache, not the Cortex-A8's L1/L2 caches. Maybe I'm misunderstanding this line:

Ari64 said:
To limit memory usage, it now allocates a 16MB buffer for code translation, and if this gets full, old blocks are removed to make space.

That suggests that you're not flushing the entire translation cache when you run out of space, but only individual blocks. If the blocks are directly linked to each other then this is more or less impossible. I'll have to look at your code, I suppose.
The 16MB buffer is split up into 2MB regions, and when it needs space, it dumps an entire 2MB region. Before this happens it makes a pass through the entire cache, removing links to this area. This is done incrementally and not all at once, so there isn't a single point in time where we have to stop everything to clean up pointers. The code that does this is at the end of new_recompile_block().
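
To make the shape of this concrete, here is a rough sketch of the region eviction. The structures and names below are invented, not the real new_dynarec ones, and the real code spreads the unlinking pass out incrementally rather than doing it in one loop.
Code:
/* Sketch under assumptions: block_t, all_blocks and unlink_branch are
 * invented.  The 16MB cache is viewed as eight 2MB regions; before a
 * region is reused, every recorded direct branch into it is pointed
 * back at the linker stub. */
#include <stdint.h>

#define TC_SIZE     (16 * 1024 * 1024)
#define REGION_SIZE (2 * 1024 * 1024)

typedef struct block {
    uint8_t      *code;         /* where this block's translated code lives */
    uint8_t      *link_target;  /* where its outgoing direct branch points */
    struct block *next;
} block_t;

extern block_t *all_blocks;             /* list of live translated blocks */
extern uint8_t *tc_base;                /* start of the translation cache */
extern void unlink_branch(block_t *b);  /* patch the branch back to dyna_linker */

/* Free one 2MB region: remove every direct link into it, after which its
 * space can be handed out again for new translations. */
void evict_region(int r)
{
    uint8_t *lo = tc_base + r * REGION_SIZE;
    uint8_t *hi = lo + REGION_SIZE;

    for (block_t *b = all_blocks; b != NULL; b = b->next) {
        if (b->link_target >= lo && b->link_target < hi)
            unlink_branch(b);   /* this branch re-enters the linker on next use */
    }
    /* ...then discard the blocks whose code lives inside [lo, hi). */
}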

I don't know what debugging output is provided, but I also can't test it myself because I don't have any hardware that can run this. If it does what I want (shows original MIPS blocks vs recompiled ARM blocks) then would it be possible for you to run it and give me some samples? This of course means that it'd have to have an ARM disassembler to be intelligible. I am curious to see what the general quality of the recompiler is, and it's much easier to discern this from examples than from looking at the code.
It basically just prints out every instruction that it disassembles, and every instruction that it generates. This happens before the linker stage, so the branches are unresolved and the literal pool is not generated yet, but it shows all of the instructions. Is there any code sequence in particular you want to see? It generates quite a lot of output.
 
Ari64 said:
The 16MB buffer is split up into 2MB regions, and when it needs space, it dumps an entire 2MB region. Before this happens it makes a pass through the entire cache, removing links to this area. This is done incrementally and not all at once, so there isn't a single point in time where we have to stop everything to clean up pointers. The code that does this is at the end of new_recompile_block().

Okay, this is clearer - the 2MB region splits make sense, but the lazy scanning and removing of direct links is not something I'd personally consider doing. Instead I'd prefer to just force direct branches that cross that region to go through indirection. But only profiling would really show which is worth it.

Ari64 said:
It basically just prints out every instruction that it disassembles, and every instruction that it generates. This happens before the linker stage, so the branches are unresolved and the literal pool is not generated yet, but it shows all of the instructions. Is there any code sequence in particular you want to see? It generates quite a lot of output.

I'll take any snippets at all ;D But the more you think it's representative of typical executed code, the better. If you have any profiling data, stuff near the top in some popular games would be neat.

By the way, I'm very glad that a new recompiler author has joined, and I hope that we'll have lots of interesting discussions in the future.. heheh..

Congratulations on your work so far. I think you're going to end up being very important to the Pandora.
 
Exophase said:
Ari64 said:
The 16MB buffer is split up into 2MB regions, and when it needs space, it dumps an entire 2MB region. Before this happens it makes a pass through the entire cache, removing links to this area. This is done incrementally and not all at once, so there isn't a single point in time where we have to stop everything to clean up pointers. The code that does this is at the end of new_recompile_block().

Okay, this is clearer - the 2MB region splits make sense, but the lazy scanning and removing of direct links is not something I'd personally consider doing. Instead I'd prefer to just force direct branches that cross that region to go through indirection. But only profiling would really show which is worth it.
I expect it'd make very little difference overall. Most branches are very short. The MIPS instruction set allows for conditional branches of +/-128K, but in practice they're rarely more than 4K. The only branches longer than this are call/return pairs (JAL/JR r31) for which we mostly end up doing indirection anyway.

Originally I tried to compile fairly large blocks, so that all the local branches would be included. This ended up compiling a lot of useless junk, so now it compiles very small blocks.

There really isn't a good way to optimize call/ret pairs. The calls are fine, these can be directly linked to the target, but the returns are harder. What I ended up doing was to look up the addresses in a hash table. To get a good hit rate on the hash table (~99%) it has to be fairly big, as you're looking at around 10000-15000 active entries for a typical N64 game. I used 65536 bins, with up to 2 entries per bin. To prevent cache misses, it does a PLD (or PREFETCH on x86) instruction on the hash table entry during the JAL, so that the line will be in the cache upon return. However, PLD goes to the L2 cache on Cortex-A8, so it still ends up using 8 cycles for the L1 miss, plus 13 cycles for the branch misprediction, plus a few cycles to calculate the hash.
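
As a sketch of what such a return-address table could look like: the 65536 bins and two entries per bin follow the description above, but the names and the hash function itself are invented.
Code:
/* Sketch only: maps an emulated N64 return address to the translated
 * code for that address.  Bin count and two-entry layout follow the
 * post; everything else is illustrative. */
#include <stdint.h>

#define HASH_BINS 65536

struct ht_entry {
    uint32_t vaddr[2];    /* emulated return addresses */
    void    *tcaddr[2];   /* corresponding translated code */
};

static struct ht_entry hash_table[HASH_BINS];

static inline uint32_t hash_addr(uint32_t vaddr)
{
    return ((vaddr >> 16) ^ vaddr) & (HASH_BINS - 1);   /* cheap 16-bit hash */
}

void *lookup_return(uint32_t vaddr)
{
    struct ht_entry *e = &hash_table[hash_addr(vaddr)];
    if (e->vaddr[0] == vaddr) return e->tcaddr[0];
    if (e->vaddr[1] == vaddr) return e->tcaddr[1];
    return NULL;   /* miss: fall back to the full lookup/compile path */
}

void insert_return(uint32_t vaddr, void *tcaddr)
{
    struct ht_entry *e = &hash_table[hash_addr(vaddr)];
    e->vaddr[1]  = e->vaddr[0];    /* demote the older entry to slot 1 */
    e->tcaddr[1] = e->tcaddr[0];
    e->vaddr[0]  = vaddr;
    e->tcaddr[0] = tcaddr;
}
Two entries per bin is a cheap way to absorb collisions without chaining.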

There are some theoretical ways to optimize this, but they tend to be difficult in practice. One thing is that approximately 50% of subroutines are only called from one location, so you could jump directly back to the caller. The problem here is that often the code is compiled as several small blocks, so by the time you find the return (JR r31) instruction it isn't trivial to find the original entry point. This would work for small subroutines though. For really small subroutines, it would also be possible to inline them when you compile the caller, but this really screws up cache invalidation, since if you invalidate this memory location, you'd have to invalidate everything that inlined it too.

Exophase said:
Ari64 said:
It basically just prints out every instruction that it disassembles, and every instruction that it generates. This happens before the linker stage, so the branches are unresolved and the literal pool is not generated yet, but it shows all of the instructions. Is there any code sequence in particular you want to see? It generates quite a lot of output.

I'll take any snippets at all ;D But the more you think it's representative of typical executed code, the better. If you have any profiling data, stuff near the top in some popular games would be neat.

By the way, I'm very glad that a new recompiler author has joined, and I hope that we'll have lots of interesting discussions in the future.. heheh..

Congratulations on your work so far. I think you're going to end up being very important to the Pandora.
I don't have any really great examples; here's a bit of the 6105 bootloader:
Code:
  a4000040: ADD r9,r29,r0
ldr r1,fp+360
* a4000044: LW r8,r9+fffff010
sub r0,r1,#-4080
cmp r0,$8388608
bvc 0
ldr r0,r0+0
  a4000048: LW r10,r11+44
ldr r3,fp+216
add r2,r3,#68
cmp r2,$8388608
bvc 0
ldr r2,r2+0
  a400004c: XOR r10,r10,r8
eor r2,r2,r0
  a4000050: SW r10,r9+fffff010
ldr r7,fp+88
sub r4,r1,#-4080
cmp r4,$8388608
bvc 0
str r2,r4+0
ldrb lr,r7,r4 lsr #12
cmp lr,$1
bne 0
  a4000054: ADDI r11,r11,4
str r2,fp+208
asr r2,r2,#31
str r2,fp+212
add r3,r3,#4
  a4000058: ANDI r8,r8,4095
mov r14,#3840
add r14,r14,#255
and r0,r0,r14
  a400005c: BNE r8,r0,a4000044
add r1,r1,#4
cmn r10,#18
bpl 0
tst r0,r0
beq 1
add r10,r10,#16
str r3,fp+216
b 0
add r10,r10,#18
  a4000060: ADDI r9,r9,4
* a4000064: LW r8,r11+44
add r0,r3,#68
cmp r0,$8388608
bvc 0
ldr r0,r0+0
  a4000068: LW r10,r11+48
add r2,r3,#72
cmp r2,$8388608
bvc 0
ldr r2,r2+0
The asterisks mark points where branches will jump to, so the compiler tries to make sure the register mapping is in a reasonable state at these points.

Here's the disassembly after the linker is done:
Code:
0x07000000:	ldr	r1, [r11, #360]
0x07000004:	sub	r0, r1, #4080	; 0xff0
0x07000008:	cmp	r0, #8388608	; 0x800000
0x0700000c:	bvc	0x7000200
0x07000010:	ldr	r0, [r0]
0x07000014:	ldr	r3, [r11, #216]
0x07000018:	add	r2, r3, #68	; 0x44
0x0700001c:	cmp	r2, #8388608	; 0x800000
0x07000020:	bvc	0x7000238
0x07000024:	ldr	r2, [r2]
0x07000028:	eor	r2, r2, r0
0x0700002c:	ldr	r7, [r11, #88]
0x07000030:	sub	r4, r1, #4080	; 0xff0
0x07000034:	cmp	r4, #8388608	; 0x800000
0x07000038:	bvc	0x7000284
0x0700003c:	str	r2, [r4]
0x07000040:	ldrb	lr, [r7, r4, lsr #12]
0x07000044:	cmp	lr, #1	; 0x1
0x07000048:	bne	0x7000270
0x0700004c:	str	r2, [r11, #208]
0x07000050:	asr	r2, r2, #31
0x07000054:	str	r2, [r11, #212]
0x07000058:	add	r3, r3, #4	; 0x4
0x0700005c:	mov	lr, #3840	; 0xf00
0x07000060:	add	lr, lr, #255	; 0xff
0x07000064:	and	r0, r0, lr
0x07000068:	add	r1, r1, #4	; 0x4
0x0700006c:	cmn	r10, #18	; 0x12
0x07000070:	bpl	0x70002bc
0x07000074:	tst	r0, r0
0x07000078:	beq	0x7000088
0x0700007c:	add	r10, r10, #16	; 0x10
0x07000080:	str	r3, [r11, #216]
0x07000084:	b	0x7000004
0x07000088:	add	r10, r10, #18	; 0x12
0x0700008c:	add	r0, r3, #68	; 0x44
0x07000090:	cmp	r0, #8388608	; 0x800000
0x07000094:	bvc	0x7000314
0x07000098:	ldr	r0, [r0]
0x0700009c:	add	r2, r3, #72	; 0x48
0x070000a0:	cmp	r2, #8388608	; 0x800000
0x070000a4:	bvc	0x700034c
0x070000a8:	ldr	r2, [r2]
 