GP2X Dosbox dynamic recompiler backend


A small update on tests of my static recompiler.

I repeated the previous test (unaligned accesses + letting the OS handle them), because I optimized some memory accesses by folding the immediate offset into the memory access instruction.
For example, instead of
Code:
add madr, eax, #104
ldr eax, [madr]
I used
Code:
ldr eax, [eax, #104]
I got a further speed increase.

The second test was with runtime checking of whether the address is 32-bit aligned (for now without stock functions for reading/writing unaligned accesses or other optimizations).
The result was some speed increase over my old code.
The code I used for reading is this (the code for writing is similar):
Code:
and tmp2, madr, #3              @ low two bits of the address
orr tmp2, tmp2, tmp2, lsr #1    @ fold bit 1 into bit 0
and tmp2, tmp2, #1              @ tmp2 = 0 if the address is aligned, 1 otherwise
add pc, pc, tmp2, lsl #3        @ pc reads 2 instructions ahead: offset 0 hits the ldr, offset 8 the first ldrb
NOP
ldr eax, [madr]                 @ aligned: a single 32-bit load
b 1f
ldrb tmp1, [madr]               @ unaligned: assemble the value from 4 byte loads
ldrb tmp2, [madr, #1]
orr tmp1, tmp1, tmp2, lsl #8
ldrb tmp2, [madr, #2]
orr tmp1, tmp1, tmp2, lsl #16
ldrb tmp2, [madr, #3]
orr eax, tmp1, tmp2, lsl #24
1:
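For writing, the counterpart follows the same pattern - roughly like this (a sketch only, not the exact code I use):
Code:
and tmp2, madr, #3              @ low two bits of the address
orr tmp2, tmp2, tmp2, lsr #1    @ fold bit 1 into bit 0
and tmp2, tmp2, #1              @ tmp2 = 0 if the address is aligned, 1 otherwise
add pc, pc, tmp2, lsl #3
NOP
str eax, [madr]                 @ aligned: a single 32-bit store
b 1f
strb eax, [madr]                @ unaligned: store the value byte by byte
mov tmp1, eax, lsr #8
strb tmp1, [madr, #1]
mov tmp1, eax, lsr #16
strb tmp1, [madr, #2]
mov tmp1, eax, lsr #24
strb tmp1, [madr, #3]
1: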

Results (small / large file):
native: 0m1.740s / 0m37.540s
recompiled old: 0m9.520s / 2m30.380s
recompiled - unaligned + OS: 0m4.080s / 1m10.170s
recompiled - runtime checking: 0m8.520s / 2m15.790s

When I use stock functions for reading/writing and other optimizations (like using more registers), I'll get a further speed increase. But that requires a lot more work, so I'll test it later.
The speed won't reach that of unaligned accesses + OS, but the code will be OS-neutral.
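As a rough idea of what I mean by stock functions: instead of inlining the byte-by-byte sequence at every memory access, the generated code would branch to a shared helper. A minimal sketch (illustrative only - it assumes lr is free at the call site, and the final routine will look different):
Code:
@ at the call site - madr holds the address, the result ends up in eax
bl mem_read_dword

@ shared helper, emitted once
mem_read_dword:
tst madr, #3                    @ is the address 32-bit aligned?
ldreq eax, [madr]               @ yes: a single 32-bit load
bxeq lr
ldrb eax, [madr]                @ no: assemble the value from 4 byte loads
ldrb tmp1, [madr, #1]
orr eax, eax, tmp1, lsl #8
ldrb tmp1, [madr, #2]
orr eax, eax, tmp1, lsl #16
ldrb tmp1, [madr, #3]
orr eax, eax, tmp1, lsl #24
bx lr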
 
M-HT, I saw your post at vogons - does this mean you're done for now with the dynarec? I was waiting, as I thought you might have some more tweaks following your experiments with the memory alignment.
If you think you're done, I will take the time to add your changes into my pandora dosbox cvs build.
 
Pickle said:
M-HT, I saw your post at vogons - does this mean you're done for now with the dynarec? I was waiting, as I thought you might have some more tweaks following your experiments with the memory alignment.
If you think you're done, I will take the time to add your changes into my pandora dosbox cvs build.
For now, yes.
When I have my Pandora, I'll see if some things can be done to speed it up more.
 
I finished making the changes (in all instructions used in the test executable) and repeated the test with runtime checking.
As expected, I achieved more speed than before.

Results (small / large file):
native: 0m1.740s / 0m37.540s
recompiled old: 0m9.520s / 2m30.380s
recompiled - unaligned + OS: 0m4.080s / 1m10.170s
recompiled - runtime checking: 0m5.540s / 1m31.940s

It's slower than the combination of unaligned accesses + OS, but it's still an impressive speed gain over the old version.
 
Good work. I was wondering how you handled indirect branches and if you special case ret. I was thinking, if you handled calls and returns with native PCs then there are two ways to do it, the obvious way:

call ->
str pc, [esp, #-4]!
b function

ret ->
ldr pc, [esp], #4

And the less obvious way:

call ->
str lr, [esp, #-4]!
bl function
ldr lr, [esp], #4

ret ->
bx lr

The second removes lr as a potential temporary and at first glance looks slower. But it would be possible to include several function calls in between the str/ldr, or it could be omitted entirely for leaf functions.
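For example (just to illustrate the idea), two consecutive calls could then share a single save/restore of lr:

str lr, [esp, #-4]!
bl function_a
bl function_b
ldr lr, [esp], #4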

Of course, if the game ever looks at the stack then you're screwed in either scenario.
 
Exophase said:
Good work. I was wondering how you handled indirect branches and if you special case ret. I was thinking, if you handled calls and returns with native PCs then there are two ways to do it, the obvious way:

call ->
str pc, [esp, #-4]!
b function

ret ->
ldr pc, [esp], #4

And the less obvious way:

call ->
str lr, [esp, #-4]!
bl function
ldr lr, [esp], #4

ret ->
bx lr

The second removes lr as a potential temporary and at first glance looks slower. But it would be possible to include several function calls in between the str/ldr, or it could be omitted entirely for leaf functions.

Of course, if the game ever looks at the stack then you're screwed in either scenario.
The call looks like this:
Code:
ADR tmp1, after_call
stmfd esp!, {tmp1}
b function
LTORG_CALL
after_call:
LTORG_CALL is a macro that either does nothing or inserts .ltorg
In an indirect call, "b function" is replaced with "bx register" or with a load from memory into the pc register.
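For example, an indirect call through a register might look roughly like this (a sketch only - it assumes the register already holds the translated native address, and the register names are illustrative):
Code:
ADR tmp1, after_call
stmfd esp!, {tmp1}              @ push the native return address
bx ecx                          @ jump to the target held in ecx
after_call: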

Plain ret looks like this:
Code:
ldmfd esp!, {pc}

If ret also increases esp, then it looks like this:
Code:
ldr tmp1, [esp]
add esp, esp, #(value + 4)
bx tmp1
LTORG_RET
or like this:
Code:
ldmfd esp!, {tmp1}
add esp, esp, #value
bx tmp1
LTORG_RET
or like this:
Code:
ldr tmp1, [esp]
LDR tmp2, =(value + 4)
add esp, esp, tmp2
bx tmp1
LTORG_RET
depending on the value
LTORG_RET is again a macro that either does nothing or inserts .ltorg

Your second (less obvious) way of calling wouldn't work, because some functions expect to have the return address on the stack (not only for the ret instruction), and lr is also used quite a lot, so it's not good to use it like this.
 
M-HT said:
Your second (less obvious) way of calling wouldn't work, because some functions expect to have the return address on the stack (not only for the ret instruction), and lr is also used quite a lot, so it's not good to use it like this.

I doubt many functions return in a way other than ret or call in a way other than call. If they do then you probably have other problems.
 
Exophase said:
I doubt many functions return in a way other than ret or call in a way other than call. If they do then you probably have other problems.
You're right, there are not many such functions, but they exist, and I don't want to look for them. That's why I'm not using any tricks here and I'm sticking to the x86 way of doing things.
BTW, you can make a call by pushing the address on the stack and using ret. This is also sometimes used (rarely).
 
I found this page (http://n0p.tonych.info/?DOSBox_PPC:DOSBox_0.72) with a PocketPC port of DOSBox 0.72.
The author claims an interesting speed increase over standard DOSBox 0.72, using some techniques which might break game compatibility.
Yesterday he also uploaded the source code (http://n0p.tonych.info/?Sources), so it might be worth checking whether it would help the gp2x port (or just plainly trying his port on the gp2x).
I'll probably do it, but not right now.
 
M-HT, are the 2 movies you're using the Albion ones?
Can I get the DOS version of the player, or are you using the original that came with the game?
I would like to benchmark dosbox.
 
Any hints as to what he's actually doing? I don't feel like diffing anything.
 
Pickle said:
M-HT, are the 2 movies you're using the Albion ones?
Can I get the DOS version of the player, or are you using the original that came with the game?
I would like to benchmark dosbox.
The larger one, which I didn't use for testing dosbox, is from Albion - credits.smk
The small one is from Master of Orion 2 - plntdfin.lbx (despite the different extension, it's a smk file)
The executable I use is not the player, just the decoder part. It decodes the file as fast as possible, without waiting or displaying anything.
The first parameter to it is the name of the smk file (testcpud.exe plntdfin.lbx).
I tested dosbox with cycles=2200 (to keep the cpu working at 100%).

Exophase said:
Any hints as to what he's actually doing? I don't feel like diffing anything.
Most of it is written on his page.
The things that I think bring the most speedup are:
- memory copy optimizations (rep movsb, rep movsw, rep movsd instructions)
- memory access changes (changed functions that access memory) - this probably depends on the compiler

When I finish what I'm doing now, I'm going to test this on gp2x.
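To give an idea of what the memory copy optimization amounts to: instead of handling a rep movsd one element per emulated step, the whole block is copied in one tight loop. Roughly something like this (only an illustration - it assumes esi/edi already point at the right memory, are dword-aligned, and the direction flag is clear):
Code:
cmp ecx, #0                     @ ecx = number of dwords left to copy
beq 2f
1:
ldr tmp1, [esi], #4             @ load a dword from the source, advance esi
str tmp1, [edi], #4             @ store it to the destination, advance edi
subs ecx, ecx, #1
bne 1b
2: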
 
M-HT said:
The things that I think bring the most speedup are:
- memory copy optimizations (rep movsb, rep movsw, rep movsd instructions)
- memory access changes (changed functions that access memory) - this probably depends on the compiler

When I finish what I'm doing now, I'm going to test this on gp2x.
I tested the memory access changes.
With the simple core the speed increase is about 1%; with the dynamic core it's a bit slower than before (a negligible difference).

On PocketPC the speed difference may be higher, because n0p is using gcc v3.3.3 (I'm using gcc v4.1.1).

I'll be testing the memory copy optimizations next.
 
M-HT said:
M-HT said:
The things that I think bring the most speedup are:
- memory copy optimizations (rep movsb, rep movsw, rep movsd instructions)
- memory access changes (changed functions that access memory) - this probably depends on the compiler

When I finish what I'm doing now, I'm going to test this on gp2x.
I tested the memory access changes.
With the simple core the speed increase is about 1%; with the dynamic core it's a bit slower than before (a negligible difference).

On PocketPC the speed difference may be higher, because n0p is using gcc v3.3.3 (I'm using gcc v4.1.1).

I'll be testing the memory copy optimizations next.

If you're testing the same things, i.e. something compiled with a modern compiler, then rep instructions probably aren't used. At the very least they shouldn't be, unless you're targeting a really old x86.
 
Exophase said:
If you're testing the same things, i.e. something compiled with a modern compiler, then rep instructions probably aren't used. At the very least they shouldn't be, unless you're targeting a really old x86.
I'll also be testing it on a different executable (which uses memory copy instructions). But while the rep instructions probably aren't used directly in the program (unless it's hand-written assembly), they are certainly used in the C library - in functions like memcpy, memset, ...
 
I tried the memory copy optimizations, and even those didn't bring more speed - actually, I measured a slight slowdown in my tests.
I don't know if I'm doing something wrong or it really isn't faster than before.
But the point (which I didn't realize before for some reason) is that this optimization affects the simple/normal/full cores, but not the dynamic core, because that uses a different implementation. So even if these optimizations brought more speed, the dynamic core would still be faster.

One more thing I noticed is that in my tests the normal core is actually about 7% faster than the simple core.
I always thought that the simple core was faster than the normal core.
But if this effect is also true for games (and not only for my tests), then the normal core should be used instead of the simple core - of course, only if you can't or don't want to use the dynamic core.
 