QEMU on ARM


Exophase: Yes, I often quote DynamoRIO work. It's probably because it's the only work I found that tested many things related to runtime code generation. But I'm well aware I should be cautious not to rely too much on what's in there, since it's not a simulator (and I usually don't accept things I haven't tested myself).

The hash algorithm that I use in mupen64plus is: hash_table[((vaddr>>16)^vaddr)&0xFFFF]

The lookup and comparison (in linkage_arm.s) is as follows:
Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #12
        bic     r2, r2, #15
        ldr     r2, [r1, r2]!
        teq     r2, r0
[...]
.htptr:
        .word   hash_table
The writeback is for open hashing?
I think you can gain one instruction this way:

Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #16
        ldr     r2, [r1, r2, lsl #4]!
        teq     r2, r0
Note that it wouldn't necessarily be faster :)

Did you play with various HT sizes? QEMU went for 4096 entries but it doesn't use open hashing.
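
(A quick aside: here is that index computation written out in plain C, just as a sketch; the function name is mine.)

Code:
#include <stdint.h>

/* mupen64plus jump hash, written out in C for reference.
 * The &0xFFFF mask implies a 65536-bucket table, versus QEMU's 4096 entries. */
static inline uint32_t jump_hash(uint32_t vaddr)
{
    return ((vaddr >> 16) ^ vaddr) & 0xFFFF;
}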

The reason I did not try to merge blocks in hot paths is that it makes block invalidation more complicated. If a write hits any one of the merged blocks, then you have to invalidate them all.
Does that happen often in N64 programs?

BTW I don't flush the cache every time an instruction is modified. Since the new code is functionally equivalent to the old code, nothing really bad happens if the old code stays in the i-cache. It still works, it's just slower, but maybe not as slow as flushing the cache.
Good point :) One of the guys working on Google V8 (the JavaScript engine) told us the same: he could run his code without having to flush.
 
Ari64 said:
I was thinking more along the lines of replacing the hash table with something else entirely, eg a call/ret stack. I'm not sure that would be any better though.
I was thinking about that too. The problem is programs that play with the return value...
 
Laurent said:
The hash algorithm that I use in mupen64plus is: hash_table[((vaddr>>16)^vaddr)&0xFFFF]

The lookup and comparison (in linkage_arm.s) is as follows:
Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #12
        bic     r2, r2, #15
        ldr     r2, [r1, r2]!
        teq     r2, r0
[...]
.htptr:
        .word   hash_table
The writeback is for open hashing?
I think you can gain one instruction this way:

Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #16
        ldr     r2, [r1, r2, lsl #4]!
        teq     r2, r0
Note that it wouldn't necessarily be faster :)

Did you play with various HT sizes? QEMU went for 4096 entries but it doesn't use open hashing.
The writeback is because if the key matches then it gets the value from the next word (ldr pc, [r1, #4])

I'm wondering why your hash is calculated as two 6-bit hashes instead of 12 bits at once. It seems inefficient.

I was initially concerned that using such a large hash table would result in L2 misses, but that doesn't seem to be happening much so I haven't tried making it smaller.

Oh, and due to the way the pipeline works on the A8, the shift probably isn't any faster, even if it saves an instruction.

Laurent said:
The reason I did not try to merge blocks in hot paths is that it makes block invalidation more complicated. If a write hits any one of the merged blocks, then you have to invalidate them all.
Does that happen often in N64 programs?
Not often, but it can happen. Any program that loads and unloads DLLs is a potential problem.
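
(To make that bookkeeping concrete, here is a hedged sketch of what invalidation could look like if each merged trace records the guest ranges it was built from. The structure and names are hypothetical, not taken from QEMU or mupen64plus.)

Code:
#include <stdint.h>

/* Hypothetical bookkeeping for merged blocks: every trace remembers which
 * guest address ranges it was compiled from, so a write that hits any of
 * them invalidates the whole trace.  Illustrative only. */
#define MAX_RANGES 8

struct trace {
    uint32_t start[MAX_RANGES];   /* guest ranges merged into this trace */
    uint32_t end[MAX_RANGES];
    int nranges;
    int valid;
    struct trace *next;           /* all traces touching a given page */
};

/* Called from the write handler when guest code memory is modified. */
void invalidate_traces(struct trace *traces_on_page, uint32_t write_addr)
{
    for (struct trace *t = traces_on_page; t; t = t->next) {
        for (int i = 0; i < t->nranges; i++) {
            if (write_addr >= t->start[i] && write_addr < t->end[i]) {
                t->valid = 0;     /* the whole merged trace must go */
                break;
            }
        }
    }
}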

Laurent said:
BTW I don't flush the cache every time an instruction is modified. Since the new code is functionally equivalent to the old code, nothing really bad happens if the old code stays in the i-cache. It still works, it's just slower, but maybe not as slow as flushing the cache.
Good point :) One of the guys working on Google V8 (the JavaScript engine) told us the same: he could run his code without having to flush.
Of course this only works if you're modifying single instructions, or a single cache line.
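
(As a minimal sketch of that policy, assuming GCC's __builtin___clear_cache is available; the decision logic is illustrative, not either emulator's actual code.)

Code:
#include <stdint.h>

/* Patch one 32-bit instruction in the translation cache.  If the new code is
 * functionally equivalent to whatever may still sit in the i-cache, skip the
 * flush: a stale copy just runs the older, equivalent sequence until it gets
 * evicted naturally.  Illustrative only. */
static void patch_insn(uint32_t *slot, uint32_t new_insn, int equivalent)
{
    *slot = new_insn;                   /* update goes through the d-cache */
    if (!equivalent) {
        char *p = (char *)slot;
        __builtin___clear_cache(p, p + sizeof(uint32_t));
    }
}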

Laurent said:
Ari64 said:
I was thinking more along the lines of replacing the hash table with something else entirely, eg a call/ret stack. I'm not sure that would be any better though.
I was thinking about that too. The problem is programs that play with the return value...
Do you encounter a modified return value more often than a hash table collision?
 
Ari64 said:
Laurent said:
The hash algorithm that I use in mupen64plus is: hash_table[((vaddr>>16)^vaddr)&0xFFFF]

The lookup and comparison (in linkage_arm.s) is as follows:
Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #12
        bic     r2, r2, #15
        ldr     r2, [r1, r2]!
        teq     r2, r0
[...]
.htptr:
        .word   hash_table
The writeback is for open hashing?
I think you can gain one instruction this way:

Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #16
        ldr     r2, [r1, r2, lsl #4]!
        teq     r2, r0
Note that it wouldn't necessarily be faster :)

Did you play with various HT sizes? QEMU went for 4096 entries but it doesn't use open hashing.
The writeback is because if the key matches then it gets the value from the next word (ldr pc, [r1, #4])
I'm missing something (sorry, my brain is again not functioning due to heavy pain): the writeback will write r1+r2 to r1, which won't be the next word in general.

I'm wondering why your hash is calculated as two 6-bit hashes instead of 12 bits at once. It seems inefficient.
That's QEMU's original hash function; I'm not sure why it's done this way.

Do you encounter a modified return value more often than a hash table collision?
I have not yet reached the point where I can do this kind of analysis. If I were to do it now, I would probably not find a single modification of the return value on the stack, given that I only run C code. I guess older software, especially games, did such tricks.
 
Laurent said:
Ari64 said:
The writeback is because if the key matches then it gets the value from the next word (ldr pc, [r1, #4])
I'm missing something (sorry, my brain is again not functioning due to heavy pain): the writeback will write r1+r2 to r1, which won't be the next word in general.
It's like this:
Code:
        ldr     r2, [r1, r2]!
        teq     r2, r0
        bne     .L12
        ldr     pc, [r1, #4]
The hash table contains key-value pairs: first the virtual address, then the translated address. I didn't put them in separate locations like QEMU does.
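
(In C, that layout and lookup amount to roughly the following; whether a bucket holds one pair or two isn't stated here, so the second slot is an assumption based on the 16-byte stride in the index calculation.)

Code:
#include <stdint.h>

/* Key/value pairs stored together, as described above.  The 16-byte stride
 * in the assembly (lsr #12 + bic #15) leaves room for two pairs per bucket;
 * treating the second pair as an overflow slot is my assumption. */
struct ht_entry {
    uint32_t vaddr;     /* guest virtual address (the key)     */
    uint32_t tcaddr;    /* translated code address (the value) */
};

struct ht_bucket {
    struct ht_entry e[2];
};

struct ht_bucket hash_table[65536];

void *jump_vaddr_c(uint32_t vaddr)
{
    struct ht_bucket *b = &hash_table[((vaddr >> 16) ^ vaddr) & 0xFFFF];
    if (b->e[0].vaddr == vaddr)                   /* teq r2, r0          */
        return (void *)(uintptr_t)b->e[0].tcaddr; /* ldr pc, [r1, #4]    */
    if (b->e[1].vaddr == vaddr)                   /* the elided "[...]"  */
        return (void *)(uintptr_t)b->e[1].tcaddr;
    return 0;                                     /* miss: slow path     */
}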

Laurent said:
Do you encounter a modified return value more often than a hash table collision?
I have not yet reached the point where I can do this kind of analysis. If I were to do it now, I would probably not find a single modification of the return value on the stack, given that I only run C code. I guess older software, especially games, did such tricks.
Generally anything multithreaded, exception handlers, or position-independent code that gets the instruction pointer from the stack.
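
(For what it's worth, a call/ret stack would look roughly like the sketch below; the guest return address has to be verified on every pop precisely because of the cases listed above. All names here are illustrative.)

Code:
#include <stdint.h>

/* Sketch of a call/ret stack as a hash-table replacement for returns.
 * On an emulated call, push the guest return address together with the
 * host address of the translated continuation; on an emulated ret, pop
 * and verify.  If the guest modified its return address (threads, PIC
 * tricks, exception handlers), fall back to the normal lookup. */
#define RAS_SIZE 16

static struct { uint32_t guest_ra; void *host_target; } ras[RAS_SIZE];
static unsigned ras_top;

static void ras_push(uint32_t guest_ra, void *host_target)
{
    ras[ras_top % RAS_SIZE].guest_ra = guest_ra;
    ras[ras_top % RAS_SIZE].host_target = host_target;
    ras_top++;
}

static void *ras_pop(uint32_t actual_guest_ra, void *(*slow_lookup)(uint32_t))
{
    if (ras_top == 0)
        return slow_lookup(actual_guest_ra);
    ras_top--;
    if (ras[ras_top % RAS_SIZE].guest_ra == actual_guest_ra)
        return ras[ras_top % RAS_SIZE].host_target;   /* prediction hit  */
    return slow_lookup(actual_guest_ra);              /* modified return */
}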
 
Ari64 said:
It's like this:
Code:
        ldr     r2, [r1, r2]!
        teq     r2, r0
        bne     .L12
        ldr     pc, [r1, #4]
The hash table contains key-value pairs: first the virtual address, then the translated address. I didn't put them in separate locations like QEMU does.
So I definitely don't understand:
Code:
        ldr     r2, [r1, r2]!  ; r2' <- mem[r1 + r2]  r1' <- r1 + r2
        teq     r2, r0
        bne     .L12
        ldr     pc, [r1, #4]   ; pc <- mem[r1' + 4] = mem[r1 + r2 + 4]
Is that really what you want? Your key and your pc are not in contiguous locations.

EDIT: OK, my brain is definitely out-of-order; sorry.
 
Laurent said:
I'm wondering why your hash is calculated as two 6-bit hashes instead of 12 bits at once. It seems inefficient.
That's QEMU's original hash function; I'm not sure why it's done this way.
I think I understand it now. This is tb_jmp_cache_hash_page and tb_jmp_cache_hash_func, right? It groups the hashes by page so that it can delete all the hashes associated with a page when the TLB changes.
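
(Something along those lines, reconstructed as an illustration; the constants and names below are mine, not the verbatim QEMU source. The upper index bits depend only on the page number, so all entries for one guest page occupy a contiguous slice of the 4096-entry array and can be cleared together.)

Code:
#include <stdint.h>

/* Illustrative reconstruction of the two-part scheme: with a 4096-entry
 * cache, the upper 6 index bits are derived from the page number and the
 * lower 6 from the offset within the page, so flushing one guest page means
 * clearing one 64-entry slice.  Constants and names here are mine. */
#define CACHE_BITS 12
#define PAGE_PART   6                 /* upper bits: page-derived      */
#define ADDR_PART   (CACHE_BITS - PAGE_PART)
#define PAGE_BITS  12                 /* guest page size assumed 4 KB  */

static unsigned hash_page_part(uint32_t pc)
{
    return ((pc >> PAGE_BITS) & ((1u << PAGE_PART) - 1)) << ADDR_PART;
}

static unsigned hash_func(uint32_t pc)
{
    return hash_page_part(pc) | ((pc >> 2) & ((1u << ADDR_PART) - 1));
}

void *jmp_cache[1 << CACHE_BITS];

/* Invalidate every cached entry that could belong to the given page. */
static void flush_page(uint32_t page_start)
{
    unsigned base = hash_page_part(page_start);
    for (unsigned i = 0; i < (1u << ADDR_PART); i++)
        jmp_cache[base + i] = 0;
}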

You might need to find another way of doing this for your "hot path" optimization.
 
Quick update:
Code:
nbench gcc 4.1.2
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 2.126
FLOATING-POINT INDEX: 0.277
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
MEMORY INDEX        : 0.397
INTEGER INDEX       : 0.659
FLOATING-POINT INDEX: 0.154
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38

nbench gcc 2.96
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 1.388
FLOATING-POINT INDEX: 0.262
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
MEMORY INDEX        : 0.289
INTEGER INDEX       : 0.396
FLOATING-POINT INDEX: 0.145
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38

Nothing spectacular, but I'm starting to like the generated code (though it's still not what I want it to be).

And still that x87 and its stack-based register file haunt me... I'll have to take care of it anyway, but that really frightens me :p

PS - As usual the results above were not obtained on a Cortex-A8. A Pandora @ 500MHz should be 5-10% better.
 
Laurent said:
PS - As usual the results above were not obtained on a Cortex-A8. A Pandora @ 500MHz should be 5-10% better.

How mysterious. I imagine you won't tell me what they were obtained on if I ask. Strange, though; I thought you were doing this for the BeagleBoard...

Actually, was "Cortex-A8" a typo?
 
I lack a screen to set up my Beagle :(

The board I'm talking about is a Cortex-A9 board.

EDIT - For fun, here is how things stood when I started:

Code:
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 0.427
FLOATING-POINT INDEX: 0.136
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
MEMORY INDEX        : 0.080
INTEGER INDEX       : 0.132
FLOATING-POINT INDEX: 0.075
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38
 
nice!

Laurent said:
And still that x87 and its stack-based register file haunt me... I'll have to take care of it anyway, but that really frightens me :p
No kidding. That abomination has marked me for life; to this day I still think of proper FP register files as 'advanced'.
 
I finally started taking care of x87.

I took code from fdlibm for transcendental functions (the libm on the board isn't compiled with VFP) and I started adding support for FP in QEMU, which only knew about integers. A side effect is that there's no fast support for FP80, and this will cause some problems (someone told me some old demos use 80-bit loads/stores to speed up memory copying); on top of that, the likelihood of discrepancies for non-standard numbers (NaN, ...) is much higher.

Note the board has an A9, which has a much faster VFP than the A8's VFPlite. On the other hand, given how much code I have to emit to handle the x87 register stack, FP ops are rarely back-to-back (though stores are surely too close to the instructions producing their results).

Code:
nbench gcc 4.1.2
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 2.139
FLOATING-POINT INDEX: 0.937
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
MEMORY INDEX        : 0.398
INTEGER INDEX       : 0.665
FLOATING-POINT INDEX: 0.520
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38


nbench gcc 2.96
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 1.392
FLOATING-POINT INDEX: 0.755
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
MEMORY INDEX        : 0.291
INTEGER INDEX       : 0.397
FLOATING-POINT INDEX: 0.419
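
(Coming back to the x87 problem: the registers are addressed relative to a moving top-of-stack pointer, so every guest FP op drags index bookkeeping along when it's mapped onto a flat register file. A simplified, interpreter-style sketch of the model, not the actual codegen; FP80 is narrowed to double here, as the port seems to do.)

Code:
#include <stdint.h>

/* Simplified model of the x87 register stack: ST(i) is an index relative to
 * a moving top-of-stack.  A translator mapping this onto VFP's flat register
 * file has to track (or emit code to track) "top" for every operation, which
 * is where the extra emitted code comes from.  Purely illustrative. */
typedef struct {
    double st[8];      /* FP80 narrowed to double for this sketch */
    unsigned top;      /* index of ST(0) */
} X87;

#define ST(cpu, i) ((cpu)->st[((cpu)->top + (i)) & 7])

static void x87_fld(X87 *cpu, double val)   /* push */
{
    cpu->top = (cpu->top - 1) & 7;
    ST(cpu, 0) = val;
}

static void x87_faddp(X87 *cpu)             /* ST(1) += ST(0); pop */
{
    ST(cpu, 1) += ST(cpu, 0);
    cpu->top = (cpu->top + 1) & 7;
}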
 
Laurent said:
I lack a screen to set up my Beagle :(

The board I'm talking about is a Cortex-A9 board.

Why do you need a screen to set up your BeagleBoard? You can still use the serial console even if the screen is not plugged in. I do it all the time.

Where did you get that A9 board?
 
Hrm, so these results will be much better than what we're going to get on the Pandora. That's a tad annoying. Also, I didn't think A9s were in use in anything yet...
 
Laurent clearly has hardware that's probably not available to the general public. He has said that the OMAP3530 would actually perform a bit better, because this board is only clocked at 400MHz.

What I'm concerned about is that unless he's scheduling the recompiled code (I doubt it, but I'd love to hear otherwise), instructions won't be paired well for the A8. This is in contrast to the output of a compiler like GCC, which will at least make some attempt at proper scheduling. The A9's out-of-order architecture negates this, so it could be giving a bigger boost relative to the A8 than is being accounted for.
 
You'll understand I can't comment too much on all that :)

I don't intend to do any instruction scheduling except where it's easy, where easy means it doesn't cost too many codegen cycles.
Anyway, given the number of ld/st I currently have, pairing will be low.

BTW it is well known that register allocation and scheduling interact strongly, especially for in-order CPUs. And for sure, I won't change the reg alloc just to get good scheduling on the A8 (except perhaps by trying to use as many ARM regs as possible by not reusing them when they're dead; I'll take a look later).
 
Laurent said:
I don't intend to do any instruction scheduling except where it's easy, where easy means it doesn't cost too many codegen cycles.
Anyway, given the number of ld/st I currently have, pairing will be low.

BTW it is well known that register allocation and scheduling interact strongly, especially for in-order CPUs. And for sure, I won't change the reg alloc just to get good scheduling on the A8 (except perhaps by trying to use as many ARM regs as possible by not reusing them when they're dead; I'll take a look later).
Lots of stores are a problem. It's surprisingly easy to fill the write buffers on the A8. I don't know if the A9 is any better in this regard.

I wrote code to flag dead registers (look at the unneeded_registers function). This can be used to avoid writing registers to memory, although I went for the slightly more risky approach of dropping them entirely. This is only risky in the case where you have an interrupt and some registers are missing from the CPU state, so it doesn't really break anything.
 
Ari64 said:
I wrote code to flag dead registers (look at the unneeded_registers function). This can be used to avoid writing registers to memory, although I went for the slightly more risky approach of dropping them entirely. This is only risky in the case where you have an interrupt and some registers are missing from the CPU state, so it doesn't really break anything.

Hope you're not emulating an OS performing task switching...
 
In case it's not clear what I meant, here's a theoretical example:

Code:
 mul %ebx          # edx:eax = eax * ebx (edx receives the high half)
 mov %eax,(%esi)   # store the low half; this store can page-fault
 mov (%edi),%edx   # edx is overwritten before its value is ever read

edx is written by the first instruction but never used. It gets overwritten by the third instruction. So it's unneeded, and on ARM it's possible to use mul instead of umull for this.

The only problem is what happens when the second instruction causes a page fault. To enter the interrupt handler with the correct values, you'd have to recalculate edx.

In mupen64plus I don't bother fixing such cases; the registers end up with bogus values during interrupts, and it doesn't ever cause problems in practice.
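
(The analysis behind flagging those registers is essentially a backwards liveness scan over the decoded block. Here is a hedged sketch of the idea, not the actual unneeded_registers code; the data layout is made up for illustration.)

Code:
#include <stdint.h>

/* Backwards scan over a decoded block marking registers whose results are
 * never read before being overwritten.  This is the general idea behind
 * flagging "unneeded" registers; the structures are illustrative, not the
 * actual mupen64plus ones. */
#define NREGS 8      /* e.g. the x86 integer registers */

struct decoded_insn {
    uint32_t reads;    /* bitmask of registers read       */
    uint32_t writes;   /* bitmask of registers written    */
    uint32_t unneeded; /* filled in: writes nobody reads  */
};

void flag_unneeded(struct decoded_insn *insn, int n)
{
    /* At a block exit, conservatively assume everything is live. */
    uint32_t live = (1u << NREGS) - 1;

    for (int i = n - 1; i >= 0; i--) {
        insn[i].unneeded = insn[i].writes & ~live;  /* dead result        */
        live &= ~insn[i].writes;                    /* killed by the write */
        live |= insn[i].reads;                      /* made live by a read */
    }
}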


Exophase said:
Hope you're not emulating an OS performing task switching...
I am, but as long as it returns to the same point after the task switch, it's okay.
 