Qemu on ARM


Laurent said:
I took the oldest compiler I could get my hands on: gcc 2.96. The previous results that were posted were using gcc 4.1.2.
Results on my machine:

Wow, the difference was even bigger than I expected. This really skews nbench results that people have been posting for other things too.

Laurent said:
You're stating the obvious. Is that to make it clear to others, or do you think I'm utterly stupid (which I am sometimes)? :p

First you gave an example that was not really applicable and was probably representative of a worst-case scenario. Then you said that things will never be that good. Of course you're going to give the impression that Wine is always going to carry a big cost with it... why even mention your example if it's not something you think will apply? You can't generalize from it, but that's what you were doing, or at the very least that's what it looked like you were doing.

Laurent said:
My point was just to show that Wine's cost isn't necessarily small. And the OpenGL example isn't that stupid: the layer is extremely thin compared to some other Windows libraries, since you basically get a one-to-one mapping of the API (although of course OpenGL has its own cost moving data around, which could have been a problem here, but in this case it wasn't, thanks to the use of VBOs).

That's beside the point. It doesn't matter if it's a thin layer if the software is interacting with that layer hundreds of times more than a typical 2D game would interact with its layers. If people were expecting 3D gaming based on your results then they were deluding themselves to begin with. Instead of saying that Wine can have overhead, why not find out how much overhead it has for applications more realistic for QEMU on ARM? Like some of the games mentioned in this thread.

Laurent said:
Anyway, as I wrote above: I'll see how bad things are once I'm happy with the QEMU-generated code.

Okay, we'll look forward to it.
 
Exophase said:
Instead of saying that Wine can have overhead, why not find out how much overhead it has for applications more realistic for QEMU on ARM? Like some of the games mentioned in this thread.
Given I own none of the mentioned games (AoE, SimCity 2000 and Starcraft), that's a problem :)
Is there some free game of that era that runs on Windows?

Ah, and given I still use a remote board, there's no way I'll be able to test graphics things.
 
Laurent said:
Given I own none of the mentioned games (AoE, SimCity 2000 and Starcraft), that's a problem :)
Is there some free game of that era that runs on Windows?
This works with wine: http://www.blizzard.com/us/starcraft/scdemo.html

Laurent said:
Ah, and given I still use a remote board, there's no way I'll be able to test graphics things.
Then you can't test games at all?
 
Hi guys,

I don't know much about QEMU, but I seem to understand from this discussion that its x86 dynamic recompiler is better than DOSBox's for an ARM backend.
I'm not interested in Windows, but does this open up a possibility of faster DOS gaming in any way?

Cheers
 
Laurent said:
Anyway, as I wrote above: I'll see how bad things are once I'm happy with the QEMU-generated code.
Is this with the latest QEMU, or have you made modifications to it? I'm curious what the code generator looks like.
 
Ari64 said:
Is this with the latest QEMU, or have you made modifications to it? I'm curious what the code generator looks like.
I have various local modifications that I'm trying to push into QEMU head. The most important one (a block linking fix) was committed yesterday.

Another one was proposed last week: it makes sure x86 registers can be allocated to ARM regs instead of being read from memory every time they are used within a block.

And finally, another significant one relies on a patch I proposed months ago but that won't make it into mainline: assume standard ld/st won't fault, so don't save temporaries before issuing a ld/st. This could be made to work by catching the fault, regenerating code until you reach the faulting instruction, and finding out what the temporaries contain at that point, which makes it possible to save the x86 registers back to the context and then jump into the real signal handler. I will probably add this correct behaviour, but not now.
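In outline, that recovery path could look like the following C sketch. Every helper name here is hypothetical; it just shows the shape of the idea (find the faulting block, re-run the translator up to the faulting instruction to learn what lives where, write the x86 state back, then enter the real handler).
Code:
#include <signal.h>
#include <stdint.h>
#include <ucontext.h>

/* Hypothetical helpers; the real names in QEMU differ. */
extern void *find_faulting_tb(uintptr_t host_pc);
extern void retranslate_until(void *tb, uintptr_t host_pc, void *liveness_out);
extern void write_back_x86_regs(ucontext_t *uc, void *liveness);
extern void real_signal_handler(int sig, siginfo_t *si, void *ctx);

static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
    ucontext_t *uc = ctx;
    uintptr_t host_pc = uc->uc_mcontext.arm_pc;  /* ARM/Linux specific */

    /* Which translated block did we fault in? */
    void *tb = find_faulting_tb(host_pc);

    /* Re-run translation of that block up to host_pc to learn which ARM
       registers hold which x86 registers and temporaries at that point. */
    char liveness[64];
    retranslate_until(tb, host_pc, liveness);

    /* Save the live x86 state back into the context structure, then hand
       over to the guest's real signal handling path. */
    write_back_x86_regs(uc, liveness);
    real_signal_handler(sig, si, ctx);
}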
 
Laurent said:
Another one was proposed last week: it makes sure x86 registers can be allocated to ARM regs instead of being read from memory every time they are used within a block.

And finally, another significant one relies on a patch I proposed months ago but that won't make it into mainline: assume standard ld/st won't fault, so don't save temporaries before issuing a ld/st. This could be made to work by catching the fault, regenerating code until you reach the faulting instruction, and finding out what the temporaries contain at that point, which makes it possible to save the x86 registers back to the context and then jump into the real signal handler. I will probably add this correct behaviour, but not now.

These comments suggest that you are relying on dynamic register allocation when translation to ARM should be fully capable of using static global register allocation. Maybe I need more explanation. I can't imagine any other registers that would be live over a fault aside from cached emulated registers. If you still need to save all of the x86 registers, and not just the ones mapped to ARM registers that are caller-save (which is admittedly most of them), then the mapping would be static and wouldn't require any context checking or per-instance code.
 
You're right: register allocation is fully dynamic. It's on my TODO list to change that, but I think it should be done alongside more invasive changes, such as not doing instruction-per-instruction translation, which is a big limitation of QEMU (though flag handling is done across a block). I am still unsure what to statically allocate to ARM registers; there aren't enough of them. Even if segment registers were not put in ARM regs, that would mean working with 7 registers; that's probably enough, but it will make things slightly more complicated in some places.

As far as saving things across faults goes, QEMU uses temporaries to hold some results, such as partial flags, and these would need to be recomputed in case of a fault (in fact even x86 registers are a form of temporaries).

I'm starting to think that keeping some of the fundamentals of QEMU prevents me from doing things in the most efficient way. But the advantage is that it provides me with a solid basis and spares me the pain of having to fully understand x86 before doing any work :)

To perhaps make things clearer, here is an example of block translation (note the branches are not yet patched when this dump is done).
R7 is a pointer to the context structure which contains x86 registers (offset 0 is EAX) among other things.
Code:
IN: 
0x08048054:  mov    $0xa,%ecx
0x08048059:  call   0x804806d

OP (14):
 ---- 0x8048054
 movi_i32 tmp0,$0xa
 mov_i32 ecx,tmp0

 ---- 0x8048059
 movi_i32 tmp0,$0x804805e
 mov_i32 tmp2,esp
 movi_i32 tmp8,$0x4
 sub_i32 tmp2,tmp2,tmp8
 qemu_st32 tmp0,tmp2
 mov_i32 esp,tmp2
 goto_tb $0x0
 movi_i32 tmp4,$0x804806d
 st_i32 tmp4,env,$0x20
 exit_tb $0x422c1010

OUT: [size=60]
0x001b2190:  e5970010  ldr	r0, [r7, #16]
0x001b2194:  e2400004  sub	r0, r0, #4	; 0x4
0x001b2198:  e308105e  movw	r1, #32862	; 0x805e
0x001b219c:  e3401804  movt	r1, #2052	; 0x804
0x001b21a0:  e5801000  str	r1, [r0]
0x001b21a4:  e300100a  movw	r1, #10	; 0xa
0x001b21a8:  e5871004  str	r1, [r7, #4]
0x001b21ac:  e5870010  str	r0, [r7, #16]
0x001b21b0:  ea000000  b	0x1b21b8
0x001b21b4:  e308006d  movw	r0, #32877	; 0x806d
0x001b21b8:  e3400804  movt	r0, #2052	; 0x804
0x001b21bc:  e5870020  str	r0, [r7, #32]
0x001b21c0:  e3010010  movw	r0, #4112	; 0x1010
0x001b21c4:  e344022c  movt	r0, #16940	; 0x422c
0x001b21c8:  e8bd8f70  pop	{r4, r5, r6, r8, r9, sl, fp, pc}

----------------
IN: 
0x0804806d:  ret    

OP (7):
 ---- 0x804806d
 mov_i32 tmp2,esp
 qemu_ld32u tmp0,tmp2
 movi_i32 tmp8,$0x4
 add_i32 esp,esp,tmp8
 st_i32 tmp0,env,$0x20
 exit_ind_tb $0x0

OUT: [size=88]
0x001b21d0:  e5970010  ldr	r0, [r7, #16]
0x001b21d4:  e5900000  ldr	r0, [r0]
0x001b21d8:  e5971010  ldr	r1, [r7, #16]
0x001b21dc:  e2811004  add	r1, r1, #4	; 0x4
0x001b21e0:  e5870020  str	r0, [r7, #32]
0x001b21e4:  e5871010  str	r1, [r7, #16]
0x001b21e8:  e5970020  ldr	r0, [r7, #32]
0x001b21ec:  e0201320  eor	r1, r0, r0, lsr #6
0x001b21f0:  e2012bfc  and	r2, r1, #258048	; 0x3f000
0x001b21f4:  e201103f  and	r1, r1, #63	; 0x3f
0x001b21f8:  e1811322  orr	r1, r1, r2, lsr #6
0x001b21fc:  e0872101  add	r2, r7, r1, lsl #2
0x001b2200:  e5922390  ldr	r2, [r2, #912]
0x001b2204:  e3520000  cmp	r2, #0	; 0x0
0x001b2208:  15921000  ldrne	r1, [r2]
0x001b220c:  0a000003  beq	0x1b2220
0x001b2210:  e1500001  cmp	r0, r1
0x001b2214:  1a000001  bne	0x1b2220
0x001b2218:  e5920010  ldr	r0, [r2, #16]
0x001b221c:  e12fff10  bx	r0
0x001b2220:  e3000000  movw	r0, #0	; 0x0
0x001b2224:  e8bd8f70  pop	{r4, r5, r6, r8, r9, sl, fp, pc}

----------------
IN: 
0x0804805e:  dec    %ecx
0x0804805f:  jne    0x8048059

OP (21):
 ---- 0x804805e
 mov_i32 tmp0,ecx
 movi_i32 tmp8,$0x1
 sub_i32 tmp0,tmp0,tmp8
 mov_i32 ecx,tmp0
 movi_i32 tmp8,$cc_compute_c
 call tmp8,$0x10,$1,cc_src,cc_op
 mov_i32 cc_dst,tmp0

 ---- 0x804805f
 movi_i32 cc_op,$0x20
 movi_i32 tmp8,$0x0
 brcond_i32 cc_dst,tmp8,ne,$0x0
 goto_tb $0x0
 movi_i32 tmp4,$0x8048061
 st_i32 tmp4,env,$0x20
 exit_tb $0x422c10a8
 set_label $0x0
 goto_tb $0x1
 movi_i32 tmp4,$0x8048059
 st_i32 tmp4,env,$0x20
 exit_tb $0x422c10a9

OUT: [size=112]
0x001b2230:  e5970004  ldr	r0, [r7, #4]
0x001b2234:  e2400001  sub	r0, r0, #1	; 0x1
0x001b2238:  e1a01000  mov	r1, r0
0x001b223c:  e5870190  str	r0, [r7, #400]
0x001b2240:  e5970030  ldr	r0, [r7, #48]
0x001b2244:  e5871004  str	r1, [r7, #4]
0x001b2248:  ebfa823e  bl	0x52b48
0x001b224c:  e5971190  ldr	r1, [r7, #400]
0x001b2250:  e3002020  movw	r2, #32	; 0x20
0x001b2254:  e5872030  str	r2, [r7, #48]
0x001b2258:  e5870028  str	r0, [r7, #40]
0x001b225c:  e587102c  str	r1, [r7, #44]
0x001b2260:  e3510000  cmp	r1, #0	; 0x0
0x001b2264:  1a000006  bne	0x1b2284
0x001b2268:  ea000000  b	0x1b2270
0x001b226c:  e3080061  movw	r0, #32865	; 0x8061
0x001b2270:  e3400804  movt	r0, #2052	; 0x804
0x001b2274:  e5870020  str	r0, [r7, #32]
0x001b2278:  e30100a8  movw	r0, #4264	; 0x10a8
0x001b227c:  e344022c  movt	r0, #16940	; 0x422c
0x001b2280:  e8bd8f70  pop	{r4, r5, r6, r8, r9, sl, fp, pc}
0x001b2284:  ea000000  b	0x1b228c
0x001b2288:  e3080059  movw	r0, #32857	; 0x8059
0x001b228c:  e3400804  movt	r0, #2052	; 0x804
0x001b2290:  e5870020  str	r0, [r7, #32]
0x001b2294:  e30100a9  movw	r0, #4265	; 0x10a9
0x001b2298:  e344022c  movt	r0, #16940	; 0x422c
0x001b229c:  e8bd8f70  pop	{r4, r5, r6, r8, r9, sl, fp, pc}

----------------
IN: 
0x08048059:  call   0x804806d

OP (11):
 ---- 0x8048059
 movi_i32 tmp0,$0x804805e
 mov_i32 tmp2,esp
 movi_i32 tmp8,$0x4
 sub_i32 tmp2,tmp2,tmp8
 qemu_st32 tmp0,tmp2
 mov_i32 esp,tmp2
 goto_tb $0x0
 movi_i32 tmp4,$0x804806d
 st_i32 tmp4,env,$0x20
 exit_tb $0x422c10f4

OUT: [size=52]
0x001b22a0:  e5970010  ldr	r0, [r7, #16]
0x001b22a4:  e2400004  sub	r0, r0, #4	; 0x4
0x001b22a8:  e308105e  movw	r1, #32862	; 0x805e
0x001b22ac:  e3401804  movt	r1, #2052	; 0x804
0x001b22b0:  e5801000  str	r1, [r0]
0x001b22b4:  e5870010  str	r0, [r7, #16]
0x001b22b8:  ea000000  b	0x1b22c0
0x001b22bc:  e308006d  movw	r0, #32877	; 0x806d
0x001b22c0:  e3400804  movt	r0, #2052	; 0x804
0x001b22c4:  e5870020  str	r0, [r7, #32]
0x001b22c8:  e30100f4  movw	r0, #4340	; 0x10f4
0x001b22cc:  e344022c  movt	r0, #16940	; 0x422c
0x001b22d0:  e8bd8f70  pop	{r4, r5, r6, r8, r9, sl, fp, pc}

----------------
IN: 
0x08048061:  mov    $0x0,%ebx
0x08048066:  mov    $0xfc,%eax
0x0804806b:  int    $0x80

OP (13):
 ---- 0x8048061
 movi_i32 tmp0,$0x0
 mov_i32 ebx,tmp0

 ---- 0x8048066
 movi_i32 tmp0,$0xfc
 mov_i32 eax,tmp0

 ---- 0x804806b
 movi_i32 tmp4,$0x804806b
 st_i32 tmp4,env,$0x20
 movi_i32 tmp8,$0x80
 movi_i32 tmp9,$0x2
 movi_i32 tmp10,$raise_interrupt
 call tmp10,$0x0,$0,tmp8,tmp9

OUT: [size=40]
0x001b22e0:  e308006b  movw	r0, #32875	; 0x806b
0x001b22e4:  e3400804  movt	r0, #2052	; 0x804
0x001b22e8:  e5870020  str	r0, [r7, #32]
0x001b22ec:  e3000080  movw	r0, #128	; 0x80
0x001b22f0:  e3001002  movw	r1, #2	; 0x2
0x001b22f4:  e30020fc  movw	r2, #252	; 0xfc
0x001b22f8:  e5872000  str	r2, [r7]
0x001b22fc:  e3002000  movw	r2, #0	; 0x0
0x001b2300:  e587200c  str	r2, [r7, #12]
0x001b2304:  ebfa82aa  bl	0x52db4
 
Looking at that intermediate code makes me want to cry; they could at least have allowed using immediates in far more contexts. At least the end result propagates that back out, but that really shouldn't be something you have to do.

It doesn't seem like doing static allocation should be a big change; doesn't QEMU allow for this in at least some capacity? E.g., "env" is statically allocated. Is getting 8 registers really going to be tricky? You said 7, but the code here is clearly treating esp as a normal register, so it'd be 8, right? What is QEMU using all those registers for? Is it a calling convention issue?

Yes, for 32-bit code I see no reason to ever bother allocating the segment registers at all, statically or otherwise. For 16-bit code I'd probably go for some kind of mixed intermediate form combining SS and SP, keeping DS/ES in an expanded offset form, and not worrying about CS.

Are these tiny blocks very common, or did you want to show them instead of larger ones (which would show some benefit of register allocation and dead flag elimination) for some reason? Either way, I think moving to static allocation will make a huge performance difference, and doing something to make flag generation closer to ARM native might help a little, especially for tight loops.

I'm very curious about the reordering of the ARM output that's going on in the last block (maybe other places too; I didn't look closely enough). Are you performing scheduling?

The exit_ind_tb implementation is a hash table lookup, right? Strange that this gets inlined but flag generation for a dec doesn't. The hash looks a little weak for what you pay for it; do you think it's preferable to a multiplication/shift/and sort of thing?
 
Yes, it's not pretty at the moment :)

As far as immediates go, doing it this way makes TCG smaller, and it would ease the transition to real SSA (though it's not really needed for SSA).

When I said statically allocating 8 x86 registers, I meant something such as globally saying r0 = eax and so on. So given 16 ARM registers, minus PC and those 8, you're left with 7 ARM regs for everything else. You also need at least one temporary register to build 32-bit immediates that don't fit in the immediate forms of ARM instructions.

I chose tiny blocks just to show how things are (in fact it's the small asm program that helped me discover block linking was broken).

The ind_tb thing is one of my additions. It should indeed be moved out of line, except that it would then have to return where to jump to, so that the final indirect branch stays at each call site to help branch prediction. It hits at about 70% on nbench; I will consider changing it once I move to more realistic things than nbench.
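For reference, the inlined lookup visible in the ret block above corresponds to roughly the following C logic. This is a sketch; the descriptor layout and all names are guesses from the disassembly (the table sits at offset 912 in the context, and the host entry point at offset 16 in each descriptor).
Code:
#include <stdint.h>

struct tb_desc {             /* layout guessed from the dump */
    uint32_t guest_pc;       /* offset 0: x86 address of the block */
    uint32_t pad[3];
    void (*host_code)(void); /* offset 16: generated ARM code */
};

#define TB_HASH_BITS 12
static struct tb_desc *tb_hash[1 << TB_HASH_BITS];

extern void slow_path_lookup(uint32_t guest_pc);  /* hypothetical */

/* Mirrors the ARM sequence: x = pc ^ (pc >> 6), then fold bits [0..5]
   and [12..17] of x into a 12-bit index. */
static inline unsigned tb_hash_func(uint32_t pc)
{
    uint32_t x = pc ^ (pc >> 6);
    return (x & 0x3f) | ((x & 0x3f000) >> 6);
}

static void jump_indirect(uint32_t guest_pc)
{
    struct tb_desc *tb = tb_hash[tb_hash_func(guest_pc)];
    if (tb && tb->guest_pc == guest_pc)
        tb->host_code();            /* hit: one load, one compare, bx */
    else
        slow_path_lookup(guest_pc); /* miss: walk the chain or retranslate */
}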
 
Laurent said:
Yes, it's not pretty at the moment :)

As far as immediates go, doing it this way makes TCG smaller, and it would ease the transition to real SSA (though it's not really needed for SSA).

I was thinking that making it smaller was the goal, although in reality this probably makes it bigger. Besides that, size optimization is not what you should be going for in an IMR targeting dynamic recompilation. You compile a block once and then you're done with it. It's worth spending extra bytes to make the thing more expressive rather than spreading a lot of simple functionality over multiple instructions just to combine them back almost all of the time.

Laurent said:
When I said statically allocating 8 x86 registers, I meant something such as globally saying r0 = eax and so on. So given 16 ARM registers, minus PC and those 8, you're left with 7 ARM regs for everything else. You also need at least one temporary register to build 32-bit immediates that don't fit in the immediate forms of ARM instructions.

Okay, so you were talking about the registers remaining. What's the problem? Is 7 temporaries really a small amount? What's the "also"? The temporaries would be taken from that pool. If not ARM registers and not temporaries, then what do you have in mind for these 7, aside from the stack pointer and environment pointer (those two can be merged, but it's probably not worth bothering with for you)? QEMU doesn't even count cycles.

Laurent said:
I chose tiny blocks just to show how things are (in fact it's the small asm program that helped me discover block linking was broken).

Those tiny blocks suck for this. I wonder what the typical block size will be for various programs..

Laurent said:
The ind_tb thing is one of my additions. It should indeed be moved out of line, except that it would then have to return where to jump to, so that the final indirect branch stays at each call site to help branch prediction. It hits at about 70% on nbench; I will consider changing it once I move to more realistic things than nbench.

Yeah, I agree, inlining the branch to give them separate BTB entries is a good idea. By hit rate you mean the hashing, right, not branch prediction? 70% is quite poor, especially for a codebase as small as nbench. What happens on misses? Buckets of linked lists? Do you have any performance numbers for how deep they go on a miss?

It might be worthwhile to cache the last return address mapping outright, for rets. If you believe the return target is likely to be the same enough times in a row for branch prediction to be helped, it seems like you may as well take that extra step. I don't know where you'd store the mapping, though. I'd possibly do something silly like self-modifying code, but that's just me.
 
Exophase said:
It might be worthwhile to cache the last return address mapping outright, for rets. If you believe the return target is likely to be the same enough times in a row for branch prediction to be helped, it seems like you may as well take that extra step. I don't know where you'd store the mapping, though. I'd possibly do something silly like self-modifying code, but that's just me.
I don't see any realistic way to do it other than self-modifying code. Anything else will have to load the address from somewhere, so you might as well load it from the hash table.

By self-modifying code, I assume you mean something like the following:
Code:
 movw r1, #virt_addr_lo
 movt r1, #virt_addr_hi
 cmp  r0, r1
 beq  addr
 b    hash_table_lookup
and then modifying the addresses when the comparison fails. For subroutines that are called from only one place, the hit rate will be 100%. For subroutines called from multiple places, the hit rate will be low. It might be advantageous to eliminate this code after repeated misses, and use only the hash table.
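That patch-on-miss policy could be sketched in C as below; patch_movw_movt, patch_branch and flush_icache are hypothetical helpers, and the threshold is arbitrary.
Code:
#include <stdint.h>

/* Hypothetical helpers that rewrite the generated stub and flush caches. */
extern void patch_movw_movt(uint32_t *code, uint32_t vaddr);
extern void patch_branch(uint32_t *code, void (*target)(void));
extern void flush_icache(void *start, unsigned len);
extern void hash_table_lookup(void);

struct ret_stub {
    uint32_t *code;         /* the movw/movt pair in the generated stub  */
    uint32_t cached_vaddr;  /* guest address currently burned into it    */
    unsigned miss_count;    /* consecutive misses (a hit would reset it) */
};

#define MISS_LIMIT 8        /* arbitrary threshold for illustration */

void ret_stub_miss(struct ret_stub *s, uint32_t new_vaddr)
{
    if (++s->miss_count < MISS_LIMIT) {
        /* Re-point the inline check at the new return target. */
        patch_movw_movt(s->code, new_vaddr);
        flush_icache(s->code, 2 * sizeof(uint32_t));
        s->cached_vaddr = new_vaddr;
    } else {
        /* Too many misses: overwrite the check with a plain branch to
           the hash table lookup and stop paying for it. */
        patch_branch(s->code, hash_table_lookup);
        flush_icache(s->code, sizeof(uint32_t));
    }
}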
 
You should come on IRC, that'd help the discussion, though others would miss it (I wonder how many people are interested in this...).

Exophase said:
I was thinking that making it smaller was the goal, although in reality this probably makes it bigger. Besides that, size optimization is not what you should be going for in an IMR targeting dynamic recompilation. You compile a block once and then you're done with it. It's worth spending extra bytes to make the thing more expressive rather than spreading a lot of simple functionality over multiple instructions just to combine them back almost all of the time.
I meant a smaller number of different TCG ops, not smaller memory usage. Making TCG more expressive certainly was not a design goal given its original usage, and having a smaller number of TCG ops makes writing back-ends much easier. I agree that in the restricted case here (x86->ARM) it's not that important, but again, things will change.

BTW when you write front-end code, you can use immediates with every TCG op; they just get split into two ops.

Exophase said:
Okay, so you were talking about the registers remaining. What's the problem? Is 7 temporaries really a small amount? What's the "also"? The temporaries would be taken from that pool. If not ARM registers and not temporaries, then what do you have in mind for these 7, aside from the stack pointer and environment pointer (those two can be merged, but it's probably not worth bothering with for you)? QEMU doesn't even count cycles.
The x86 front end uses 3 temps for flag tracking, plus the context pointer, plus at least one temporary register. SP is needed too, obviously. LR should be preserved, though this can be worked around where needed. That can fit. But as I wrote earlier, this will wait until I'm done with the basics :)

Exophase said:
Those tiny blocks suck for this. I wonder what the typical block size will be for various programs..
I wonder the same. Note that QEMU stops code generation at 4 KB page boundaries, so there's a limit on the size of x86 blocks (the reason is that it helps flushing code when needed).

Exophase said:
Yeah, I agree, inlining the branch to give them separate BTB entries is a good idea. By hit rate you mean the hashing, right, not branch prediction? 70% is quite poor, especially for a codebase as small as nbench. What happens on misses? Buckets of linked lists? Do you have any performance numbers for how deep they go on a miss?
Yes, linked lists. Might not be the best choice. DynamoRIO uses open hashing and its author found it faster (though obviously he has to resize the hash table when it gets full). I will get back to all of that later; you've seen some code, and you'll agree it's not the most critical thing :)

Exophase said:
It might be worthwhile to cache the last return address mapping outright, for rets. If you believe the return target is likely to be the same enough times in a row for branch prediction to be helped, it seems like you may as well take that extra step. I don't know where you'd store the mapping, though. I'd possibly do something silly like self-modifying code, but that's just me.
The cost of the syscall for the mandatory Dcache flushing + Icache invalidation for SMC might be a killer.

I agree that making educated guesses about the return address might be beneficial, but I intend to do something slightly more aggressive by merging blocks in hot paths, with checks for exiting when the flow is not what was expected. IIRC I mentioned something similar in the Mupen64 thread.
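To make "merging blocks with exit checks" concrete, here is what the generated code for a merged trace of the dec/jne loop from the dump above might do, written out as a C sketch (the dispatcher name is hypothetical): the hot path stays inside the trace, and a guard exits to the dispatcher when control flow leaves the recorded path.
Code:
#include <stdint.h>

extern void dispatch(uint32_t guest_pc);  /* hypothetical slow-path dispatcher */

void trace_dec_loop(uint32_t *ecx, uint32_t *guest_eip)
{
    for (;;) {
        (*ecx)--;                    /* block body: dec %ecx              */
        if (*ecx == 0) {             /* guard: the jne falls through      */
            *guest_eip = 0x8048061;  /* fall-through target from the dump */
            dispatch(*guest_eip);    /* leave the trace                   */
            return;
        }
        /* jne taken: stay inside the trace, no dispatcher round trip */
    }
}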
 
Laurent said:
You should come on IRC, that'd help the discussion, though others would miss it (I wonder how many people are interested by this...).

I'm interested - if you do IRC about this, I would be happy to idle on the channel, just tell me where and when :p
 
urjaman said:
Laurent said:
You should come on IRC, that'd help the discussion, though others would miss it (I wonder how many people are interested by this...).

I'm interested - if you do IRC about this, I would be happy to idle on the channel, just tell me where and when :p
Ditto. This stuff is way over my head, as a lowly 3rd semester Computer Science student, but I've been reading this and the N64 thread with quite a lot of interest.
 
Ari64 said:
I don't see any realistic way to do it other than self-modifying code. Anything else will have to load the address from somewhere, so you might as well load it from the hash table.

I don't entirely agree with this. Doing a single PC-relative load off the end of the block is much less work than performing hashing. It's hurting the cache, but not much more than the hash lookup - if rets are at all close to each other then a single cache line can end up pooling multiple return values. This is of course assuming a high enough hit rate to offset this additional cost.

Ari64 said:
By self-modifying code, I assume you mean something like the following:
Code:
    movw r1, #virt_addr_lo
    movt r1, #virt_addr_hi
    cmp  r0, r1
    beq  addr
    b    hash_table_lookup
and then modifying the addresses when the comparison fails. For subroutines that are called from only one place, the hit rate will be 100%. For subroutines called from multiple places, the hit rate will be low. It might be advantageous to eliminate this code after repeated misses, and use only the hash table.

Yes, the "patch after X traps" approach is a pretty common recompiler optimization.

Laurent said:
I meant a smaller number of different TCG ops, not smaller memory usage. Making TCG more expressive certainly was not a design goal given its original usage, and having a smaller number of TCG ops makes writing back-ends much easier. I agree that in the restricted case here (x86->ARM) it's not that important, but again, things will change.

BTW when you write front-end code, you can use immediates with every TCG op; they just get split into two ops.

This is why I think a good design would be to have an IMR that is very expressive but capable of splitting down to an equivalent subset when such "conditioning" is selected, either because the destination doesn't support the operations or because the person writing the backend doesn't feel like supporting them. In this case I think it's still going way too far.

Laurent said:
The x86 front end uses 3 temps for flag tracking, plus the context pointer, plus at least one temporary register. SP is needed too, obviously. LR should be preserved, though this can be worked around where needed. That can fit. But as I wrote earlier, this will wait until I'm done with the basics :)

If you can get the ARM flags to map to the x86 flags, then you just have to carry around the result for parity emulation. If you need to overwrite the flags a lot, then use another register for them.
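For the parity part, the usual trick is a 256-entry table indexed by the low byte of the last flag-setting result; a minimal sketch (names made up):
Code:
#include <stdint.h>

static uint8_t pf_table[256];  /* x86 PF: set when the low byte has even parity */

static void init_pf_table(void)
{
    for (int i = 0; i < 256; i++)
        pf_table[i] = !(__builtin_popcount(i) & 1);
}

/* "Carrying around the result" means keeping the last flag-setting result
   in a register; PF is then computed lazily, only when something reads it. */
static inline int compute_pf(uint32_t last_result)
{
    return pf_table[last_result & 0xff];
}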

Laurent said:
I wonder the same. Note that QEMU stops code generation at 4 KB page boundaries, so there's a limit on the size of x86 blocks (the reason is that it helps flushing code when needed).

I doubt the page boundary is going to be putting a dent into anything except where it happens in the middle of a really tight loop.

Laurent said:
Yes, linked lists. Might not be the best choice. DynamoRIO uses open hashing and its author found it faster (though obviously he has to resize the hash table when it gets full). I will get back to all of that later; you've seen some code, and you'll agree it's not the most critical thing :)

Open is more cache friendly, but it's really kind of a pain in the ass to implement. Right now, improving the hit rate is paramount.

Laurent said:
The cost of the syscall for the mandatory Dcache flushing + Icache invalidation for SMC might be a killer.

It would be a lot better if you could get supervisor mode somehow. Probably not really worth putting a lot of thought into.

Laurent said:
I agree that making educated guesses about the return address might be beneficial, but I intend to do something slightly more aggressive by merging blocks in hot paths, with checks for exiting when the flow is not what was expected. IIRC I mentioned something similar in the Mupen64 thread.

You really do seem to follow Dynamo's approach on a lot of things..
 
Exophase said:
Ari64 said:
I don't see any realistic way to do it other than self-modifying code. Anything else will have to load the address from somewhere, so you might as well load it from the hash table.

I don't entirely agree with this. Doing a single PC-relative load off the end of the block is much less work than performing hashing. It's hurting the cache, but not much more than the hash lookup - if rets are at all close to each other then a single cache line can end up pooling multiple return values. This is of course assuming a high enough hit rate to offset this additional cost.
The hash algorithm that I use in mupen64plus is: hash_table[((vaddr>>16)^vaddr)&0xFFFF]

The lookup and comparison (in linkage_arm.s) is as follows:
Code:
jump_vaddr:
        eor     r2, r0, r0, lsl #16
        ldr     r1, .htptr
        lsr     r2, r2, #12
        bic     r2, r2, #15
        ldr     r2, [r1, r2]!
        teq     r2, r0
[...]
.htptr:
        .word   hash_table
Maybe you can think of an alternative that would use fewer instructions, but the hash isn't all that complicated.


Laurent said:
The cost of the syscall for the mandatory Dcache flushing + Icache invalidation for SMC might be a killer.

I agree that making educated guesses about the return address might be beneficial, but I intend to do something slightly more aggressive by merging blocks in hot paths, with checks for exiting when the flow is not what was expected. IIRC I mentioned something similar in the Mupen64 thread.
I think the relevant discussion begins here: http://www.gp32x.de/board/index.php?/topic/49358-mupen64plus/page__view__findpost__p__751822

The reason I did not try to merge blocks in hot paths is that it makes block invalidation more complicated. If a write hits any one of the merged blocks, then you have to invalidate them all.

BTW I don't flush the cache every time the recompiler modifies an instruction. Since the new code is functionally equivalent to the old code, nothing really bad happens if the old code stays in the i-cache. It still works, it's just slower, but maybe not as slow as flushing the cache.
 
Ari64 said:
Maybe you can think of an alternative that would use fewer instructions, but the hash isn't all that complicated.

Off the top of my head, it'd be:

Code:
ldr r1, .last_pc
cmp r1, r0
ldreq pc, .last_pc_translated
...

Yeah, I think I'm going to call this one the winner. You can shove other block-exiting stuff after the ldr to hide the load-use latency.
 
Ari64 said:
I don't see any realistic way to do it other than self-modifying code. Anything else will have to load the address from somewhere, so you might as well load it from the hash table.

By self-modifying code, I assume you mean something like the following:
Code:
 movw r1, #virt_addr_lo
 movt r1, #virt_addr_hi
 cmp  r0, r1
 beq  addr
 b    hash_table_lookup
Exophase said:
Code:
ldr r1, .last_pc
cmp r1, r0
ldreq pc, .last_pc_translated
...
Yeah, you can do it without self-modifying code. Having profiled this sort of thing, I know that such code will result in a significant number of L1 cache misses.

I was thinking more along the lines of replacing the hash table with something else entirely, e.g. a call/ret stack. I'm not sure that would be any better, though.
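A call/ret stack could be sketched like this (all names hypothetical): a translated call pushes the guest return address paired with the host address of the code after the call, and a ret pops and verifies the pair before falling back to the hash table.
Code:
#include <stdint.h>

struct ras_entry {
    uint32_t guest_ret;  /* x86 return address pushed by the call */
    void    *host_ret;   /* translated code for that address      */
};

#define RAS_SIZE 16
static struct ras_entry ras[RAS_SIZE];
static unsigned ras_top;

static inline void ras_push(uint32_t guest_ret, void *host_ret)
{
    ras[ras_top++ & (RAS_SIZE - 1)] =
        (struct ras_entry){ guest_ret, host_ret };
}

/* Returns the predicted host target, or NULL to fall back to the hash
   table (e.g. when the guest manipulated its stack and the pair no
   longer matches). */
static inline void *ras_pop(uint32_t actual_guest_ret)
{
    struct ras_entry e = ras[--ras_top & (RAS_SIZE - 1)];
    return (e.guest_ret == actual_guest_ret) ? e.host_ret : 0;
}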
 