Mupen64Plus


Nation.A.List said:
And the result was................... (the suspense is killing me!)
Around 5% improvement.
 
Laurent said:
Note the code is still suboptimal for "standard" constant generation, but the point is to show movw/movt usage :)

I think you could give this a try: from a memory-consumption point of view the two are identical (1 instruction + 1 literal word vs 2 instructions). From the memory-subsystem point of view, movw/movt has the advantage of potentially less D/I-cache and TLB thrashing. As for the ld-use penalty, I can't say; I don't know the A8 well enough.
I guess I can try it. It looks like movt/movw is encoded as cmp/tst with the s bit clear. ARM's documentation on this stuff totally sucks. :(
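(To illustrate the comparison Laurent is making above — a rough sketch with a made-up constant, not what the recompiler actually emits:)
Code:
@ literal pool: 1 instruction + 1 word of data fetched through the D-cache
ldr   r0, =0x12345678      @ assembler places 0x12345678 in a nearby pool
@ movw/movt: 2 instructions, no pool entry
movw  r0, #0x5678
movt  r0, #0x1234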
 
Ari64 said:
I guess I can try it. It looks like movt/movw is encoded as cmp/tst with the s bit clear. ARM's documentation on this stuff totally sucks. :(
You have DDI 406? If not, then do your request here: http://silver.arm.com/browse/AR570
I agree it takes some time to get used to it, and having a PDF reader that can search for strings very fast is kind of mandatory :)

Code:
        31-28   27-20       19-16   15-12   11-0
MOVW:   cond  | 0011 0000 | imm4  |  Rd   | imm12
MOVT:   cond  | 0011 0100 | imm4  |  Rd   | imm12

MOVW stores imm4:imm12 into the lower 16 bits of Rd and clears the upper 16 bits.
MOVT stores imm4:imm12 into the upper 16 bits of Rd and doesn't touch the lower 16 bits.
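For example (value made up):
Code:
movw  r0, #0x5678       @ r0 = 0x00005678  (upper 16 bits cleared)
movt  r0, #0x1234       @ r0 = 0x12345678  (lower 16 bits untouched)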
 
Laurent said:
Another comment: why do a strcpy in new_recompile_block instead of a plain assignment?
It's for the debugger. I guess I could nullify this when not compiling with debugging.
 
Ari64 said:
It's for the debugger. I guess I could nullify this when not compiling with debugging.
That's not the point: you could simply do:
Code:
 const char *insn[MAXBLOCK];
...
insn[i] = "SLL";
That'll be much faster and you won't lose anything :)
Even better: define a macro with two implementations, depending on whether you want to use the debugger:
Code:
#ifdef USE_DISASS
#define DIS(x,s)  insn[x] = s
#else
#define DIS(x,s)  do ; while (0)
#endif
...
DIS(i, "SLL");
 
Laurent said:
These are Thumb2 only instructions.

Ahem...

Exophase said:
In terms of functionality per instruction I believe Thumb-2 to be the preferred ISA. I'd rather have the extended immediates, movw, movt, wide add/subtract, the bit field operations, compare + branch, and the other less interesting instructions than predicate per instruction.

Laurent said:
BTW almost all the T2 instructions you describe are in the ARM instruction set (the only missing ones being IT and compare-and-branch [which is so CISCy that its implementation is probably bad on most advanced cores]).

Looks like you were lying about that.

(EDIT: Don't take me very seriously on that, just in case.. although I am rather disappointed about the lack of the instructions ;))
 
Ari64 said:
I guess I can try it. It looks like movt/movw is encoded as cmp/tst with the s bit clear. ARM's documentation on this stuff totally sucks. :(

A lot of instructions from ARMv5 onward are encoded in that particular space (what looks like the test instructions with the S bit clear)
 
Heh you caught me lying defending my point of view :p In fact I keep on forgetting these instructions in T2, which tells how much I care about T2...
 
With all your Thumb-2 hate I'm liable to start encouraging Ari64 to rewrite his recompiler to Thumb-2 ;D

Don't worry, I'm sure he'd never do it.

But if the branch prediction were designed to be more Thumb-2 friendly, it probably would perform measurably better. Especially for recompiled code, which tends to be less nice to the icache than normal code. And that's assuming those instructions ever come in handy; I don't know how often N64 games use add/subtract immediates and address offsets that are too large for a rotated 8-bit field but still fit within 12 bits.
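(For example — made-up values, just to show the kind of immediate in question:)
Code:
@ 0xABC fits Thumb-2's plain 12-bit add immediate but not ARM's rotated 8-bit form
addw  r0, r1, #0xABC     @ Thumb-2 only
@ in ARM it has to be split into two rotated 8-bit pieces
add   r0, r1, #0xA00
add   r0, r0, #0xBC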
 
It's basically impossible to make bpred better for T2 than for ARM without using significantly more silicon, for obvious reasons (e.g., alignment is less constrained, and the possible density of branches is higher). So again I'd prefer that silicon to be spent on more productive things :)

And also don't forget that most 16-bit Thumb instructions touch the flags, which can be a problem for translated code.
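(For instance, the 16-bit add-immediate encoding always updates the flags outside an IT block, while ARM or the 32-bit Thumb-2 encoding can leave them alone:)
Code:
adds  r0, r0, #1         @ 16-bit Thumb: flags are written
add   r0, r0, #1         @ ARM / 32-bit Thumb-2: flags untouched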

As far as address generation goes, with MIPS offsets being 15+1 bits while ARM's are 12+1 bits, you'll always need one extra instruction plus the load in the worst case:
Code:
lw   $1,0x7fff($2)
->
movw r0,#0x7fff
ldr  r1,[r2,r0]
But perhaps you were talking about some other form of address generation?
 
Laurent said:
It's basically impossible to make bpred better for T2 than for ARM without using significantly more silicon, for obvious reasons (e.g., alignment is less constrained, and the possible density of branches is higher). So again I'd prefer that silicon to be spent on more productive things :)

I don't want BETTER branch prediction for Thumb-2 than for ARM, I just want it to not be quite as much worse. However, my guess is that there won't really be better branch prediction for Thumb-2, because the ISA matters most on size-constrained embedded platforms where hard real-time behavior is important and branch mispredicts/long pipelines aren't really acceptable.

Laurent said:
As far as address generation goes, with MIPS offsets being 15+1 bits while ARM's are 12+1 bits, you'll always need one extra instruction plus the load in the worst case:

I'm well aware of this, which my post should have made obvious. +/-12bit will almost definitely cover substantially more cases than the standard ror8 ARM format, especially for load offsets. "Always, in the worst case" is kind of a funny thing to say.
 
Exophase said:
Laurent said:
As far as address generation goes, with MIPS offsets being 15+1 bits while ARM's are 12+1 bits, you'll always need one extra instruction plus the load in the worst case:

I'm well aware of this, which my post should have made obvious. +/-12bit will almost definitely cover substantially more cases than the standard ror8 ARM format, especially for load offsets. "Always, in the worst case" is kind of a funny thing to say.
What I meant is that, no matter whether you have a 16-bit add/sub, you might have to use one more instruction, and movw is enough for that. Does this sound better to you? :)
 
Laurent said:
What I meant is that, no matter whether you have a 16-bit add/sub, you might have to use one more instruction, and movw is enough for that. Does this sound better to you? :)

The movw by itself won't work as long as he's range-checking the final address, since the address generation must happen before the load. But even then it doesn't gain you anything, because the address can be formed with an add/sub for the top 8 bits and an address offset for the bottom. Two adds/subtracts win over a movw and an add no matter what, because they use fewer registers and may be more parallelizable.
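(Something like this, with a made-up offset — assuming the full address has to be in a register for the range check:)
Code:
@ movw + add: needs a temporary register
movw  r14, #0x1234
add   r0, r2, r14
@ two adds: same instruction count, no temporary
add   r0, r2, #0x1200
add   r0, r0, #0x34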
 
Exophase said:
Laurent said:
What I meant is that, no matter whether you have a 16-bit add/sub, you might have to use one more instruction, and movw is enough for that. Does this sound better to you? :)

The movw by itself won't work as long as he's range-checking the final address, since the address generation must happen before the load. But even then it doesn't gain you anything, because the address can be formed with an add/sub for the top 8 bits and an address offset for the bottom. Two adds/subtracts win over a movw and an add no matter what, because they use fewer registers and may be more parallelizable.

ADD/SUB/ORI/XORI with immediates that do not fit within a shifted 8-bit value generate two sequential instructions. ANDI generates a combination of UXTH and/or BIC if possible, and otherwise uses a temporary register. The other case where we need a temporary register is SLTI(U), since two sequential subtracts will not properly set the carry/overflow flags.
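(Roughly what the SLTI case looks like, with a made-up immediate that doesn't fit — not necessarily the exact sequence the recompiler emits:)
Code:
@ slti r1, r2, 0x1234
movw  r14, #0x1234       @ temporary register for the immediate
cmp   r2, r14            @ a single compare keeps the signed flags correct
movlt r1, #1
movge r1, #0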

LUI, and arithmetic/logic operations with r0 as a source, go into the constant propagation queue, and are filled in later when we determine what the final value is.
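(Illustration only — hypothetical MIPS input, showing the idea rather than the actual emitter:)
Code:
@ lui $1, 0x8012        ; queued, final value not yet known
@ ori $1, $1, 0x3456    ; value now known: 0x80123456
@ emitted once the value is settled:
movw  r1, #0x3456
movt  r1, #0x8012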

For address generation, the offset is usually aligned, so 12-bit immediates aren't a lot better than shifted 8-bit immediates.
 
Ari64 said:
ADD/SUB/ORI/XORI with immediates that do not fit within a shifted 8-bit value generate two sequential instructions. ANDI generates a combination of UXTH and/or BIC if possible, and otherwise uses a temporary register. The other case where we need a temporary register is SLTI(U), since two sequential subtracts will not properly set the carry/overflow flags.

Yes, so movw/movt aren't as useful as they sound. Except for ori, where movw does the same thing.

Ari64 said:
LUI, and arithmetic/logic operations with r0 as a source, go into the constant propagation queue, and are filled in later when we determine what the final value is.

... and movt is useful for the lui, although the constant propagation would still be good for detecting constant-address accesses.

Ari64 said:
For address generation, the offset is usually aligned, so 12-bit immediates aren't a lot better than shifted 8-bit immediates.

Okay, so you often get an effective 10-bits vs an effective (almost) 13-bits. I see what you mean though.
 
Exophase said:
Yes, so movw/movt aren't as useful as they sound. Except for ori, where movw does the same thing.
It's not useful for ori, but might be useful for andi. For example,
Code:
ANDI r1,r1,0xFF
-> and r1,r1,#0xFF

ANDI r1,r1,0xFFFF
-> uxth r1,r1

ANDI r1,r1,0xFFF
-> uxth r1,r1
-> bic r1,r1,#0xF000

ANDI r1,r1,0xAAAA
-> movw r14,#0xAAAA
-> and r1,r1,r14
The last case is pretty uncommon though. Most uses of ANDI extract single bits or clear the upper bits.

BTW the code generally doesn't return (except when you quit) so it can overwrite r14 and use it as a temporary register.
 
Oops, I thought movw was the ORing one and movt was the clearing one, like on MIPS. Looks like it's the other way around.

I use r14 as a temporary too, but if you can use it for short term register cache that's even better. Although in your case I don't think you call functions from the code that much either.
 
Exophase said:
I use r14 as a temporary too, but if you can use it for short term register cache that's even better. Although in your case I don't think you call functions from the code that much either.
It uses 12 registers for the register cache, which is more than enough. In fact, it keeps so much stuff in registers that it's causing a significant number of loads/stores at branches, and I've actually seen a reduction in code size by freeing registers more aggressively. I haven't investigated whether such a strategy would result in improved performance though.

Function calls occur for loads/stores that do not go to main memory, DMULT, DDIV, and many floating point operations.
 
Ari64 said:
Function calls occur for loads/stores that do not go to main memory, DMULT, DDIV, and many floating point operations.

If 12 registers work so well, then maybe a partially static allocation scheme would work better? I'd be curious to see data on which registers are used most. Of course, any kind of more global allocation could help.

I think most of those operations aren't that frequent, if float ops are really only 2-4% of the code. Are you still using soft floats for everything, or are you only using them for double-precision/unsupported operations? StrmnNrmn claimed that the games he tried were okay with having their doubles turned into floats.
 