Beta Mupen64Plus


God Ginrai said:
Interesting. So, the emulation of the games is being slowed down only by graphics rendering? o_ô

-God Ginrai

One of the big things taking up time in emulating graphics is geometry transformation and lighting, rather than the rasterization that the SGX performs. This is done by high-level emulation but accomplishes the same task as the RSP vector processor on an N64. You can see it being performed in gSP.cpp in gles2n64.

The issue I have with the way things are being done right now is that all of the vector transforms and dot products, which most likely dominate computation for geometry, are being serialized through some functions. This does nothing to hide the high latency of NEON vector multiply + add operations, making these operations take dozens of cycles instead of a few. But if you look at what gSP.cpp is doing, the vectors to be transformed are already being batched in at least some capacity. So the inner transform routines should really be batched and NEON-ized - you will ideally want batches of at least 8 or so at a time to maximize performance. You'll also want to hoist a lot of work out of the inner loop of the vector transform. Where loop multiplexing stops being practical, things should be broken into multiple passes instead. Finally, I recommend removing the clipping entirely if at all possible and having the SGX do it, but where that's not possible at least NEON-ize more of the bounds checking.
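As a rough illustration of what I mean - this is just a sketch with made-up names, not gles2n64's actual code - a batched transform loop looks something like this. With a whole batch in one loop, the compiler (or a hand-unrolled version) can interleave the dependent multiply-accumulate chains of neighbouring vertices instead of serializing them:

Code:
#include <arm_neon.h>

/* Hypothetical batched transform: apply the matrix m to 'count' vertices.
   Keeping many independent vertices in one loop is what makes it possible
   to hide the vmla latency by unrolling / software pipelining. */
static void transform_batch(float (*out)[4], const float (*in)[4],
                            const float m[4][4], int count)
{
    float32x4_t c0 = vld1q_f32(m[0]);   /* coefficients for v.x */
    float32x4_t c1 = vld1q_f32(m[1]);   /* coefficients for v.y */
    float32x4_t c2 = vld1q_f32(m[2]);   /* coefficients for v.z */
    float32x4_t c3 = vld1q_f32(m[3]);   /* coefficients for v.w */

    for (int i = 0; i < count; i++) {
        float32x4_t v  = vld1q_f32(in[i]);
        float32x2_t lo = vget_low_f32(v);    /* x, y */
        float32x2_t hi = vget_high_f32(v);   /* z, w */
        float32x4_t r  = vmulq_lane_f32(c0, lo, 0);
        r = vmlaq_lane_f32(r, c1, lo, 1);
        r = vmlaq_lane_f32(r, c2, hi, 0);
        r = vmlaq_lane_f32(r, c3, hi, 1);
        vst1q_f32(out[i], r);
    }
}

A hand-unrolled version that does 2 or 4 vertices per iteration would make the interleaving explicit rather than trusting GCC to do it.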

I could be way off base but I think there could be a lot of performance to be gained on the GPU plugin side of things, far more than can be gained from any improvements to CPU emulation. But I'm just an onlooker on this right now, and would like to know what Adventus thinks.
 
Exophase said:
I could be way off base but I think there could be a lot of performance to be gained on the GPU plugin side of things, far more than can be gained from any improvements to CPU emulation. But I'm just an onlooker on this right now, and would like to know what Adventus thinks.

Well, that sounds very promising. I look forward to the future of this emulator. ^_^

-God Ginrai
 
@Pickel, as for the escape problem, would it be possible to add a very basic key-listener thread to the main program to just listen for escape, and then make it sleep for half a second? That way it doesn't matter which input plugin is being used, and it shouldn't eat up too much CPU.

@God Ginrai - I'm not too concerned about that part right now. It's annoying to need to pop the battery out, but it doesn't really get in the way of normal usage for me, though it makes it harder to get into the power-save modes. Overall, not that big of a deal. The biggest problem I have with my unit is my lack of time available to play with it. :p I was getting dirty looks from my fiancée on the train a little bit ago because I was ignoring her and playing with the Pandora.
 
Also, I forgot: it'd be good if NEON could be used for all the fixed-to-float conversions being done. I shudder to think what GCC is doing for that.
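For instance (purely illustrative, and assuming 16.16 fixed point), one NEON instruction converts and scales four values at once:

Code:
#include <arm_neon.h>
#include <stdint.h>

/* Hypothetical helper: convert four 16.16 fixed-point values to float.
   vcvtq_n_f32_s32 does the int-to-float conversion and the divide by
   2^16 in a single instruction, instead of a scalar convert plus
   multiply per value. */
static void fixed16_to_float4(float *dst, const int32_t *src)
{
    int32x4_t   fx = vld1q_s32(src);
    float32x4_t f  = vcvtq_n_f32_s32(fx, 16);
    vst1q_f32(dst, f);
}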
 
Is it my imagination, or does the frameskip option in the config file not do anything at all?
 
I haven't looked at this stuff in a while, so I'm a bit fuzzy at the moment.

Finally, I recommend removing the clipping entirely if at all possible and having the SGX do it, but where not possible at least NEONize more of the bounds checking.
You can disable the software clipping by turning off "enable clipping" in the config file. Some games do run slightly faster.

One of the big things that's taking up time in emulating graphics is geometry transformation and lighting, rather than the rasterization that the SGX performs. This is performed by high level emulation but accomplishes the same task as the RSP vector processor on an N64. You can see this being performed in gSP.cpp in gles2n64.
I don't think RSP transformations are a significant bottleneck in the plugin. Here's a profile of LoZ:OoT over 10 seconds:

Code:
F3DEX2_VTX x 27174 = 392 ms (8.07%)
F3DEX2_CULLDL x 1457 = 6 ms (0.12%)
F3DEX2_TR1 x 4912 = 513 ms (10.56%)
F3DEX2_TR2 x 81020 = 2339 ms (48.17%)
F3DEX2_QUAD x 222 = 32 ms (0.66%)
F3DEX2_TEXTURE x 12578 = 47 ms (0.97%)
F3DEX2_POPMTX x 783 = 2 ms (0.04%)
F3DEX2_GEOMETRYMODE x 19453 = 55 ms (1.13%)
F3DEX2_MTX x 16009 = 72 ms (1.48%)
F3DEX2_MOVEWORD x 11383 = 37 ms (0.76%)
F3DEX2_MOVEMEM x 10814 = 37 ms (0.76%)
F3DEX2_LOAD_UCODE x 19560 = 48 ms (0.99%)
F3DEX2_ENDDL x 19042 = 57 ms (1.17%)
F3DEX2_SETOTHERMODE_L x 8605 = 16 ms (0.33%)
F3DEX2_SETOTHERMODE_H x 10449 = 33 ms (0.68%)
G_TEXRECT x 1480 = 199 ms (4.10%)
G_RDPLOADSYNC x 23780 = 65 ms (1.34%)
G_RDPPIPESYNC x 54867 = 132 ms (2.72%)
G_RDPTILESYNC x 3921 = 10 ms (0.21%)
G_SETSCISSOR x 888 = 3 ms (0.06%)
G_RDPSETOTHERMODE x 3237 = 4 ms (0.08%)
G_LOADTLUT x 3312 = 65 ms (1.34%)
G_SETTILESIZE x 24635 = 79 ms (1.63%)
G_LOADBLOCK x 20394 = 310 ms (6.38%)
G_LOADTILE x 74 = 1 ms (0.02%)
G_SETTILE x 44895 = 115 ms (2.37%)
G_FILLRECT x 222 = 18 ms (0.37%)
G_SETFILLCOLOR x 148 = 1 ms (0.02%)
G_SETPRIMCOLOR x 15648 = 44 ms (0.91%)
G_SETENVCOLOR x 6738 = 19 ms (0.39%)
G_SETCOMBINE x 13491 = 38 ms (0.78%)
G_SETTIMG x 23780 = 61 ms (1.26%)
G_SETZIMG x 222 = 2 ms (0.04%)
G_SETCIMG x 444 = 4 ms (0.08%)
TOTAL TIME = 4856 ms
The F3DEX2_VTX function calls gSPVertex(), which does all the RSP processing of the vertices. I suppose 8% isn't insignificant, but I don't think massive gains will be made with more NEON optimisation. When I vectorised it to process 4 vertices at a time I didn't see a large gain (you can check this by not defining __VEC4_OPT). Not that I won't try some more optimisation of those routines, but don't expect huge overall gains. By far the most time is spent in the F3DEX2_TR1/2 functions, which simply add vertex indices to the index buffer and then draw them when necessary; the majority of that time is spent in the GLES driver's glDrawArrays() function.

All the evidence I've gathered suggests a high correlation between the number of shader changes and the performance. Lots of games aren't very complex graphically but are just programmed badly, so they swap between combiner modes a lot, which I guess was essentially free on the N64. One idea I'm working on is trying to minimise the number of shaders I produce: for instance, I turn (B-C)+D into A*(B-C)+D but just set A=1. This means that if A*(D-C)+B is needed at some point I wouldn't need to swap shaders; I could just swap the D and B values and set A to the appropriate value. Clearly this only works with uniform position changes; if a texture sample location is changed I'll need a new shader.
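The idea is roughly this (just an illustration of the A=1 trick, not my actual shader source):

Code:
/* Illustrative generic combiner computing A*(B - C) + D, where B is the
   texture sample and A, C, D are uniforms.  The plain (B - C) + D mode
   reuses this same program by binding uA = vec4(1.0) instead of
   compiling and switching to a second shader. */
static const char genericCombinerFS[] =
    "precision mediump float;\n"
    "uniform sampler2D uTex0;\n"
    "uniform vec4 uA;\n"
    "uniform vec4 uC;\n"
    "uniform vec4 uD;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    vec4 B = texture2D(uTex0, vTexCoord);\n"
    "    gl_FragColor = uA * (B - uC) + uD;\n"
    "}\n";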

PS: I would love to get rid of those *SYNC functions without cost; they literally do nothing (only some games use them). Maybe a preprocessing step would be worthwhile.
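The preprocessing could be as simple as this sketch (not existing plugin code; the opcode values are the usual F3DEX2 ones and should be checked against the GBI headers):

Code:
#include <stdint.h>

/* Filter the *SYNC commands out of a display list before dispatch so
   their no-op handlers never run.  Opcode values as in the standard
   F3DEX2 GBI definitions. */
#define G_RDPLOADSYNC 0xE6
#define G_RDPPIPESYNC 0xE7
#define G_RDPTILESYNC 0xE8
#define G_RDPFULLSYNC 0xE9

static int isRdpSync(uint32_t w0)
{
    uint32_t cmd = w0 >> 24;   /* the opcode sits in the top byte of word 0 */
    return cmd == G_RDPLOADSYNC || cmd == G_RDPPIPESYNC ||
           cmd == G_RDPTILESYNC || cmd == G_RDPFULLSYNC;
}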

Is it my imagination, or does the frameskip option in the config file not do anything at all?
Apart from frameskip 0 behaving the same as 1, it should be working if the other settings work. Whether you'll see a performance gain depends on where the bottleneck is (gles2n64 or mupen).
 
Okay, so are you really draw call limited? Can uber shaders and triangle batching with degenerates help? What about changing the sort order, assuming a way of detecting translucent primitives? Really sucks that most of the expense is due to stuff happening in the driver instead of real work.

Is there a reason software clipping is made a configurable option? Is it to reduce number of polygons sent to the SGX?
 
Okay, so are you really draw call limited? Can uber shaders and triangle batching with degenerates help? What about changing the sort order, assuming a way of detecting translucent primitives?
The raw number of draw calls isn't too bad; they typically hover in the 200-300 per frame range. Uber shaders could possibly be better, but I haven't seen good SGX performance with branching in a fragment shader (even if the branch is essentially always taken). I have thought about reordering the geometry, but it would be very difficult. For a start it would require actually batching the vertices; currently they're stored 20-80 at a time due to N64 memory restrictions. I suppose you could guess if a primitive is translucent by looking at its alpha combiner, but it wouldn't be that great: most of them use a = tex0.a * uEnvColor.a, so you would have to check whether the texture has an alpha channel... and whether uEnvColor.a is 1...
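The check itself would only be something like this (field names are made up; it assumes the texture cache already records whether a texture has any translucent texels):

Code:
/* Illustrative only: guess that a primitive is opaque when its alpha
   combiner is tex0.a * uEnvColor.a, the cached texture has no alpha
   channel (or no texel with alpha below 1.0), and uEnvColor.a is 1. */
typedef struct {
    int   hasAlpha;    /* format carries an alpha channel            */
    float minAlpha;    /* smallest alpha value seen while caching it */
} CachedTextureInfo;   /* hypothetical, not the plugin's real struct */

static int probablyOpaque(const CachedTextureInfo *tex, float envAlpha)
{
    return (!tex->hasAlpha || tex->minAlpha >= 1.0f) && envAlpha >= 1.0f;
}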

Really sucks that most of the expense is due to stuff happening in the driver instead of real work.
Yes, yes it does. I vainly hold onto the hope that IMGtech will release an awesome driver that fixes all my problems. :)

Is there a reason software clipping is made a configurable option? Is it to reduce number of polygons sent to the SGX?
It's mainly just left over from my debugging, but I saw no reason to pull it out. Yeah, enabling it does reduce the number of polygons sent, but at the cost of more CPU.
 
Adventus said:
I suppose you could guess if a primitive is translucent by looking at its alpha combiner, but it wouldn't be that great: most of them use a = tex0.a * uEnvColor.a, so you would have to check whether the texture has an alpha channel... and whether uEnvColor.a is 1...

In fairness, since you're doing texture caching anyway, that alpha check would be a cheap addition. ;) The N64's combiners seem like kind of a PITA.

Adventus said:
Yes, yes it does. I vainly hold onto the hope that IMGtech will release an awesome driver that fixes all my problems. :)

Haha, I'm sure you're not the only one hoping that. It seems like benchmark performance on lots of mobile devices falls far, far below what's advertised... I guess that probably goes for most things.

Adventus said:
It's mainly just left over from my debugging, but I saw no reason to pull it out. Yeah, enabling it does reduce the number of polygons sent, but at the cost of more CPU.

I wonder how much games rely on this for basic view culling. I dunno how things tend to work out, but if it's better having software clipping off across the board, it seems like you should just remove it, or at least make off the default. Or maybe do the simple culling cases in software but no clipping.
 
Last edited by a moderator:
I wonder how much games rely on this for basic view culling. I dunno how things tend to work out, but if it's better having software clipping off across the board, it seems like you should just remove it, or at least make off the default. Or maybe do the simple culling cases in software but no clipping.
Oh, I was getting clipping and culling confused; there is no software clipping (only culling), and gln64 doesn't even bother to use clip planes. I guess it's not necessary for most games. There is scissoring though; it's handled by GLES2.
 
How much free RAM do we have?

How about allocating the remaining RAM for a huge vertex/index cache? One could then turn frequently used models into VBOs and IBOs joined with degenerate triangle strips.
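Roughly what I mean by joining models with degenerate strips (a hypothetical helper, nothing from the emulator):

Code:
#include <stdint.h>

/* Join two triangle strips into one index buffer by repeating the last
   index of strip A and the first index of strip B, producing zero-area
   (degenerate) triangles that the GPU discards.  If strip A has an odd
   number of indices, one extra duplicate is needed to keep the winding
   consistent - omitted here for brevity. */
static int joinStrips(uint16_t *out,
                      const uint16_t *a, int na,
                      const uint16_t *b, int nb)
{
    int n = 0;
    for (int i = 0; i < na; i++) out[n++] = a[i];
    out[n++] = a[na - 1];          /* degenerate bridge */
    out[n++] = b[0];
    for (int i = 0; i < nb; i++) out[n++] = b[i];
    return n;                      /* index count for glDrawElements */
}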
 
How much free RAM do we have?

How about allocating the remaining RAM for a huge vertex/index cache? One could then turn frequently used models into VBOs and IBOs joined with degenerate triangle strips.
I'm guessing there's a fair bit of free RAM. The problem is I would still need to pass these vertices through the CPU-based RSP processing step to do all the matrix transformations + lighting, so VBOs wouldn't be an option. There's also the issue of identifying models; I guess a hash of the RDRAM location + number of vertices (and maybe a few random vertices' X positions) would work, but I wouldn't be surprised if it wasn't very robust, since the CPU probably messes with vertices regularly for animations and stuff. The real advantage of this would be that I could reorder the models, but then I would have the translucency problem and I would also need to store a lot of metadata with the models (i.e. the OGL rendering state). I think if I was going to reorder the models, I would initially, for simplicity's sake, just overload the glDrawArrays function to capture all the vertices and rendering state. So it would basically be a somewhat generic backend optimizer.
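The kind of hash I have in mind would be roughly this (totally hypothetical, just to show the ingredients):

Code:
#include <stdint.h>

/* Rough model-identification hash over the RDRAM address, the vertex
   count and a few sampled X coordinates, using FNV-1a mixing.  As noted
   above, vertices that the CPU animates would defeat it. */
static uint32_t modelHash(uint32_t rdramAddr, uint32_t numVerts,
                          const int16_t *xSamples, int numSamples)
{
    uint32_t h = 2166136261u;                 /* FNV-1a offset basis */
    h = (h ^ rdramAddr) * 16777619u;          /* FNV-1a prime        */
    h = (h ^ numVerts)  * 16777619u;
    for (int i = 0; i < numSamples; i++)
        h = (h ^ (uint16_t)xSamples[i]) * 16777619u;
    return h;
}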
 
Is glDrawArrays() calling the SGX driver, or is it part of the GL to ES wrapper?

i.e. is this performance hit due to shoddy SGX 530 drivers? I remember discussion last year about binary-only drivers with sketchy performance - is it still the same? It seems nuts that 60% of the time is spent drawing?
 
silver said:
Is glDrawArrays() calling the SGX driver, or is it part of the GL to ES wrapper?
There isn't any GL to ES wrapper anymore. I only used that wrapper when I did the POC ports.

i.e. is this performance hit due to shoddy SGX 530 drivers? I remember discussion last year about binary-only drivers with sketchy performance - is it still the same? It seems nuts that 60% of the time is spent drawing?
~50% of the time is spent in the graphics plugin and 60% of that time is spent in the driver, so it's 30% of the total time. I don't think it's particularly unusual driver performance; it's just that I'm using it in a really non-optimal way. The LoZ scene I benchmarked before has ~300 draw calls per frame and ~80 shader changes per frame.
 
Adventus said:
silver said:
Is glDrawArrays() calling the SGX driver, or is it part of the GL to ES wrapper?
There isn't any GL to ES wrapper anymore. I only used that wrapper when I did the POC ports.

Aha, thanks.

i.e. is this performance hit due to shoddy SGX 530 drivers? I remember discussion last year about binary-only drivers with sketchy performance - is it still the same? It seems nuts that 60% of the time is spent drawing?
~50% of the time is spent in the graphics plugin and 60% of that time is spent in the driver, so it's 30% of the total time. I don't think it's particularly unusual driver performance; it's just that I'm using it in a really non-optimal way. The LoZ scene I benchmarked before has ~300 draw calls per frame and ~80 shader changes per frame.

OK, 30% is not so bad, although I am less expert than most on the N64 graphics setup.

Re: your performance comment earlier on - I'm sure years ago one of the gfx plugins (possibly gln64?) used to use a lookup table for the combiner modes rather than translating them. Would this help in the Pandora's case? Or is this wide of the mark? (I suspect it is.)
 
Please shoot this idea down, as it is terrible but I had to get it out anyway.
Could you not come up with some kind of polygon-skip feature? Instead of skipping frames, it skips polygon draw calls. If you skip every 5th triangle, for example, you'd potentially cut 20% of the rendering work. You'd be missing every 5th triangle, which could make things look weird, but the speed boost might be worth it.
Like I say, I'm pretty sure this is a bad idea (for reasons other than ugly objects with missing triangles) but I can't quite think of why.
 
I'm curious to know if the frameskip skips the total rendering of a frame, or just the final copy to screen part?
 
Re: your performance comment earlier on - I'm sure years ago one of the gfx plugins (possibly gln64?) used to use a lookup table for the combiner modes rather than translating them. Would this help in the Pandora's case? Or is this wide of the mark? (I suspect it is.)
I don't 100% understand what you mean by this; I assume you mean it maps the combiners to a smaller subset of blend modes. This is what the PSP emulator does, except it sometimes renders things in 2 or more passes to get it approximately right (multiple passes would be a total waste of time on the Pandora). I could see a decent increase in performance, not because of the decreased shader complexity but because I would require fewer shaders (and therefore probably fewer shader changes). It would probably only work with a small subset of games (or it would require specific hacks for lots of games).

Mapping the combiners to a smaller subset of shaders without losing accuracy is what I'm working on now.

I'm curious to know if the frameskip skips the total rendering of a frame, or just the final copy to screen part?
It skips the processing of every nth display list. Most games (all?) only submit one display list per frame, so it should be skipping the total rendering of a frame.

Could you not come up with some kind of polygon-skip feature? Instead of skipping frames, it skips polygon draw calls. If you skip every 5th triangle, for example, you'd potentially cut 20% of the rendering work. You'd be missing every 5th triangle, which could make things look weird, but the speed boost might be worth it. Like I say, I'm pretty sure this is a bad idea (for reasons other than ugly objects with missing triangles) but I can't quite think of why.
It's not so much the amount of geometry that's the problem, it's the number of shader changes in between. It would be better if I could skip all rendering with a single combiner type, say if it were only used for effects that aren't necessary for gameplay. Again, this would require game-specific hacks. I'll probably have to bite the bullet and implement a game-specific config system for other reasons (I'll base it on the Daedalus format since it already contains masses of hacks), so it's feasible.

Anyway, once I get the new OS working on my prototype, I'll work on implementing X11 support and a GTK config menu for gles2n64, so we can have a decent release in a few weeks.
 
craigix said:
I'm curious to know if the frameskip skips the total rendering of a frame, or just the final copy to screen part?

There is no copy-to-screen part with something rendered with OpenGL ES 2, or if there is, the drivers are not very well done at all. It should be rendered straight to a flipped framebuffer.

Adventus: another advantage to rolling alpha checking into the texture caching and performing render-state sorting is that you can separate out the polygons that can fail the alpha test. That way you can have an alternate shader set and separate draw calls for items with discard, like IMG recommends. Of course, this doubles the shader changes, but I expect that after state sorting this number will be much smaller. I would be curious to see just how many unique shaders are present among the 80 shader changes you've recorded for OoT. Granted, removing discards just lightens the SGX load, which usually isn't a problem, but you did mention Banjo-Kazooie being render limited.

Maybe if you have shader sorting you can also dynamically balance against your more generic shaders. Alpha polygons could perhaps go through more of an uber-shader, although that'd be trading a per-polygon cost on the CPU for a per-pixel cost on the GPU, and alpha polygons are probably pretty big but small in number (thinking of, say, the underwater areas in SM64).
 