Pandora Odamex 0.7 32-bit renderer with NEON optimizations?


Magic Sam

Hi all :)

As you may have already noticed, I've been working on an Odamex port these past few weeks:

* Release thread:

https://pyra-handheld.com/boards/threads/beta-odamex-0-7.76607/

* Package on the Repo:

http://repo.openpandora.org/?page=detail&app=odamex-magicsam

One of Odamex's most interesting features is its 32-bit renderer. On the one hand, it looks really good on the Pandora! But on the other hand, it's slightly slower than the default renderer (8-bit?)... Nothing terrible, mind you: the game remains completely playable, but you can definitely feel that the Pandora is struggling a bit more, especially when picking up items, for example, or when wearing the biohazard suit.

If I'm not mistaken, the authors have already optimized their renderer for the PPC architecture with AltiVec instructions, and for the x86 architecture with MMX / SSE2 instructions. Please have a look at the rdrawt*.cpp files on SVN for more details.

So my question is: how hard would it be to optimize that renderer for ARM with NEON instructions? Is it just a matter of replacing the AltiVec / MMX / SSE2 instructions with their NEON counterparts? Or is it much more difficult than that?

Cheers and thanks,

Magic Sam
 
So my question is: how hard would it be to optimize that renderer for ARM with NEON instructions? Is it just a matter of replacing the AltiVec / MMX / SSE2 instructions with their NEON counterparts? Or is it much more difficult than that?

Normally I'd say it's more difficult than that, because there are other mismatches between the ISAs that go beyond operation types. But in this case they use intrinsics anyway, which hides a lot of that.
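
To give a feel for it, here's a made-up example (not Odamex code): a rounded byte average, roughly what a 50% translucency blend does per channel. The SSE2 and NEON intrinsics map one-to-one in this case; the real work is in the surrounding loads/stores and build plumbing. blend50 is my own name, just for illustration.

Code:
#if defined(__SSE2__)
#include <emmintrin.h>
// SSE2: rounded average of 16 unsigned bytes.
static inline __m128i blend50(__m128i a, __m128i b)
{
    return _mm_avg_epu8(a, b);
}
#elif defined(__ARM_NEON__)
#include <arm_neon.h>
// NEON counterpart: the "rounding halving add" computes the same thing.
static inline uint8x16_t blend50(uint8x16_t a, uint8x16_t b)
{
    return vrhaddq_u8(a, b);
}
#endif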

It doesn't look like there are an awful lot of functions that have optimized AltiVec, MMX, or SSE2 versions anyway. It'd probably be best to look at a profiler to make sure that those functions are really the ones taking up a lot of time before doing NEON versions.
 
Hi all :)

@Exophase: sorry for my late reply!

Normally I'd say it's more difficult than that, because there are other mismatches between the ISAs that go beyond operation types. But in this case they use intrinsics anyway, which hides a lot of that.

It doesn't look like there are an awful lot of functions that have optimized AltiVec, MMX, or SSE2 versions anyway. It'd probably be best to look at a profiler to make sure that those functions are really the ones taking up a lot of time before doing NEON versions.

OK, I'll try to profile the application first. Any idea how to do this properly? (I'm completely new to profiling things :p)

Cheers, Magic Sam
 
Hi all :)

@Exophase: would compiling Odamex with debugging symbols, then running the resulting binary through gdb, be enough to find out where it spends most of its time?

Cheers, Magic Sam
 
gdb's not a profiler, and won't report which functions the executable spends most of its time in. For profilers I've heard of valgrind (or more specifically callgrind, which plugs into valgrind) and sysprof, but I haven't actually used either myself yet, so I can't say which is best, especially on the Pandora.
 
A quick way to have a look is to launch "sudo perf top" in an SSH terminal while the game is running. With debug info you'll see, in real time, which functions are taking the most time. That can help pinpoint bottlenecks.

For more serious profiling, use gprof:

* compile your software with the "-pg" flag,
* launch it and play for a while; when it exits, it will generate a profile file ("gmon.out"),
* then run "gprof odamex gmon.out > profile.txt" to get the detailed output.
 
perf is what I use as well. I take it that comes preinstalled on Pandora's OS these days? I think I had to get mine from notaz.
 
Hi all :)

Thank you guys for the tips! I'll give "sudo perf top" and that "-pg" flag a go and post the results here ASAP.

Cheers, Magic Sam
perf is what I use as well. I take it that comes preinstalled on Pandora's OS these days? I think I had to get mine from notaz.
Hi @Exophase :)
Perf doesn't seem to be installed by default on the latest Super Zaxxon release, and "opkg search perf" returned nothing.
I'll try to get a copy of the Linux kernel we use, and compile it from there.
EDIT: silly me... It's in ptitSeb's Codeblocks... :$

Cheers, Magic Sam
 
Perf doesn't seem to be installed by default on the latest Super Zaxxon release, and "opkg search perf" returned nothing.

On Angstrom you generally find packages with "sudo opkg list perf*" (the wildcard may not be needed).

It seems to install fine on my Pandora.
 
On Angstrom you generally find packages with "sudo opkg list perf*" (the wildcard may not be needed).

It seems to install fine on my Pandora.
Silly me (again) :$ Looks like I'm too used to "apt-cache search" and "yum search" :D Thanks, TrashyMG!

I just ran "sudo perf top" (from ptitSeb) while Odamex was running (with the 32-bit renderer ON), and the winners are (percentages varied during gameplay):

Code:
21.01%  odamex           [.] R_DrawColumnD()
13.50%  libc-2.9.so      [.] memcpy
13.13%  libSDL-1.2.so.0  [.] Blit_RGB888_RGB565

N.B.: "r_dimpatchD_c (DCanvas const*, u" topped them all, but only for a brief moment.

These functions look like the ones optimized for AltiVec, MMX and SSE2 (see my first post). Do you think it would be worth optimizing them for ARM / NEON?

Cheers, Magic Sam
 
You should post more profiling data, at least the top 15-20 or so functions.

These are the functions that currently have SSE2 optimizations in the codebase:

rtv_lucent4cols_c, R_DrawSpanD_c, R_DrawSlopeSpanD_c, r_dimpatchD_c

If you identify those in the profile you can get an idea of what the maximum performance improvement could be from porting the existing SSE2 code to NEON. But since they're all < 13.13% it's not really looking good: by Amdahl's law, even making a function that takes 13% of the time four times faster only speeds the whole thing up by about 11% (1 / (0.87 + 0.13/4) ≈ 1.11).

There's a chance that you'd get more out of optimizing R_DrawColumnD. I'm trying to figure out exactly what it does, and it's hard because the code is so heavily abstracted.

I THINK the critical loop looks like this:

Code:
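// per pixel: fetch a texel via the 16.16 fixed-point coordinate, then remap it through the colormap/shade table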
*dest = dcol.colormap.m_shademap[dcol.source[(frac >> FRACBITS) & mask]];
dest += pitch; frac += fracstep;

In which case, there isn't really that much you can do to optimize it. You could separate out the generation of the indexes into source (the shift, add, and AND), but you'd still be stuck with the texture and colormap lookups, which you can't vectorize. And you'd end up adding another load to get the calculated index back.
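
For what it's worth, here's a sketch of that index-generation idea (my own guess at the types, assuming Doom's 16.16 fixed point, a pitch in pixels, and the names from the loop above). NEON has no gather load, so the two table lookups have to stay scalar, and you pay lane extractions on top:

Code:
#include <arm_neon.h>
#include <stdint.h>

// Sketch only: compute four texture indexes at once, then do the
// unavoidable scalar texture + colormap lookups.
void draw4(uint32_t *dest, int pitch,
           const uint8_t *source, const uint32_t *shademap,
           uint32_t frac, uint32_t fracstep, uint32_t mask)
{
    const uint32_t f[4] = { frac,                frac + fracstep,
                            frac + 2 * fracstep, frac + 3 * fracstep };
    uint32x4_t fr  = vld1q_u32(f);
    // (frac >> FRACBITS) & mask for all four pixels; FRACBITS == 16 in Doom.
    uint32x4_t idx = vandq_u32(vshrq_n_u32(fr, 16), vdupq_n_u32(mask));

    // No gather on NEON: extract each lane and do the lookups scalar-ly.
    dest[0 * pitch] = shademap[source[vgetq_lane_u32(idx, 0)]];
    dest[1 * pitch] = shademap[source[vgetq_lane_u32(idx, 1)]];
    dest[2 * pitch] = shademap[source[vgetq_lane_u32(idx, 2)]];
    dest[3 * pitch] = shademap[source[vgetq_lane_u32(idx, 3)]];
}

And on the Pandora's Cortex-A8 those NEON-to-ARM lane moves aren't free either, so I wouldn't expect a win from this.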
 
I only took a brief look at the source, but if I'm reading it correctly (and you didn't change it), the content of the screen is rendered into a 32-bit surface, which is then blitted into a 32-bit video surface (by SDL using memcpy), which is then converted to 16 bits (by SDL using Blit_RGB888_RGB565).
If that's true, then you'd gain a bit of speed by using a 16-bit video surface: the screen would first be converted to 16 bits and then copied (half the amount of data compared to before).

I don't know if using notaz's SDL would help here - maybe less copying / faster drawing to the screen?

A NEON version of Blit_RGB888_RGB565 might give you some more speed (it's not NEON-optimized in notaz's SDL) - either in SDL itself or by using your own conversion instead of blitting to the video surface.
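
Something along these lines, maybe (a sketch, not tested: it assumes little-endian XRGB8888 input, i.e. bytes B, G, R, X in memory, and a pixel count that's a multiple of 8):

Code:
#include <arm_neon.h>
#include <stdint.h>

// Sketch of a NEON XRGB8888 -> RGB565 conversion, 8 pixels per iteration.
void xrgb8888_to_rgb565(uint16_t *dst, const uint8_t *src, int n)
{
    for (int i = 0; i < n; i += 8) {
        uint8x8x4_t px = vld4_u8(src + 4 * i);      // deinterleave B, G, R, X

        uint16x8_t out = vshll_n_u8(px.val[2], 8);  // R in bits 15..8
        // Shift-right-insert folds G and B in; final layout is R5 G6 B5.
        out = vsriq_n_u16(out, vshll_n_u8(px.val[1], 8), 5);
        out = vsriq_n_u16(out, vshll_n_u8(px.val[0], 8), 11);

        vst1q_u16(dst + i, out);
    }
}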
 
The other option is to actually use a 32 bpp screen surface in SDL, assuming that's an option now. That'd probably be faster if it avoids the conversion in the blit.
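
Roughly like this, as a sketch (SDL 1.2; passing SDL_ANYFORMAT makes SDL hand back the display's native depth instead of silently inserting a converting shadow surface, so you can check what you actually got; init_screen is just a name for illustration):

Code:
#include <SDL.h>
#include <stdio.h>

// Sketch: request a 32 bpp screen (800x480 on the Pandora) and report
// the depth SDL actually gave us.
SDL_Surface *init_screen(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return NULL;

    SDL_Surface *screen =
        SDL_SetVideoMode(800, 480, 32, SDL_SWSURFACE | SDL_ANYFORMAT);
    if (screen)
        printf("got a %d bpp screen surface\n", screen->format->BitsPerPixel);
    return screen;
}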
 