Improved SDL for Pandora


Bug report:

The NEON blit code can cause segfaults. This happens when it is working on the very last line of a target surface; I think there are some border cases in which it touches bytes beyond the surface boundary. If you want, I can try to construct a minimal program that exhibits the bug. I was experiencing segfaults in NubNub which seemed to be caused by blitting; I'm currently avoiding these crashes by clipping the target surface so the last line is never blitted to, but that's of course not a very good solution. The crash does not happen if you blit in a well-aligned way (e.g. a full-screen blit), but when the blit starts at an odd x-position on the target surface, it may read more memory than it should and segfault.
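To illustrate the pattern (just a sketch of the situation described, not the attached test case; the exact coordinates that fault depend on the surfaces involved):

#include <SDL/SDL.h>

/* Sketch only: a blit whose destination starts at an odd x position and
 * whose last row coincides with the last row of the target surface. */
static void risky_blit(SDL_Surface *src, SDL_Surface *dst)
{
    SDL_Rect dstrect;
    dstrect.x = 3;                   /* odd x: misaligned destination */
    dstrect.y = dst->h - src->h;     /* so the blit covers the last line */
    dstrect.w = dstrect.h = 0;       /* ignored by SDL_BlitSurface */
    SDL_BlitSurface(src, NULL, dst, &dstrect);
}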

Feature request:

I think there is some potential to optimize SDL_FillRect(). I'm not sure, but it looks like its implementation is generic (I can only find x86 asm optimizations for it, no NEON), so it could be a good idea to write a NEONized version of SDL_FillRect(), which could probably be quite a bit faster than the generic implementation. You can use NubNub as a test case, since SDL_FillRect() is among the top three bottlenecks in its perf profile.

It makes sense to optimize SDL_FillRect() since I suspect that many SDL applications use it quite a lot to initialize surfaces and draw boxes.
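For reference, the sort of usage I mean (plain SDL 1.2 API, just an illustration):

#include <SDL/SDL.h>

/* Typical SDL_FillRect() usage: clear the whole surface, then draw a box. */
void clear_and_box(SDL_Surface *screen)
{
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0));
    SDL_Rect box = { 16, 16, 64, 48 };
    SDL_FillRect(screen, &box, SDL_MapRGB(screen->format, 255, 0, 0));
}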
 
I'm trying to construct a test case. It seems to be harder to reproduce the problem than I originally thought (I guess invalid reads that could theoretically cause a crash don't always do so in practice). I'll come back to you when I have a clear example that reliably segfaults.

As for SDL_FillRect(): if you want, I can try to come up with a first draft of the NEON code for it. That might save you some time. It's of course not urgent; the current code is reasonably fast already.
 
OK, here is a simple test program that demonstrates the bug. At first I forgot to set SDL_VIDEODRIVER to omapdss, which made it impossible to reproduce. Also, the double buffering means that the bug only triggers when you blit to one of the buffers, not the other. And it's a border case that only happens at quite rare blit positions, so I had to experiment a bit to find a problematic example. But this example reliably segfaults, at least on my Pandora.

I hope this helps to debug the problem.

I think there can be invalid reads of both the source and the target surface; I'm not sure which is the case here. I eliminated most of the problems in NubNub by clipping the target surface, but I think there can still be crashes caused by invalid reads beyond the source surface, and those are harder for me to work around, so it would be better to fix this in SDL than for me to keep avoiding problematic blits.
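For reference, the clipping workaround amounts to something like this (a sketch, not my actual NubNub code):

#include <SDL/SDL.h>

/* Shrink the clip rect so the last row is never blitted to. This is a
 * workaround, not a fix: it loses one row of output. */
void clip_off_last_line(SDL_Surface *dst)
{
    SDL_Rect clip = { 0, 0, (Uint16)dst->w, (Uint16)(dst->h - 1) };
    SDL_SetClipRect(dst, &clip);
}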
 

Attachments

  • neon_blit_bug.tar.gz
    10.7 KB
About SDL_FillRect(): I don't think there's much room for improvement after all. I suspect that gcc is smart enough to recognize that the "naive" default SDL code is essentially just a memset(), so it already compiles it to something very fast. Maybe you could get a tiny improvement by somehow exploiting the fact that you only have to set the RGB but not the A, but that doesn't seem likely (I doubt it really saves time to write only 24 bits per word). The bottleneck is the memory write speed: it seems to make very little difference whether you're doing VSTMs with 8 quadwords per instruction, or non-NEON STMs with 8 words per instruction.
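For reference, the "naive" shape I mean is essentially this (a sketch, not SDL's exact source):

#include <stdint.h>
#include <stddef.h>

/* A per-word store loop like this is easy for the compiler to turn into
 * wide stores, which is why the generic path already ends up close to
 * memory-bandwidth-bound. */
static void fill_words(uint32_t *row, uint32_t color, size_t words)
{
    size_t i;
    for (i = 0; i < words; i++)
        row[i] = color;
}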
 
I don't think this can be done with memset magic. memset() fills bytes, but we need halfwords or words! I have an ARM snippet for quickly filling a line of memory with a 16-bit color using stmia.
I will test whether this is faster than SDL_FillRect() on my GP2X and give you the results in an hour or two. :)
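In C terms, the trick is roughly this (a sketch of the idea, not the actual asm):

#include <stdint.h>
#include <stddef.h>

/* Duplicate the 16-bit color into a 32-bit word, then store whole words;
 * the stmia version stores 8+ words per instruction. Assumes dst is
 * word-aligned and count is even; odd halfwords need separate handling. */
static void fill16(uint16_t *dst, uint16_t color, size_t count)
{
    uint32_t packed = color | ((uint32_t)color << 16);
    uint32_t *p = (uint32_t *)dst;
    size_t i;
    for (i = 0; i < count / 2; i++)
        p[i] = packed;
}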
 
32-bit (one word) fills are what I need. I already tried some code using stmia (8 words at a time) and vstm (32 words at a time), but couldn't get any significant speedup over the normal SDL_FillRect().
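A NEON variant along those lines looks roughly like this (intrinsics sketch, not my actual code; it assumes 16-byte alignment and a multiple-of-4 word count):

#include <arm_neon.h>
#include <stdint.h>
#include <stddef.h>

/* Broadcast the value and store one quadword (4 words) per iteration.
 * Even so, it ends up limited by memory write speed, as described above. */
static void fill_words_neon(uint32_t *dst, uint32_t value, size_t words)
{
    uint32x4_t v = vdupq_n_u32(value);
    size_t i;
    for (i = 0; i < words; i += 4)
        vst1q_u32(dst + i, v);
}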
 
IIRC on the Pandora there is not much difference whether you use STMs, VSTMs or STRs in a tight loop for a memory fill.

Edit: somebody could write a driver that uses the OMAP's DMA for this purpose; that would help a bit.
 
I did the test on my GP2X (with 16-bit surfaces; if you have speed problems, you should consider using only these). I filled the whole screen 64 times every frame and got 32 fps with my assembler snippet versus 27 fps with SDL_FillRect(). With more filling per frame, the relative difference gets a bit bigger, because the flip affects the result less (with 128 fills I got 14 and 17 fps).
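A benchmark of that shape looks roughly like this (my own sketch, not the exact test; the GP2X screen is 320x240):

#include <SDL/SDL.h>
#include <stdio.h>

/* Fill the whole 16-bit screen 64 times per frame, flip, and report fps. */
int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(320, 240, 16, SDL_SWSURFACE);
    Uint32 start = SDL_GetTicks();
    int frames = 0;
    while (SDL_GetTicks() - start < 5000) {   /* run for 5 seconds */
        int i;
        for (i = 0; i < 64; i++)
            SDL_FillRect(screen, NULL, (Uint32)i);
        SDL_Flip(screen);
        frames++;
    }
    printf("%.1f fps\n", frames / 5.0);
    SDL_Quit();
    return 0;
}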

So, here is the snippet: https://github.com/theZiz/sparrow3d/blob/master/sparrowPrimitivesAsm.c If you are interested in using it, feel free to take the code under whatever license you want, just credit me somewhere. :) Keep in mind that the code only works with 16-bit surfaces!
 
I prefer 32-bit surfaces so I can have nice gradients without banding, etc. But your code can be useful for stuff that uses 16-bit surfaces.
 
Furthermore, it can easily be ported to 32 bit:

  • The odd-halfword workaround can be removed, I guess
  • orr %2,%2,%2,lsl #16 of course doesn't make sense anymore
  • and the subtractions have to be halved, e.g. subs %1,%1,#22 becomes subs %1,%1,#11
Then it should work. Just try it! I don't work with 32-bit surfaces (I decode and encode the 16-bit color on the fly for fading), and since I don't own a Pandora I'm not in the mood to program blind. ^^'
 
I tried running Forsaken (ProjectX) using this improved SDL; however, it crashes (it doesn't crash with the stock version, though that runs very slowly).


1370810674.457 cli:
1370810674.478 SDL compile-time version: 1.2.14
1370810674.479 SDL runtime version: 1.2.14
omapsdl: opened tslib touchscreen
omapsdl: in_evdev: found "keypad" with 84 events (type 00100013)
omapsdl: in_evdev: found "nub0" with 3 events (type 00000007)
omapsdl: in_evdev: found "nub1" with 3 events (type 00000007)
omapsdl: in_evdev: found "gpio-keys" with 16 events (type 00000023)
omapsdl: skip /dev/input/event5 as ts
omapsdl: found 5 evdev device(s).
omapsdl: detected 800x480 'lcd' (0) screen attached to fb 1 and overlay 1
1370810674.662 file: check exists 'configs/debug.txt' = exits
1370810674.667 gamma read from config as 195, max 300
1370810674.667 gamma after clamp: 195
1370810674.668 gamma variable: 195/100.0F = 1.950000
omapsdl: layer resized 800x600 -> 800x480 to fit screen
fbdev: switching to 800x600@16
fbdev: /dev/fb1: 800x600@16
fbdev initialized.
xenv: X vendor: The X.Org Foundation, rel: 11203000, display: :0, protocol ver: 11.0
xenv: display is 800x480
omapsdl: dropping unhandled flags: 10000002

Program received signal SIGSEGV, Segmentation fault.
0x00000230 in ?? ()
(gdb) bt
#0 0x00000230 in ?? ()
#1 0x407c2c30 in ts_close () from /usr/lib/libts-1.0.so.0
#2 0x401fb93c in ?? () from /home/scraft/Documents/forsaken/libs/lib/libSDL-1.2.so.0
#3 0x401fb93c in ?? () from /home/scraft/Documents/forsaken/libs/lib/libSDL-1.2.so.0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)


For kicks I tried exporting SDL_OMAP_TS_FORCE_TSLIB=1 before running, but this didn't help.

Are there any suggestions for what I could try to resolve this? I am interested to see whether I get a performance boost with notaz's SDL.

Edit: Maybe worth mentioning: I am running from Slackware Linux, and I simply copied the libSDL library into my working directory before running the application (before doing this it couldn't find the video backend).
 
IIRC on the Pandora there is not much difference whether you use STMs, VSTMs or STRs in a tight loop for a memory fill.

Edit: somebody could write a driver that uses the OMAP's DMA for this purpose; that would help a bit.

This is probably the case if you're not hitting cache and going purely through the write buffers. If you can hit L1 or even L2 cache the NEON versions should be faster, if well scheduled.
 
I tried running Forsaken (ProjectX) using this improved SDL; however, it crashes (it doesn't crash with the stock version, though that runs very slowly).
You should either give me your binary so I can reproduce this, or you should build (or I can give you) a debug build of this libSDL to get a meaningful backtrace at least.
 
@notaz If it is convenient for you to provide me with a debug build of your SDL, that would be helpful; I will then be able to run against it and provide a better call stack (presumably you don't have a non-stripped version from the firmware to use addr2line with). I am not opposed to providing you with my binary, but I will need to check whether there are any required data files that would also need to be packaged up.
 
@notaz Thanks, I tried that; it doesn't crash using this lib, but there is also no speed improvement. The current thinking is that this is because it uses the Mesa GL implementation, which we'd expect to be slow. I tried the libGL from the other thread, but that crashes (separate message posted to that thread). I probably need to port GL -> GLES myself to get this doing something more useful (and it may well still perform really badly).

Back on topic: do you know why your posted library works for me whereas the one from my firmware doesn't? Has the one in the firmware been updated at any point? If so, can I execute something on the Pandora command line to verify which version I have?
 
Has the one in the firmware been updated at any point? If so, can I execute something on the Pandora command line to verify which version I have?
To check what version you have installed in the firmware I'd use:

sudo opkg list-installed libsdl*


To check if there are any updates available:

sudo opkg update
sudo opkg list-upgradable


- Neelix
 
notaz: sorry, mistake on my part; I hadn't activated your backend in my previous test environment. Using the lib that you posted first (not the one from ED's server), I get the following:


Program received signal SIGSEGV, Segmentation fault.
0x00000230 in ?? ()
(gdb) bt
#0 0x00000230 in ?? ()
#1 0x4093fc30 in ts_close () from /usr/lib/libts-1.0.so.0
#2 0x40203a24 in omapsdl_input_finish () from /usr/lib/libSDL-1.2.so.0
#3 0x401e3964 in SDL_VideoQuit () from /usr/lib/libSDL-1.2.so.0
#4 0x401bc564 in SDL_QuitSubSystem () from /usr/lib/libSDL-1.2.so.0
#5 0x401bc624 in SDL_Quit () from /usr/lib/libSDL-1.2.so.0
#6 0x0009b4dc in CleanUpAndPostQuit () at main.c:386
#7 0x0009ba20 in main (argc=1, argv=0xbefff2a4) at main.c:651


So I need to do some more investigating into why it decides to quit in the first place.

EDIT: The most suspicious thing is that it uses SDL_OPENGL as a flag to SDL_SetVideoMode, which returns NULL. In the stock SDL it then goes on to use the Mesa GL implementation, but perhaps the optimized version doesn't bother with this since it knows it'll be slow? Not sure.
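A quick way to pin that down would be to check the mode-setting call explicitly, something like this (sketch; the 800x600@16 numbers come from the log above):

#include <SDL/SDL.h>
#include <stdio.h>

/* Check the SDL_OPENGL request and fall back to a plain surface instead
 * of continuing with a NULL screen. */
SDL_Surface *init_video(void)
{
    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 16, SDL_OPENGL);
    if (screen == NULL) {
        fprintf(stderr, "SDL_OPENGL mode failed: %s\n", SDL_GetError());
        screen = SDL_SetVideoMode(800, 600, 16, SDL_SWSURFACE);
    }
    return screen;
}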
 