Hi,
I'm working on porting an old, half-finished project of mine over to GLES2 on the Pandora, but I've hit what appears to be a driver or kernel bug which results in a full-system lockup when rendering to a texture (using a framebuffer object). I'm running Hotfix 5 & using a cross-compiler toolchain from (IIRC) http://blogs.distant-earth.com/wp/?p=109.
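For reference, the render-target setup in the testcase is just the usual GLES2 texture-plus-depth-renderbuffer FBO pattern, along these lines (a from-memory sketch, so the sizes and parameters here are illustrative rather than copied verbatim from the tarball):

#include <stdio.h>
#include <GLES2/gl2.h>

/* Illustrative sketch only -- sizes/formats are not lifted verbatim
 * from the testcase. Returns the FBO; the colour texture comes back
 * via out_tex so it can be sampled in the second pass. */
static GLuint make_render_target(GLsizei w, GLsizei h, GLuint *out_tex)
{
    GLuint fbo, tex, depth_rb;

    /* Colour attachment: a plain texture we can sample from later. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* GLES2 FBOs get no depth buffer by default, so attach one. */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    /* Tie colour and depth attachments to the framebuffer object. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        fprintf(stderr, "FBO incomplete\n");

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *out_tex = tex;
    return fbo;
}

The status check reports the FBO as complete before the lockup, so the setup itself seems to be accepted by the driver.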
I've tried to put together a reasonably self-contained testcase (which isn't actually meant to do anything in particular, apart from exhibiting the crash):
http://panic.cs-bristol.org.uk/~jules/rtt-crash.tar.gz
TBH I have no idea where to begin tracking the problem down. Here are a few observations which may or may not be helpful:
1. A few frames are rendered (to the texture), though I can't tell whether they come out correctly. An FPS counter writing to stdout usually manages to print one value before the system crashes, corresponding to ~1 second's worth of frames at 60fps.
2. Commenting out the call to glBindFramebuffer() which redirects rendering to the FBO stops the code from crashing (search for "NO_CRASH_IF_THIS_IS_COMMENTED_OUT"); a simplified sketch of the frame loop follows this list.
3. Reducing the value of "unsigned int numpolys = 100" in render_shadow to, say, 10 makes the program run for longer, suggesting some kind of resource depletion issue (something not being freed per-frame?). Using a value of 50 makes the program run for ~2 seconds before locking up the system.
4. The render-to-texture example from the PVR SDK (Training Course "09_RenderToTexture") works fine for me. I can't figure out what the important difference is from my code.
5. GDB doesn't seem very helpful. A "crashed" binary seems to be spending a lot of time in ioctl.
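For reference, the frame loop boils down to something like the following (again a simplified, from-memory sketch: shadow_fbo, render_scene and the EGL handle names are placeholders rather than the real identifiers, and the viewport sizes are illustrative):

#include <GLES2/gl2.h>
#include <EGL/egl.h>

extern void render_shadow(void);   /* from the testcase: draws 'numpolys' polygons */
extern void render_scene(void);    /* placeholder name for the on-screen pass */

static void main_loop(EGLDisplay display, EGLSurface surface, GLuint shadow_fbo)
{
    for (;;) {
        /* Redirect rendering into the FBO -- this is the call referred to
         * in point 2 (NO_CRASH_IF_THIS_IS_COMMENTED_OUT). */
        glBindFramebuffer(GL_FRAMEBUFFER, shadow_fbo);
        glViewport(0, 0, 256, 256);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_shadow();

        /* Back to the window surface; draw the scene using the texture. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, 800, 480);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene();

        eglSwapBuffers(display, surface);
    }
}

If point 3 really does indicate resource depletion, then presumably the per-frame path through render_shadow is the place to look for something that isn't being freed, but nothing has jumped out at me so far.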
The test case should compile with just "./compile.sh"; then copy rtt-crash, shadowcast.vtx and shadowcast.frag to the target & run. (Bits of this code are stolen from all over the place without attribution, so sorry about that. I've also cut it down quite quickly & dirtily from the larger project, so it probably doesn't make much sense any more.)
So: has anyone played with this stuff before, and can anyone confirm whether it works for them? Can anyone see anything blatantly stupid I've done? Does anyone perhaps have a system running a later kernel (with the SGX drivers enabled) who could test it for me? (I didn't have any luck building one myself.)
I'd welcome any comments or ideas. TIA!
Cheers,
Jules