Release TinyGLES


lunixbochs

OpenGL(ES) on the Pandora has a massive bottleneck (other than the fact it's a GPU from 2005): texture uploads are extremely slow, and require an inordinate amount of CPU time.
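
To make that concrete, the painful pattern is any game that streams a full screen of pixels into a texture every frame. Here's a generic GLES 1.x sketch of that path (sizes and names are illustrative, not from any particular game):

    #include <GLES/gl.h>

    /* ES 1.x wants power-of-two textures, so allocate 1024x512 once... */
    GLuint make_stream_texture(void) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        return tex;
    }

    /* ...then stream an 800x480 frame into it every frame.  That's
     * ~1.5MB per call that the driver copies and converts on the CPU,
     * which is exactly where the time goes. */
    void upload_frame(GLuint tex, const void *pixels) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 800, 480,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }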

I've toyed with solutions for a while, and I'm here to announce a new project.

TinyGLES is a software-rendered OpenGL ES library aiming to fully replace the SGX for 2D rendering (with much higher framerates) and basic 3D rendering. It's an extension of my work on glshim and a revival of an old project by Fabrice Bellard called TinyGL.

My current reference OpenGL branch of Uplink with batched pixel rendering (which means no z-order for glDrawPixels and glBitmap) runs at 1-4fps on my CC Pandora using glshim and the SGX 530 GPU.

If I use TinyGLES, Uplink's framerate (still on my CC) is currently closer to 35-60fps, with a potential 20fps+ boost if I remove X11 from the equation. This includes z-ordering, so layers overlap properly. I'd guess at another 30-100fps+ boost from optimizing the 3D rendering pipeline with NEON and adding special cases for obvious 2D rendering paths.

Status:

    - Beta-quality.

        - I'm still really bad at NEON.

    - Some crashes.

    - Visual problems in some cases.

    - Incomplete GL ES feature list (I got a 20fps boost in Uplink just by disabling my debug "STUB" prints)

    - Currently requires you to use GLX instead of EGL (and honestly it works much better with glshim in front of it).

Goals:

    - Fully NEON-optimized rendering (the code is mostly C right now)

    - Full OpenGL ES 1.x compatibility (it's okay if glshim is used for some of this)

    - Some OpenGL functionality that would be much faster to include directly (or virtually impossible to bolt on with glshim)

        - glDrawPixels (because we can directly write to the internal framebuffer)

        - glRenderMode is far easier (GL_SELECT is already supported; see the sketch after this list)

        - glReadPixels can use the depth buffer (used in Cube and AssaultCube)
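
For anyone unfamiliar with GL_SELECT, here's roughly what a picking pass looks like from the app side (illustrative desktop-GL code, not code from tinygles itself):

    #include <GL/gl.h>

    #define SELECT_BUF_SIZE 512
    static GLuint select_buf[SELECT_BUF_SIZE];

    /* returns how many tagged objects were "hit" by the drawn geometry */
    int pick_objects(void) {
        glSelectBuffer(SELECT_BUF_SIZE, select_buf);
        glRenderMode(GL_SELECT);   /* nothing is rasterized in this mode */
        glInitNames();
        glPushName(0);

        glLoadName(1);             /* tag the next draws as object 1 */
        /* ... draw object 1 ... */
        glLoadName(2);             /* tag the next draws as object 2 */
        /* ... draw object 2 ... */

        /* switching back returns the hit count; records are in select_buf */
        return glRenderMode(GL_RENDER);
    }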

I may eventually integrate this completely into glshim, as a helper (for stuff like GL_SELECT) and as an optional rendering backend.

Usage:

    - Grab the NEON branch from here: https://github.com/lunixbochs/tinygles

    - Grab the tinygles branch from glshim: https://github.com/lunixbochs/glshim

    - Build both projects, then throw the libs in your library path (you might need to symlink multiple variants: libGL.so, libGL.so.1, libGL.so.1.2.0)

This glshim branch has a new environment variable, LIBGL_GLES, which overrides the path of the GLES client library.
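
Under the hood, that kind of override usually amounts to something like the following (a sketch of the general mechanism, not the literal glshim code; the fallback library name is an assumption):

    #include <dlfcn.h>
    #include <stdlib.h>

    /* open the GLES client library, honoring the LIBGL_GLES override */
    void *open_gles(void) {
        const char *path = getenv("LIBGL_GLES");
        if (!path)
            path = "libGLESv1_CM.so";   /* assumed system default */
        return dlopen(path, RTLD_LAZY | RTLD_LOCAL);
    }

So you'd point LIBGL_GLES at your tinygles build before launching a game to have glshim render through it.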

I'll see about putting together a demo PND for this.
 
This could be the thing to get those extra frames needed for a lot of stuff like Goonies, rRootage, or Super Maryo Chronicles.
 
Software (CPU) GLES that is faster than hardware (GPU) GLES? Wut? Wow...
 
Wow, nice! Do you have any benchmark data, for example how long an 800x480 alpha-blended blit takes? I'm assuming these sorts of operations will tie up the CPU/memory, so I'm trying to get a feel for how much frame time would be used in a game that overdraws the screen a few times per frame.
 
The texture uploads may be slow, but you can avoid them with sparse textures (GL_TEXTURE_3D) or atlases.

This is more doable with 2D games, but with 3D games the texture uploads are an annoyance indeed.
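
For reference, the atlas trick amounts to uploading many small images into one big texture once, then selecting sub-images with texture coordinates instead of re-uploading. A rough sketch (tile and atlas sizes are illustrative):

    #include <GLES/gl.h>

    /* copy one 64x64 tile into grid slot (sx, sy) of a 1024x1024 atlas */
    void atlas_add_tile(GLuint atlas, int sx, int sy, const void *pixels) {
        glBindTexture(GL_TEXTURE_2D, atlas);
        glTexSubImage2D(GL_TEXTURE_2D, 0, sx * 64, sy * 64, 64, 64,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }

    /* texture coords for that slot: each tile spans 64/1024 of the atlas */
    void atlas_tile_coords(int sx, int sy, GLfloat uv[8]) {
        GLfloat u0 = sx * 64 / 1024.0f, v0 = sy * 64 / 1024.0f;
        GLfloat u1 = u0 + 64 / 1024.0f, v1 = v0 + 64 / 1024.0f;
        GLfloat quad[8] = { u0, v0,  u1, v0,  u1, v1,  u0, v1 };
        int i;
        for (i = 0; i < 8; i++)
            uv[i] = quad[i];
    }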
 
I find it quite sad to think that the hardware has such a serious bottleneck that we get better results using software+NEON rather than the dedicated hardware...
 
I find it quite sad to think that the hardware has such a serious bottleneck that we get better results using software+NEON rather than the dedicated hardware...
That's true, but you can see here how a community and talented people can keep a device alive. If there is some limitation, try to find a way around it.

This is why I love such devices.

The iPhone 3GS, which has basically the same SoC as the Pandora, is simply abandoned and won't profit from any of the stuff being developed now for the Pandora, N900, etc.
 
I find it quite sad to think that the hardware has such a serious bottleneck that we get better results using software+NEON rather than the dedicated hardware...
That's true, but you can see here how a community and talented people can keep a device alive. If there is some limitation, try to find a way around it.
This is why I love such devices.

The iPhone 3GS, which has basically the same SoC as the Pandora, is simply abandoned and won't profit from any of the stuff being developed now for the Pandora, N900, etc.
And this is why I'm so excited for the Pyra. I'll be keeping my Pandora, and hopefully the community can continue to grow and support both. I will finally have a reason to own more than one of these beautiful microcomputers, and maybe even get some multiplayer games working between them!
 
So far, the games I've managed to get in-game with some performance benefit are Bitfighter, Uplink, and Jumpman.

Wow, nice! Do you have any benchmark data, for example how long an 800x480 alpha-blended blit takes? I'm assuming these sorts of operations will tie up the CPU/memory, so I'm trying to get a feel for how much frame time would be used in a game that overdraws the screen a few times per frame.
I have no idea yet! :) It only works with some examples and one game thus far, so performance will be a moving target.

The texture uploads may be slow, but you can avoid them with sparse textures (GL_TEXTURE_3D) or atlases.

This is more doable with 2D games, but with 3D games the texture uploads are an annoyance indeed.
I'm running into workloads that require (at least) a texture upload per frame, and that's still pretty slow (and in my case still doesn't result in very accurate rendering).
 
Just to make sure I understand it right:

You want to use it with glshim to speed up, with NEON code, the GL functions that are slower on the GPU than on the CPU, so the Pandora automatically uses whichever is faster?
 
The main benefit of integrating this into mainline glshim is the features glshim shouldn't need to implement on its own (like GL_SELECT), plus the ability to switch out the entire renderer (not just parts of it) when a software-rendered scene would be faster.

I think the SGX is too limited to do a hybrid mode where I render a major part of the scene in software and the rest in hardware; I can't read the depth buffer back, AFAIK.
 