SGX GLES 1.1 performance issues


TheGoodDoktor

Hi

So I've managed to get my rendering engine running on a Beagleboard using the SGX. At the moment I'm using GLES 1.1 and I've been having a lot of trouble getting the scene to render at a decent frame rate.
The framerate definitely seems to be dependent on where the camera is, and at its slowest I can actually see the frame draw in two halves (or so it looks).
My first thought was that I had alpha testing or blending switched on all the time, but switching them off does nothing for the framerate.
I'm pretty sure it's not CPU-related, as debug and optimised builds run at the same speed.
I was thinking it could be something to do with the SGX driver's implementation of GLES 1.1, but I was wondering if anyone else has seen this happen.
I am in the process of writing a GLES 2.0 version of the renderer, which was always part of the plan, so I will be able to see if that makes a difference.
I'm running at 1280x768, which is admittedly quite high, but I don't think that explains the huge variance in frame rate I've been experiencing (<1fps to ~30fps).

Cheers,
TheGoodDoktor
 
How about running with a profiler?
The Beagleboard wiki also says there are problems with the SGX driver at 1024x768 - a visible vertical line splits the frame - so that might be your problem; perhaps it affects all resolutions with a height of 768 pixels.
 
Cheers for the quick replies!
I forgot to mention that I managed to get another game running at 1280x768 with no performance problems, although that one was 2D only.
I will try running it at something like 640x480, but changing res on the Beagleboard is a bit of a pain, which is why I haven't tried it yet.
 
Changing the resolution is no problem, as it just requires changing the boot args, IIRC - building the SGX driver into the image is a lot harder. Or is the SGX driver included in the beagleboard-demo-angstrom image already?
 
Yes. I think I was a little perturbed because it took me so long to get it running at the current res. I'd also need to change my SDL initialisation.
I'll get the Beagleboard out and give it a go tomorrow, although I have my doubts this is the problem, given the variance in framerate I experienced on a scene of purely opaque polygons.
 
TheDoktor said:
Yes. I think I was a little perturbed because it took me so long to get it running at the current res. I'd also need to change my SDL initialisation.
I'll get the Beagleboard out and give it a go tomorrow, although I have my doubts this is the problem, given the variance in framerate I experienced on a scene of purely opaque polygons.
since you mentioned it's scene-content-dependent, you need to check the following things (see the counting sketch after the list):

* current number of draw calls - pay close attention to whether that changes at the dips; a non-optimal driver can hit you badly there.
* current geometric complexity - # vertices, # triangles
* current texture usage - keep a per-frame running sum of the texture memory used. also, try to tell if texture swapping occurs - that one can be tricky, as gl is rather thick-skulled when it comes to fine control over texture management. i usually check the texture residency status before use and draw conclusions from that.
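
something along these lines for the counting (a rough sketch - gStats and the wrapper names are mine, wire them into your own renderer):

#include <stdio.h>
#include <string.h>
#include <GLES/gl.h>

/* per-frame stats, reset at the end of each frame */
typedef struct {
    unsigned drawCalls;
    unsigned vertices;
    unsigned textureChanges;
} FrameStats;

static FrameStats gStats;

/* route all draws through this instead of calling glDrawElements directly */
static void statsDrawElements(GLenum mode, GLsizei count,
                              GLenum type, const void *indices)
{
    gStats.drawCalls += 1;
    gStats.vertices  += (unsigned)count;
    glDrawElements(mode, count, type, indices);
}

/* count only the binds that actually change the texture */
static void statsBindTexture(GLuint tex)
{
    static GLuint current = 0;
    if (tex != current) {
        gStats.textureChanges += 1;
        current = tex;
    }
    glBindTexture(GL_TEXTURE_2D, tex);
}

/* call once per frame, after swapping buffers */
static void statsFlush(void)
{
    printf("draw calls %u, vertices %u, tex changes %u\n",
           gStats.drawCalls, gStats.vertices, gStats.textureChanges);
    memset(&gStats, 0, sizeof gStats);
}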
 
I've received pretty good performance results from cpasjuste for running Irrlicht with the ogl-es 1.1 drivers. A fully lightmapped Q3 scene already ran at more than 20 FPS without any optimizations. That was at the Pandora resolution, though. It should get much higher with VBOs and such, but since I don't have the necessary hardware, those tests and optimizations have to wait some more.
 
I had a brief talk with TI about this today. They said if you are using blit mode on the SGX, don't - it's broken and slower than software blits. Render directly to the framebuffer instead.
 
I've done some tests now.
Changing the res to 640x480 didn't make any noticeable difference, and neither did disabling texturing.

I've done some basic stats recording and this is typically what I get:

Frame Time 228 ms
[Misc.] Texture Changes 0, Draw Calls 0, Vertices 0
[2D UI] Texture Changes 6, Draw Calls 8, Vertices 1535
[GMesh] Texture Changes 341, Draw Calls 1967, Vertices 8983
[Skel.] Texture Changes 60, Draw Calls 3501, Vertices 18054
[Model] Texture Changes 0, Draw Calls 0, Vertices 0
[Part.] Texture Changes 0, Draw Calls 0, Vertices 0
Total: Texture Changes 407, Draw Calls 5476, Vertices 28572

The total is probably the best figure to look at as the categories aren't 100% correctly assigned.
 
TheDoktor said:
I've done some tests now.
Changing the res to 640x480 didn't make any noticeable difference, and neither did disabling texturing.

I've done some basic stats recording and this is typically what I get:

Frame Time 228 ms
[Misc.] Texture Changes 0, Draw Calls 0, Vertices 0
[2D UI] Texture Changes 6, Draw Calls 8, Vertices 1535
[GMesh] Texture Changes 341, Draw Calls 1967, Vertices 8983
[Skel.] Texture Changes 60, Draw Calls 3501, Vertices 18054
[Model] Texture Changes 0, Draw Calls 0, Vertices 0
[Part.] Texture Changes 0, Draw Calls 0, Vertices 0
Total: Texture Changes 407, Draw Calls 5476, Vertices 28572

The total is probably the best figure to look at as the categories aren't 100% correctly assigned.
if those totals are correct then don't expect to see any change from resolution - you are entirely draw-call limited.

a modern, decently optimised gl driver tolerates about 400-700 (depending on what not) draw calls/frame before you become draw-call limited. d3d10 has better tolerance (1-1.5k). but the numbers you have there are simply gargantuan - there's nothing the gpu can do to speed things up at this stage.
 
Draw calls and texture changes seem to be your problem, just like blu noted.
I get away with about 100 draw calls for EVERYTHING, and about 10 shader changes and 100 texture changes, for a whole full-blown mountain scene (plus some additional render stages for impostors etc. - but those don't run every frame).
(In OpenGL ES 2.0, though.)
 
blu said:
TheDoktor said:
I've done some tests now.
Changing the res to 640x480 didn't make any noticeable difference, and neither did disabling texturing.

I've done some basic stats recording and this is typically what I get:

Frame Time 228 ms
[Misc.] Texture Changes 0, Draw Calls 0, Vertices 0
[2D UI] Texture Changes 6, Draw Calls 8, Vertices 1535
[GMesh] Texture Changes 341, Draw Calls 1967, Vertices 8983
[Skel.] Texture Changes 60, Draw Calls 3501, Vertices 18054
[Model] Texture Changes 0, Draw Calls 0, Vertices 0
[Part.] Texture Changes 0, Draw Calls 0, Vertices 0
Total: Texture Changes 407, Draw Calls 5476, Vertices 28572

The total is probably the best figure to look at as the categories aren't 100% correctly assigned.
if those totals are correct then don't expect to see any change from resolution - you are entirely draw-call limited.

a modern, decently optimised gl driver tolerates about 400-700 (depending on what not) draw calls/frame before you become draw-call limited. d3d10 has better tolerance (1-1.5k). but the numbers you have there are simply gargantuan - there's nothing the gpu can do to speed things up at this stage.

It does look like the draw calls, doesn't it!
It generally works by doing a draw call per tri-strip, which is a bad thing. I'm a bit out of touch with rendering APIs (as you've probably guessed!) so I need to do a bit of studying!
The renderer is still at the 'just get it working' stage, but I guess now is the time to strip it down and optimise it piece by piece - something I was planning to do anyway.
On a positive note, I almost have a fully working commercial game running on the Pandora!

Thanks for all your help & advice!
TheDoktor
 
I'd strongly recommend using indexed triangle lists, i.e. glDrawElements(GL_TRIANGLES, ...). Also make sure all your vertex data (including indices) is stored in buffer objects, using the smallest adequate data type. This can make quite a performance difference.
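
A minimal sketch of what I mean (assuming ES 1.1 core buffer objects; the function and variable names here are illustrative, not from your engine):

#include <GLES/gl.h>

static GLuint vbo, ibo;

/* upload static geometry once: positions in a VBO, indices in an IBO */
static void createBuffers(const GLfloat *verts, GLsizei vertexCount,
                          const GLushort *indices, GLsizei indexCount)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
                 verts, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    /* GLushort is the smallest adequate type for meshes under 64k vertices */
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort),
                 indices, GL_STATIC_DRAW);
}

/* draw the whole mesh with a single indexed call */
static void drawMesh(GLsizei indexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    /* the last argument is a byte offset into the bound index buffer */
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const void *)0);
}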
 
TheDoktor said:
It does look like the draw calls, doesn't it!
It generally works by doing a draw call per tri-strip, which is a bad thing. I'm a bit out of touch with rendering APIs (as you've probably guessed!) so I need to do a bit of studying!
The renderer is still at the 'just get it working' stage, but I guess now is the time to strip it down and optimise it piece by piece - something I was planning to do anyway.
i understand you - you're porting code that was written for a light-weight HAL (where draw calls are no problem), which preferred tristrips. i'd suggest you follow xmas' advice and index all strips (even de-stripify them into lists), then batch together everything originating from the same object space.
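
de-stripifying is mechanical, something like this (a sketch - the names are mine; a strip of N indices becomes a list of 3*(N-2), which you can then concatenate with other lists and draw in one call):

#include <stddef.h>

/* turn one triangle strip into an indexed triangle list.
   'out' needs room for 3*(n-2) indices. returns indices written. */
static size_t destripify(const unsigned short *strip, size_t n,
                         unsigned short *out)
{
    size_t w = 0;
    for (size_t i = 2; i < n; ++i) {
        /* every other triangle in a strip has reversed winding */
        if (i & 1) {
            out[w++] = strip[i - 1];
            out[w++] = strip[i - 2];
        } else {
            out[w++] = strip[i - 2];
            out[w++] = strip[i - 1];
        }
        out[w++] = strip[i]; /* degenerate joining triangles can be filtered here */
    }
    return w;
}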

On a positive note, I almost have a fully working commercial game running on the Pandora!
..which was my ulterior motive behind posting in this thread ; )
 
The original code wrote tri-strips directly to the command list that was sent to the gfx chip.
I'm now using indexed triangle lists, with VBOs built from the original tri-strips.

I now typically get:
Frame Time 34 ms
[Misc.] Texture Changes 0, Draw Calls 0, Vertices 0
[2D UI] Texture Changes 6, Draw Calls 8, Vertices 1535
[GMesh] Texture Changes 143, Draw Calls 164, Vertices 5931
[Skel.] Texture Changes 9, Draw Calls 8, Vertices 6450
[Model] Texture Changes 0, Draw Calls 0, Vertices 0
[Part.] Texture Changes 0, Draw Calls 4, Vertices 16
Total: Texture Changes 158, Draw Calls 184, Vertices 13932

Maxing out at:
Frame Time 53 ms
[Misc.] Texture Changes 0, Draw Calls 0, Vertices 0
[2D UI] Texture Changes 6, Draw Calls 8, Vertices 1535
[GMesh] Texture Changes 261, Draw Calls 310, Vertices 18432
[Skel.] Texture Changes 15, Draw Calls 16, Vertices 16317
[Model] Texture Changes 0, Draw Calls 0, Vertices 0
[Part.] Texture Changes 0, Draw Calls 0, Vertices 0
Total: Texture Changes 282, Draw Calls 334, Vertices 36284

As you can see, I'm still not getting 30fps. I thought it might be the texture changes, but removing them made no difference. It could be CPU, but debug & release builds aren't noticeably different. I think I'll put some millisecond timers throughout the game loop to get a better idea of what's going on - see the sketch below.
I also seem to be losing input from SDL after a while; this might be because I'm not polling it enough (I poll it once every game loop). I still have quite a lot of stability issues that I don't get on any other platform - it might be the drivers, I guess.
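
Something like this is the plan (a sketch - Update/Render stand in for my own functions; it also drains the whole SDL event queue each frame rather than polling a single event):

#include <SDL.h>
#include <stdio.h>

extern void Update(void); /* placeholder for the game update */
extern void Render(void); /* placeholder for the renderer */

static void gameLoop(void)
{
    int running = 1;
    while (running) {
        SDL_Event e;
        /* drain everything queued since last frame, not just one event */
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT)
                running = 0;
        }

        Uint32 t0 = SDL_GetTicks();
        Update();
        Uint32 t1 = SDL_GetTicks();
        Render();
        Uint32 t2 = SDL_GetTicks();

        printf("update %u ms, render %u ms\n",
               (unsigned)(t1 - t0), (unsigned)(t2 - t1));
    }
}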

Almost there!
 
glad to hear about the progress!

for the sake of experiment, can you try for those worst-case scenes to decrease the vertex count by 50%? say, invoke glDrawElements with a halved count argument and see what happens.
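
something like this (a sketch - indexCount is whatever your renderer currently passes; keeping the count a multiple of 3 keeps GL_TRIANGLES happy):

GLsizei count = indexCount;
#ifdef HALVE_VERTEX_LOAD /* halve the per-call load for the experiment */
count = ((indexCount / 2) / 3) * 3;
#endif
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, (const void *)0);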
 
200-300 draw calls .. that's still a lot for a device like this.

I would look into that.
 
warmi said:
200-300 draw calls .. that's still a lot for a device like this.

I would look into that.

I have been, and still am.
Can you provide any links to documentation that recommends acceptable ranges?
 
i'm not aware of such documented recommendations, not for the omap anyway (xmas might be able to help us here, though). generally, it's a matter of multiple factors, so in the end it's all empirical.

you can, though, measure the operational parameters of a given graphics pipeline by crawling a metrics space defined by axes of:
1. number of draw calls per frame
2. complexity of each individual draw call (vertices, indices)
3. a bunch of other potential variant parameters of your choice.

and locate the sweet/bitter spots in that space.
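
a crawl could look like this (a sketch - drawTestScene and getMilliseconds are placeholders for your own test scene and timer):

#include <stdio.h>

extern void drawTestScene(int drawCalls, int vertsPerCall); /* placeholder */
extern unsigned getMilliseconds(void);                      /* placeholder */

static void crawlMetricsSpace(void)
{
    static const int calls[] = { 50, 100, 200, 400, 800 };
    static const int verts[] = { 32, 128, 512, 2048 };

    for (int c = 0; c < 5; ++c) {
        for (int v = 0; v < 4; ++v) {
            unsigned t0 = getMilliseconds();
            for (int f = 0; f < 100; ++f) /* average over 100 frames */
                drawTestScene(calls[c], verts[v]);
            unsigned t1 = getMilliseconds();
            printf("%d calls x %d verts: %.2f ms/frame\n",
                   calls[c], verts[v], (t1 - t0) / 100.0);
        }
    }
}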
 