MadDog
Hi, a few years ago I played about with the GP2X and had a lot of fun, but then sold it all when work got heavy and I had no time. For the last year or so I've been working with OMAP chips and other ImgTech-based SoCs, and I thought I'd pop by and give a warning about the discard function in a pixel shader.

The docs do say it's slow, but there is a rather nasty gotcha in there. Because GLES 2.0 shaders cannot share constants as you can in DX, you tend to write uber-shaders. For the most part this is fine, but the discard function (mainly used for 1-bit stencil-type ops) is slow. This is because the hardware does the Z rejection before the pixel shader is run, so if discard is called the chip has to do a 'loop back' to correct the Z buffer, which can halve the performance. Now here is the gotcha: when the shader compiler sees discard being used it flags the shader, and the loop back is turned on for every pixel, even when discard is never actually executed (normally it would be guarded by a uniform set to false).
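
To make the trap concrete, here's a minimal sketch (my own illustration, not the actual shader) of the kind of guarded discard that still triggers the slow path:

CODE
/* GLSL ES 2.0 fragment shader, embedded as a C string. The mere
   presence of 'discard' in the source puts the whole shader on the
   slow loop-back path, even on draws where u_alphaTest is false. */
static const char *frag_src =
    "precision mediump float;                           \n"
    "uniform sampler2D u_texture;                       \n"
    "uniform bool u_alphaTest; /* hypothetical toggle */\n"
    "varying vec2 v_uv;                                 \n"
    "void main() {                                      \n"
    "    vec4 c = texture2D(u_texture, v_uv);           \n"
    "    if (u_alphaTest && c.a < 0.5)                  \n"
    "        discard; /* flags the whole shader */      \n"
    "    gl_FragColor = c;                              \n"
    "}                                                  \n";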

I found this out with the software I'm working on: a 3D satnav (buildings, landmarks, landscape; it looks like Google Earth in your hand). I was using 1-bit alpha, and therefore discard, so that the landmarks can have complicated outlines without needing thousands of polygons. When I added the code the app halved its frame rate! After TI looked into it, it turned out that because I 'may' use discard in the shader, the loop back was on 100% of the time. I'm currently trying to get a GL extension added so that the loop back can be turned off by the app when the uniform used to toggle 1-bit alpha is set to false.

Currently the best way round the problem is to separate the shaders out so that discard is only in a shader where it's needed for every pixel. It's an odd situation to be in, as in the 20 years I've been writing 3D games and apps, 1-bit alpha has always been the fastest type of alpha available. Some of the first 3D cards only had 1-bit (stencil) alpha! :)
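
As a rough sketch of what I mean (the names here are just for illustration), the app keeps two compiled programs and picks one per draw, so only the geometry that genuinely needs the punch-through test pays for it:

CODE
#include <GLES2/gl2.h>

typedef struct { int has_1bit_alpha; /* ... */ } Mesh; /* hypothetical */

GLuint prog_opaque; /* the shader compiled without discard         */
GLuint prog_punch;  /* the same shader with the alpha test/discard */

void draw_mesh(const Mesh *m)
{
    /* Only punch-through geometry takes the slow loop-back path. */
    glUseProgram(m->has_1bit_alpha ? prog_punch : prog_opaque);
    /* ... bind attributes, set uniforms, glDrawElements ... */
}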

Anyway, as soon as this little baby is out I'm going to get one; the OMAP3 is a peach of a chip. From what I've been doing, think Xbox 1 or GC type power but with better shaders! It's very fast, I love it.

[screenshot: RealityMobile.jpg]
 
'centralnoise' said:
Well that sure is a weird bug.

Nice shots btw =D.
Thanks. :)

It's not a 'bug', just the way it has to work because they do the Z buffer test before the pixel shader. I'm sure it could be corrected with a GL extension to allow manual control of the loop back. But yes, to me too it's a bug. ;)
 
'MadDog' said:
Currently the best way round the problem is to separate the shaders out so that discard is only in a shader where it's needed for every pixel.
Well, I wouldn't think of this as a workaround but rather the preferred way of doing things :)

Personally I am not that big on uber-shaders, precisely because you are at the mercy of your compiler and the hardware implementation.
A small set of robust shaders (by robust I mean relatively complicated, but without drastically different internal logic paths), grouped together in the render queue, would IMHO be safer.
 
Yes, I normally do a set of shaders aimed at the jobs I need, but after playing with deferred lighting on the PC I'm starting to like uber-shaders. ;)

The problem I've found with GLES 2.0 shaders is that you cannot share uniforms between shaders as you can with DX, so when you change shaders you need to update all the uniforms of the new shader that have changed since it was last used. This is not too bad if the game is using a bespoke 3D engine that is fixed to a platform and used just for that game (the best way to get speed), but when an application is using a third-party engine it's hard to avoid unnecessary uniform uploads. I can see why GLES 2.0 does not allow you to lock a uniform to a shader register (again, this can be done in DX), as it could leave an app locked to a small subset of hardware. But not allowing shared shader variables and having the compiler deal with them is an oversight in my mind, although I've never had to create a portable interface for 3D hardware where the targets can be so very different and so constrained.
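
One way to cut the redundant uploads (just a sketch of a shadow-copy scheme, not code from any real engine) is to keep the last value set for each uniform and skip the glUniform call when it hasn't changed:

CODE
#include <GLES2/gl2.h>
#include <string.h>

/* Hypothetical shadow cache. GLES 2.0 uniforms live per-program, so
   after glUseProgram any changed uniforms must be re-uploaded; keeping
   a CPU-side copy lets you skip the ones that haven't moved. */
typedef struct {
    GLint   location;   /* from glGetUniformLocation */
    GLfloat value[4];
} UniformSlot;

static void set_uniform4f(UniformSlot *slot, const GLfloat v[4])
{
    if (memcmp(slot->value, v, sizeof(slot->value)) != 0) {
        memcpy(slot->value, v, sizeof(slot->value));
        glUniform4fv(slot->location, 1, v); /* upload only on change */
    }
}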
 
I don't know a lot about shaders, but are you basically saying that the discard call might be extremely slow in some cases?

And that there's a circuit that becomes very slow whenever there's a chance of a discard call, so I should separate my rendering from my alpha testing to avoid using this discard circuit as much as possible?

:unsure:

I'll keep it in mind as soon as I figure it out. Nice screenshots btw.
 
'lulzfish' said:
I don't know a lot about shaders, but are you basically saying that the discard call might be extremely slow in some cases?

And that there's a circuit that becomes very slow whenever there's a chance of a discard call, so I should separate my rendering from my alpha testing to avoid using this discard circuit as much as possible?

:unsure:

I'll keep it in mind as soon as I figure it out. Nice screenshots btw.
Yes, that's it in a nutshell. Apart from that little gotcha the chip is very powerful.
 
'lulzfish' said:
I don't know a lot about shaders, but are you basically saying that the discard call might be extremely slow in some cases?

And that there's a circuit that becomes very slow whenever there's a chance of a discard call, so I should separate my rendering from my alpha testing to avoid using this discard circuit as much as possible?

:unsure:

I'll keep it in mind as soon as I figure it out. Nice screenshots btw.
This is not something that they will be able to "fix" easily, because it is tied to the way their tile-based renderer works.

Tile-based rendering offers a lot of advantages but, as with everything, there are some tradeoffs, and unusually slow alpha testing is one of them.
 
How does using full-on alpha blending with zero alphas compare in speed?
 
Does ImgTech know about this?
They might have checked that some functions work, but not actually benchmarked all of them, so they may not know this operation is slow. It's a nice thing to warn them, even if only for them to know about it.
 
I remember on the Dreamcast we had to separate polygons into solid, punch-through and alpha; it had a display list for each type. Because of the way the tile architecture worked, solid polys were very fast, with overdraw not being an issue. Punch-through and alpha were progressively slower, so we used punch-through for alpha masks and alpha blending only when we absolutely had to. It did sort alpha though, which was double-plus cool.

Vidi well droogies: http://www.youtube.com/watch?v=ep4_lV62KJs...feature=related

Even with YouTube video compression you can see the moiré in the fences.
 
'.Gogeta§§J4BR.' said:
Does ImgTech know about this?
They might have checked that some functions work, but not actually benchmarked all of them, so they may not know this operation is slow. It's a nice thing to warn them, even if only for them to know about it.
Do they know?

-----------------------------
PowerVR SGX.OpenGL ES 2.0 Application Development Recommendations.1.1f.External.pdf (part of their SDK)

Section 7.10 Discard
....
On PowerVR SGX discard is an expensive operation because it requires a fragment shader pass to accurately determine visibility of fragments. The visibility information then has to be fed back to the Image Synthesis Processor before continuing to perform the depth and stencil test for other polygons.

For this reason you should avoid discard whenever possible.
.....
--------------------------------

As you can see they are not exactly hiding this information ...
 
'Exophase' said:
How does using full-on alpha blending with zero alphas compare in speed?
quite favorably.

every gpu architecture i have touched upon for the past 5 years has had issues with fragment* discard, same with depth-update shaders - these ops are disruptive to the flow of a contemporary fragment pipeline, and should be avoided whenever possible.

particularly regarding discard - its cost across gpu hw generations has varied from no-big-difference (when it employed whatever facilities were there to mask off your color and depth outputs - but that's on dx7-class hw) to a prohibitively-expensive no-no of an op these days. and it is always on a per-shader basis - nothing is per-fragment (as all fragments of a shader share the same flow control, otherwise things like partial derivatives would never work).

@.Gogeta§§J4BR.

imgtec do not need to be told about this - they are keenly aware of the bottlenecks in their own architectures. actually they warn developers against these pitfalls at every given occasion - check their img insider program newsletters, particularly the columns on mbx/sgx optimisation techniques.

heck, their sgx Application Development Recommendations doc has a section dedicated to discard:

QUOTE
7.10. Discard
The GLSL ES fragment shader operation discard can be used to stop fragment processing and prevent any buffer updates for this fragment. It provides the same functionality as the fixed function alpha test in a programmable fashion.
On PowerVR SGX discard is an expensive operation because it requires a fragment shader pass to accurately determine visibility of fragments. This visibility information then has to be fed back to the Image Synthesis Processor (ISP) before continuing to perform the depth and stencil test for other polygons.
For this reason you should avoid discard whenever possible. Often the same visual effect can be achieved using the right alpha blend mode and forcing the alpha value to 0 where discard would be used. If you really need to use discard, make sure that you render objects using this shader after all opaque objects have been submitted.


* in gl terms: the pixel before reaching the ROPs stage

ed: bah, beaten to the punch.
 
Yeah, they know - it was them who told me why my frame rate had halved. The mistake I'd made was thinking that if my shader had a uniform to enable/disable discard, it would not affect things while the 'if' was false. But as it turns out that's not true: just having discard in the shader means GLES 2.0 has to turn on the loop back for every pixel, because discard may be used. It's a little gotcha you could hit even after reading their documentation, because the shader compiler has no way of knowing how you control whether discard is going to be used or not. The only way to allow discard in a shader without affecting the frame rate when it's not used would be a GL extension giving the app control of this loop back. But even that may be problematic; without knowing how it is controlled via the driver it's all speculation.

The bottom line is: if you need discard in a shader, then put it in its own shader.

I posted this warning because it seemed an easy trap to fall into and it will have a serious impact on the frame rate. I didn't want to see loads of posts from people wasting time trying to work out why their game is slower than everyone else's.

If you can do it with normal alpha blending then do so, although this is not a perfect solution, as you need the solid pixels to write to the Z buffer and normal alpha blending is done with the Z-buffer write turned off.
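
For reference, the blend-based fallback is something like this sketch, with the Z-write caveat right there in the state it sets:

CODE
#include <GLES2/gl2.h>

/* Replacing discard with blending: the shader forces alpha to 0 where
   it would have discarded. The caveat from above: blended geometry is
   normally drawn with depth writes off, so the solid texels of the
   cut-out no longer write to the Z buffer. */
void begin_cutout_as_blend(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE); /* this is where the Z write is lost */
}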
 