Graphics Concept

Poll: Do you want to see this in future Pandora games? (49 voters)

dflemstr said:
JayFoxRox said:
You can discard pixels which don't belong to the sphere in the pixel shader.
Not in practice, because GLSL's discard is too slow on the SGX, iirc (someone made a post about this a couple of months ago). You shouldn't use it, ever. It's faster to just return alpha=0 and use ARGB textures.

I don't think this is actually the same. Correct me if I'm wrong, but the way I understand things to work, an entire primitive has depth updating on or off; it can't work per-pixel. Meaning that if you have a primitive that's only partially transparent, you either end up with the opaque parts not updating depth or the transparent parts updating depth, either of which will cause render-order problems unless you sort the primitives.

What I don't understand is why discard seems to blow performance for the entire scene, even if it's only being used for a few fragments. I'd think that at worst it should only hurt the tiles it's used in.
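A minimal sketch of the per-primitive limitation described above, in GLES2-style C (the draw helper is made up): glDepthMask applies to every fragment of a draw call, so the alpha=0 "holes" and the opaque pixels must share one depth-write setting.

```c
#include <GLES2/gl2.h>

/* Hypothetical helper standing in for the application's own draw call. */
void drawSphereImpostor(void) { /* app-specific: draw the textured quad */ }

/* Depth writes are per draw call, not per fragment: whichever way the
 * mask is set applies to the transparent (alpha=0) texels and the
 * opaque texels alike. */
void drawWithAlphaZeroHoles(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glDepthMask(GL_TRUE);   /* holes write depth too, occluding what's behind... */
    drawSphereImpostor();

    /* ...or glDepthMask(GL_FALSE): the opaque parts then don't write
     * depth, and anything drawn later can incorrectly paint over them. */
}
```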
 
dflemstr said:
So you can't pick pixels like in a ray tracer? You have to test enough y- or x-values until your image is of a high enough quality. How is this usable for a game, and how can you gain real-time performance?

Because I'm using LuxRender as we speak to render a very simple, but extremely realistic, image of a sphere, using the same technique of "firing photons". It has so far taken my quad-core computer with an OpenGL 3.1 graphics card about 13 hours to render the image, and I'm not even satisfied yet:


Can your system produce a similarly smooth but shadable surface (as Exophase asked whether you could), so that I would be able to, let's say, add per-pixel lighting and a shadow volume shader?
I'm not sure what you mean by picking pixels, but the performance is very good. The calculation for each line of the sphere is roughly as complex as the calculation for a single pixel using raycasting. The pixels that are drawn will have to undergo the same raycasting calculation to determine the depth/normals, but only as required. The depths can be approximated most of the time because, as JayFoxRox points out, intersections aren't too likely to occur.

I wrote the prototype on my TI-83+, interpreted in TI-BASIC, and it takes 3 seconds to draw, so I know it's fast. Compare that with the 2 seconds the built-in circle function takes to finish. At least calc84maniac should know what I'm talking about.

About LuxRender, is that using global illumination? My system should produce a similarly smooth image with anti-aliasing. My answer to Exophase's question was yes, even if I disguised it somewhat. And shadow volume shading? I don't know exactly how that works, so I can't answer right now. Making use of raytracing techniques ought to work, but could be slow.
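A rough C sketch of that per-line computation, under my own assumptions (the sphere projects to a circle of radius r centred at (cx, cy); span() is an invented callback that fills the pixels between the two edges):

```c
#include <math.h>

/* One sqrt per scanline: solving x^2 + y^2 = r^2 for x gives the pair
 * of x-values where the line enters and leaves the projected circle. */
void rasterizeCircle(float cx, float cy, float r,
                     void (*span)(int y, int xLeft, int xRight))
{
    int yTop    = (int)ceilf(cy - r);   /* the two solutions of the   */
    int yBottom = (int)floorf(cy + r);  /* first step: top and bottom */
    for (int y = yTop; y <= yBottom; y++) {
        float dy   = (float)y - cy;
        float half = sqrtf(fmaxf(0.0f, r * r - dy * dy));
        span(y, (int)ceilf(cx - half), (int)floorf(cx + half));
    }
}
```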
 
Exophase said:
What I don't understand is why discard seems to blow performance for the entire scene, even if it's only being used for a few fragments. I'd think that at worst it should only hurt the tiles it's used in.
I think it's because it screws with the pipelining. If you use discard, the tiling mechanism becomes useless for those pixels. Then it has to go back to the tiling phase and re-evaluate the tile using all of the geometry for that tile, except what was just discarded.

@ your previous comments:
You might be right about the near/far regions. I don't know.

What I was trying to say was: do the same thing in software that GPUs do (rasterization), except with different shapes, then pass the job off to the SGX. It might not be very efficient with the SGX, but it should be possible, like you said at the end. What I really wanted was to do all the rasterization in software and all the pixel shading in hardware.
 
Mr.Confuzed said:
About LuxRender, is that using global illumination?
LuxRender doesn't cheat with approximate lighting techniques; it literally fires photons at the objects you want to raytrace, generating 100% realistic and physically correct images. It doesn't really use rays, but just emits photons from light sources until the "camera" has received enough photons to draw an image. That's how I at first understood your algorithm to work, but as it looks now, that possibility is eliminated, and you either have to solve the equations and draw the ellipse perimeter, which can be really inefficient if you want smooth edges (because how short do you want to make your segments? etc.), or you have to develop a "pixel picking" process à la a fragment shader, where your program iterates through all of the pixels, and your algorithm gets a location (similar to gl_Position in GLSL2.0) and returns true/false for contained/not contained (analogous to gl_FragColor); see the sketch after this post.
Mr.Confuzed said:
And shadow volume shading?
For shadow volumes, you have to have vertices that define the shape of an object in order to calculate its shadow. Those vertices must be dense enough to produce a realistic shadow, but sparse enough to allow for a high frame rate... In the worst case, you will have to extrude every triangle (each primitive defined by 3 vertices) separately in order to generate the shadow volume, which takes ages to do (again, note: with a worst-case algorithm).

This doesn't seem possible with your algorithm, as long as you don't spit out a triangle fan or something that can be worked with in such a shader.
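A minimal C sketch of the "pixel picking" loop described above, with all names invented: the driver iterates every pixel, and the shape is reduced to a contained/not-contained predicate, analogous to a fragment shader.

```c
#include <stdbool.h>

/* The shape answers one question per location: contained or not,
 * the true/false analogue of gl_FragColor described above. */
typedef bool (*ContainsFn)(float x, float y, const void *shape);

void pickPixels(int width, int height, ContainsFn contains,
                const void *shape, unsigned char *mask /* width*height bytes */)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)  /* visit every pixel, as a GPU would */
            mask[y * width + x] = contains(x + 0.5f, y + 0.5f, shape);
}
```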
 
Mr.Confuzed said:
I think it's because it screws with the pipelining. If you use discard, the tiling mechanism becomes useless for those pixels. Then it has to go back to the tiling phase and re-evaluate the tile using all of the geometry for that tile, except what was just discarded.

This shouldn't be the case. Discarding pixels can't add primitives to the tile bin. While it's true that an entire tile may be occluded and thus not binned, I expect that such a thing would be turned off if whatever is occluding it has either alpha test or alpha blend turned on. This isn't as significant an optimization as it sounds, despite seeming like the major point of TBDR. In reality, the SGX has four times as many depth/stencil comparators as it has texture fetch units and shader units, so you have to have an average overdraw rate worse than 4 before the intra-tile occlusions become the bottleneck. And that's only for dead-simple single-cycle rendering, where there's one texture fetch that's shaded in one cycle, sustained. For anything more complex the chance of this bottleneck is substantially lower.

What this means is that when an alpha discard occurs, the tile renderer should just have to redo that pixel. In an MBX overview document this is described as "punch through" logic. What I would expect to happen is for all of the punch-throughs to end up on a list of pixels to redo, and for the tile to be redrawn from the start for those pixels (with the offending primitive removed from the bin). At worst it should take two tile renders, and only for tiles that have discarded pixels (unless the discards reveal more discards), but what I hear is that having alpha test on outright halves performance. It just doesn't sound right.

Mr.Confuzed said:
What I was trying to say, was do the same thing in software as GPUs do (rasterization), except with different shapes. Then pass the job off to the SGX. It might not be very efficient with the SGX, but it should be possible, like you said at the end. What I really wanted was to do all the rasterization in software and all the pixel shading in hardware.

And what information has to be passed along to do this shading? It sounds like you'll have to do a lot of texturing, but it could sorta work. Maybe with a float texture you could pass coordinates pretty easily, including depth.
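A sketch of that float-texture hand-off, assuming the GL_OES_texture_float extension is exposed (real code should check the extension string first; the function name is made up):

```c
#include <GLES2/gl2.h>

/* Upload per-pixel data (e.g. x, y, depth, one spare channel) as a float
 * RGBA texture. GL_FLOAT data in GLES2 requires GL_OES_texture_float,
 * and filtering float textures needs a further extension, hence NEAREST. */
GLuint uploadCoordTexture(int width, int height,
                          const float *data /* width*height*4 floats */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_FLOAT, data);
    return tex;
}
```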
 
Mr.Confuzed said:
You start with your shape equation (sphere).
:{2 + 2 = 4}

This equation provides two solutions: the top and bottom of the sphere projection. Then you repeatedly feed y-values into one of the previous equations to produce pairs of x-values. Voila! Instant rasterization! I hope that clears things up, lol.
As an artist, that to me was about as clear as mud. :p Hope you don't mind I clipped the math to save space.

Actually, if I look at it long enough, it starts to make a small, vague amount of sense, but not nearly enough for me to figure out what's happening visually.

I guess the real question becomes "will this be more efficient and render faster than the same shape made up of polygons? (this vs. enough polygons to look smooth + fancy shaders & textures?)"

As I mentioned in my previous post, I can see where this could be of use, but if it impacts performance I'd much rather suffer the visible polygon'd edges than lose a few fps. If you could get this to work on a fully skinned character, however, I'd love to see it. I'm gonna keep an eye on this thread to see where this goes. :)
 
Exophase said:
And what information has to be passed along to do this shading? It sounds like you'll have to do a lot of texturing, but it could sorta work. Maybe with a float texture you could pass coordinates pretty easily, including depth.
You'll need to pass the depth and probably some kind of pointer indicating which shader to run. The shaders might need some data too.
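One guess at what that per-pixel hand-off could look like in C; every field here is an assumption rather than something the posts specify:

```c
/* Hypothetical record the software rasterizer would emit for the
 * hardware shading pass; field choices are illustrative guesses. */
typedef struct {
    float depth;           /* computed (or approximated) while rasterizing */
    unsigned int shaderId; /* "some kind of pointer indicating which shader to run" */
    float nx, ny, nz;      /* extra per-shader data, e.g. a surface normal */
} ShadedPixel;
```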

@ -Tj-, the answer to the real question depends on your definition of 'enough polygons to look smooth'. I'm thinking it will turn out to have roughly the same efficiency. The advantage of my method is less vertex data and a more direct approach; the advantage of the usual method is hardware acceleration. If I'm right, you should end up with better/faster graphics, if only because you have two processors working on it. That is, unless you have something else to do with the DSP.

Well, I think I have the information I wanted. Now I guess I need to make a decent demonstration and test the efficiency. Don't expect anything, though; I have a terrible track record of getting bored with projects.
 