GP2X My Software Triangle Rasterizer


Trenki

Hi all!

Last weekend I took the time to code a class which can rasterize triangles. The code is kept quite general for maximum extensibility.
I used various pieces of code from the web as a basis, such as the code from devmaster, and filled in the missing parts.

- The rasterizer can support an arbitrary number of render targets (you will most likely use two: a color buffer and a depth buffer).
- It can be configured with a "PixelShader" which does the actual hard work of shading each pixel.
- It uses only integer math.
- It interpolates an arbitrary number of integer variables (this means you can use fixed point here) across the triangle. This is done perspective-correct at the corners of each 8x8 block and linearly within the block, which saves CPU cycles (see the sketch below).
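
To make the last point concrete, here is a hand-wavy sketch in my own notation (not the actual class interface) of stepping a single varying across one block:

CODE
// Illustration only: perspective-correct values are computed at the block
// corners (v00, v10, v01), then stepped linearly inside the 8x8 block, so the
// expensive 1/w divisions happen per block instead of per pixel.
void interpolate_block(int v00, int v10, int v01, int out[8][8])
{
    const int dvdx = (v10 - v00) >> 3;  // per-pixel step in x (block is 8 wide)
    const int dvdy = (v01 - v00) >> 3;  // per-row step in y
    int row = v00;
    for (int y = 0; y < 8; ++y) {
        int v = row;
        for (int x = 0; x < 8; ++x) {
            out[y][x] = v;              // fixed-point varying for this pixel
            v += dvdx;
        }
        row += dvdy;
    }
}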

The only problem I can see right now is with small triangles, or large but thin ones. Because the rasterizer is tile based it must scan at least a whole 8x8 block and test each pixel for whether it is inside the triangle or not.
Large triangles are handled quite efficiently since for the inner part only the block corners are tested for in/out.
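
A simplified illustration of that block-level test (invented names, not the actual code): if every corner passes every edge the block is filled without per-pixel tests, and if all corners fail a single edge the block is skipped.

CODE
enum BlockClass { FullyOutside, FullyInside, Partial };

// edge[e][k] holds the half-edge function value of edge e at block corner k.
BlockClass classify_block(const int edge[3][4])
{
    BlockClass result = FullyInside;
    for (int e = 0; e < 3; ++e) {
        int inside = 0;
        for (int k = 0; k < 4; ++k)
            if (edge[e][k] > 0) ++inside;
        if (inside == 0) return FullyOutside;  // trivially rejected
        if (inside != 4) result = Partial;     // needs per-pixel testing
    }
    return result;
}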
What do you think about this?

The code is ~500 lines of C++, so I won't post it here unless requested. If you would like to take a look at it, please email me at: trenki2 - at - gmx.net

I've now uploaded a vastly improved version of the renderer to my new domain www.trenki.net. Check it out if you are interested.
 
Hey, with something as useful as this, feel free to upload it to gp32x.de and let everyone download and improve the source (if applicable).

Have you made any timings yet?
 
Lazrhog said:
Hey, with something as useful as this, feel free to upload it to gp32x.de and let everyone download and improve the source (if applicable).

Have you made any timings yet?
Yes, I may do that, but first I want to add two features: a clipping rectangle and a means for the pixel shader to compute the mipmap level of detail (this requires derivatives of the varying variables).

I don't have timings yet because I didn't test it on the GP2X. And currently I also don't have any plans on testing it, because I don't want to install the devkit right now.
But I think it should be quite fast since it is integer only and the division by w is only done at the corners of the 8x8 blocks. It might even be faster than Chris Hecker's subdividing affine texture mapper, since that uses more divisions. At the very least mine is more complete, since it supports an arbitrary number of integer varyings + z.
 
Trenki said:
Hi all!

I worked on the code a bit more and now it has all the features I wanted to add. I posted the code over at gamedev!



One thing I don't understand, in both the original on devmaster and in yours, is the fixed-point deltas.

CODE
// Deltas
const int DX12 = v1.x - v2.x;
const int DX23 = v2.x - v3.x;
const int DX31 = v3.x - v1.x;

const int DY12 = v1.y - v2.y;
const int DY23 = v2.y - v3.y;
const int DY31 = v3.y - v1.y;

// Fixed-point deltas
const int FDX12 = DX12 << 4;
const int FDX23 = DX23 << 4;
const int FDX31 = DX31 << 4;

const int FDY12 = DY12 << 4;
const int FDY23 = DY23 << 4;
const int FDY31 = DY31 << 4;

v1.x and v2.x are already in 28.4 fixed point, giving DX12 also in 28.4 fixed point. Then you go and left-shift DX12 by another 4 bits, meaning that FDX12 is now in 24.8 fixed point, but later on you subtract FDX12 (in 24.8) from CY1 (in 28.4). You cannot just subtract two fixed-point values with different precisions without converting one to the same precision as the other.

Or am I going completely insane?
 
slygamer said:
One thing I don't understand, in both the original on devmaster and in yours, is the fixed-point deltas.

CODE
*** code snipped ***

v1.x and v2.x are already in 28.4 fixed point, giving DX12 also in 28.4 fixed point. Then you go and left-shift DX12 by another 4 bits, meaning that FDX12 is now in 24.8 fixed point, but later on you subtract FDX12 (in 24.8) from CY1 (in 28.4). You cannot just subtract two fixed-point values with different precisions without converting one to the same precision as the other.

Or am I going completely insane?
Good question; you really looked deep into the code.
The term "fixed-point deltas" might be a bit misleading since the plain deltas are in fixed point themselves, but the "fixed-point deltas" are converted to 24.8 by the shift because the C1, C2, C3, CY1, CY2, CY3 variables are also in 24.8 fixed point. This can be seen by looking at the following code:

CODE

// Half-edge constants
int C1 = DY12 * X1 - DX12 * Y1;
int C2 = DY23 * X2 - DX23 * Y2;
int C3 = DY31 * X3 - DX31 * Y3;



Since DY12 and X1 are both in 28.4, the multiplication yields a 24.8 fixed-point value.
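
A quick numeric check of that bookkeeping (illustration only):

CODE
int a = 3 << 4;   // 3.0 in 28.4 (stored as 48)
int b = 2 << 4;   // 2.0 in 28.4 (stored as 32)
int c = a * b;    // 48 * 32 = 1536 == 6 << 8, i.e. 6.0 in 24.8
                  // the fractional bits add up: 4 + 4 = 8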
 
Ahh, OK. That makes sense now. I just didn't look deep enough. :)

That's why using 'int' directly for a fixed-point variable is so confusing: it is sometimes not clear what precision it is using. I am making my own fixed-point class and converting the original devmaster code to use it, primarily for my own satisfaction. I'm also going to try adding texture support. Should be interesting.
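
A bare-bones sketch of how such a class could look (my own illustration, not the actual class): carrying the fractional bit count in the type makes the precision explicit and lets the compiler reject mixed-precision arithmetic.

CODE
template <int FracBits>
struct Fixed {
    int raw;
    explicit Fixed(int r = 0) : raw(r) {}
    Fixed operator+(Fixed o) const { return Fixed(raw + o.raw); }  // same precision only
    Fixed operator-(Fixed o) const { return Fixed(raw - o.raw); }
};

// Multiplying two 28.4 values yields 24.8: the fractional bits add.
inline Fixed<8> operator*(Fixed<4> a, Fixed<4> b)
{
    return Fixed<8>(a.raw * b.raw);
}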
 
I now compiled a simple test application featuring a single cube with gouraud shading. I get 19-23 fps on the GP2X. Now in my eyes this seems terribly slow for a single cube which only covers roughly 1/9 of the screen area (and this is compiled with -O3).
I wanted to look at the profile and tried to compile and link with -pg -lgmon. The first problem was that gmon could not be found. Compiling with -pg alone produced an executable, but it immediately segfaulted, so no profile this time.

At this point I am rather discouraged about continuing my work, since this minimal test already shows terrible performance.
 
Which toolchain are you using? If you aren't using the HW accelerated SDL (assuming you are using SDL at all), you could be in a world of pain. 3D is possible I think, it just takes some tricky programming.
 
I used devkitGP2X to compile and the hardware accelerated SDL with 21MB surface memory from the link in this forum. I just copied the libSDL.a file into C:\devkitGP2X\lib, replacing the old 1MB file with the new ~7MB file. So SDL should be set up correctly.

I don't have a clue why it is so terribly slow. Quake1 runs smoothly, and it is a complete game, not just a test program.
 
I've been playing with the rasteriser code using my own direct-to-hardware framework, i.e. not SDL. I've made several improvements to the original code, such as writing two pixels at a time (it's quicker to write one unsigned int than two unsigned shorts), unrolling loops, and looping from high-1 down to zero, but they do not seem to have made much difference in my simple test app. Perhaps when I start rendering hundreds of triangles each frame I will notice a difference.
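
The two-pixels-at-a-time trick could look roughly like this (hypothetical helper, not my actual code; assumes the destination is 4-byte aligned):

CODE
void fill_span(unsigned short *dst, unsigned short color, int count)
{
    unsigned int pair = ((unsigned int)color << 16) | color;  // two pixels packed
    unsigned int *dst32 = (unsigned int *)dst;
    for (int i = count >> 1; i > 0; --i)  // counting down toward zero
        *dst32++ = pair;
    if (count & 1)                        // odd trailing pixel
        dst[count - 1] = color;
}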

My next step was to try adding vertex colours and/or textures. Then move the rasteriser to the 940T. Should be fun. :)
 
Other tricks you might want to consider, extracted from gpu940:

- use the stm instruction to write several pixels at once (faster writes);
- don't divide (linear interpolation is OK for most polys);
- if you have to divide, cache some results (some, like DX/DY, are reused from poly to poly; see the sketch below);
- keep your instruction count minimal, even after a write/read no cycle is free;
- when multiplying, be aware of the early termination: order the registers right;
- consider generating your rendering loop according to how many parameters you really need.
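
A hypothetical sketch of the division-caching tip (the names and the map are mine, not how gpu940 actually does it):

CODE
#include <map>
#include <utility>

struct SlopeCache {
    std::map<std::pair<int, int>, int> seen;  // (DX, DY) -> slope in 16.16

    int slope(int dx, int dy)  // dy must be non-zero
    {
        std::pair<int, int> key(dx, dy);
        std::map<std::pair<int, int>, int>::iterator it = seen.find(key);
        if (it != seen.end())
            return it->second;                      // cache hit: no division
        int s = (int)(((long long)dx << 16) / dy);  // 16.16 fixed-point divide
        seen[key] = s;
        return s;
    }
};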

Happy hacking!
 
@slygamer: Maybe you should consider using both CPUs for rendering. One for the top and the other for the bottom half, or one for the left and one for the right half.

@hmw: Are you also doing tiled rendering in Vincent? And how are you calculating the texture level of detail?
A scanline based renderer which interpolates the values along the edges might be better suited for the GP2X, but then I have the problem of calculating the mipmap level because I don't have the derivatives of the texture coordinates!?

@rixed: Maybe I should consider a complete rewrite and only do linear interpolation. I could then add code to detect if the distortion caused by linear interpolation is greater than a threshold value and, in that case, subdivide the triangle and render the smaller ones, which should have less distortion.
 
Trenki said:
@hmw: Are you also doing tiled rendering in Vincent? And how are you calculating the texture level of detail?
A scanline based renderer which interpolates the values along the edges might be better suited for the GP2X, but then I have the problem of calculating the mipmap level because I don't have the derivatives of the texture coordinates!?
Yes, Vincent also uses the block-based approach; by default it uses 4x4 tiles. If you look into the source code linked from my post above, there is a comment "perform Mipmap calculation".

I don't think the interpolation is the problem; that's actually the main advantage of this approach. Pixel to pixel within a block it's incrementing the current values in either case, and the delta can be computed quickly using a right shift. However, evaluating the three inequalities is somewhat more expensive than just doing a scan based on edge buffers. So combining the two approaches will probably yield the best results.

As rixed pointed out, you can gain a lot in either case if you start dropping quality requirements (no perspective-correct interpolation, re-use of deltas etc.). However, as a poster pointed out in your other thread on devmaster, ARM is significantly limited by memory bandwidth, so aggregating loads and stores etc. has less impact than you'd expect; for more complex scenes, you'd probably gain more from tiling the screen and scene appropriately, so that texture, depth and color buffers fit into your data cache, than from anything else when building a renderer. Similarly, you have to be careful with unrolling loops, because you need to make sure that the whole rendering pipeline still fits into the instruction cache, and the instruction density is pretty low. This will be particularly important if you consider moving the code to the 940. But rixed probably has more experience in this regard.

- HM
 
I was now able to speed things up considerably. The bottleneck was the (int64) division performed by the solve_plane and fixdiv<16> functions. W is now computed by inverting 1/w using only a 32bit unsigned division.
For the other parameters I compute step values from the plane equation once and reuse them. So I only need roughly 6 adds and 2 multiplies for all 4 corners, instead of the solve_plane which did 3 adds, 2 multiplies and 1 division (all in 64bit) for each parameter.
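
A rough sketch of the idea in my own notation (not the actual code): for a parameter with plane equation A(x,y) = a*x + b*y + c, the block-size steps a*S and b*S are precomputed once per triangle, leaving roughly 2 multiplies and 6 adds per block for all four corner values.

CODE
struct Plane { int a, b, c; };  // fixed-point plane coefficients

void corner_values(const Plane &p, int x, int y, int stepX, int stepY, int out[4])
{
    // stepX = p.a * blockSize, stepY = p.b * blockSize (precomputed per triangle)
    int tl = p.a * x + p.b * y + p.c;  // 2 multiplies, 2 adds
    out[0] = tl;                       // top-left
    out[1] = tl + stepX;               // top-right    (1 add)
    out[2] = tl + stepY;               // bottom-left  (1 add)
    out[3] = tl + stepX + stepY;       // bottom-right (2 adds)
}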

Previously I got 20-27 fps, now I get at least 27-35 fps on the GP2X! I tried different block sizes. On the GP2X, 4x4, 8x8 and 16x16 didn't show any big difference, but on the PC 8x8 performed far better than the other two at the same resolution (320x240).

Profiling on the PC also revealed that the division was a major bottleneck. There the __divdi3 function consumed 12%-16% of the runtime. Now only 0.6% is left.

So the perspective correction isn't as bad as I previously thought. Still, a scanline based renderer which does only linear interpolation might be faster. At this point I can only guess how much faster it might be and whether it is worth implementing right now or not.
And there is also the problem of depth testing. With a scanline based rasterizer you have no simple way of doing an early depth rejection. Using blocks, you can avoid interpolating all the parameters if you find the block to be completely hidden. I think what performs better strongly depends on the scene's depth complexity. Block based might be better for high depth complexity (e.g. in indoor scenes) and scanline based for scenes with low depth complexity (e.g. a space shooter).

So now I am at a point where I can't decide which route to go next.
 
I did some more testing and compared my rasterizer drawing a fullscreen quad to a memset. Memset gives 80 fps, 2x memset 40 fps, and my rasterizer 20 fps. The rasterizer always interpolates z, so writing z is for free, I think.

I compared this also to the base code from devmaster, which only fills the triangle with a constant color, and that also gives 80 fps. So I think I will write a rasterizer which does not use perspective correction.

What I still don't know is whether I should take the block based approach or go the scanline based way.
 
I'm posting an update again in the hope that somebody is actually reading this.

I was again able to speed things up by manually unrolling the loop which steps the varyings and the loop which steps over the render targets. Now the same cube demo runs at a constant 40 fps, which is twice as fast as my first implementation.

I also used some C++ template magic and member function pointers. Now the vertex and fragment shaders are ordinary functions which I specify as a template parameter, so they will be inlined! On the PC this gave an additional 25% speed gain.
On the GP2X it didn't help at all. The demo still runs at a constant 40 fps, with occasional spikes to 50, 60 or even 78 fps!? I don't know how to explain those.
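
A minimal illustration of the template trick (invented names, not my actual interface): the fragment shader is passed as a template parameter, so the per-pixel call is statically bound and can be inlined, unlike a virtual function call.

CODE
struct FlatShader {
    static void shade(unsigned short *pixel)
    {
        *pixel = 0xF800;  // plain red in RGB565
    }
};

template <typename FragmentShader>
void draw_span(unsigned short *dst, int count)
{
    for (int i = 0; i < count; ++i)
        FragmentShader::shade(dst + i);  // resolved at compile time
}

// usage: draw_span<FlatShader>(row, 320);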

The only problem with the template functions is that I need to define them in the header files together with all the needed support functions. This pollutes the namespace quite a bit. Can anyone suggest an elegant solution to this problem?

Update: I found out that the hardware accelerated SDL waits for vsync. Changing the LCD settings in the gp2xmenu could therefore affect the framerate. Now I create the screen surface without a double buffer, render into another SDL surface, and do a blit before calling SDL_Flip. This made things even faster. The version which uses virtual functions gets ~54 fps, the templatized version 65-75 fps. So now I'm happy! B)
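
The setup just described could look roughly like this with the plain SDL 1.2 API (sketch only, error checking omitted):

CODE
#include <SDL/SDL.h>

int main(int, char **)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Single-buffered screen surface plus an offscreen software surface.
    SDL_Surface *screen = SDL_SetVideoMode(320, 240, 16, SDL_SWSURFACE);
    SDL_Surface *back = SDL_CreateRGBSurface(SDL_SWSURFACE, 320, 240, 16,
                                             0, 0, 0, 0);

    // ... render the frame into back->pixels here ...

    SDL_BlitSurface(back, 0, screen, 0);  // copy to the visible surface
    SDL_Flip(screen);                     // present

    SDL_FreeSurface(back);
    SDL_Quit();
    return 0;
}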
 
Trenki said:
What I still don't know is whether I should take the block based approach or go the scanline based way.
A block based renderer may be affected by the fact that memory writes are slower when the addresses are far apart (when not in the same "bank", I would say).

Refer to this thread on that topic:

http://www.gp32x.de/board/index.php?showtopic=32088

Requires testing to be sure, though.
 