You claim that putting the fragment shader directly into the affine_span function makes a huge speed difference. I find this weird, as when the fragment shader function is declared inline the compiler will take care of the inlining itself. Nevertheless, I tried it with a simple shader which does texture mapping...
Hi!
It's been a while since I posted my last tutorial and no one had a good idea for a follow-up tutorial, but I thought I could let you know about some advanced techniques that can be implemented in my software renderer.
Controlling perspective correction and interlacing
The rasterizer class has...
Python on the GP2X? I doubt that will be fast. Porting to the 940 would free the 920 but potentially incur a performance penalty when texturing. Porting it to the 940 is also very non-trivial (even with the cmd940 framework). You would not be able to specify shaders in the same way. A small layer...
@Exophase: Doing 16-bit writes instead of 32-bit writes in the screen clear function makes my function take twice as long (1.6 ms).
Whether 32-bit writes will be faster heavily depends on the loop itself, though. If you simply write a fixed value and do not do anything else in the loop it will...
How many milliseconds would a blitter clear of a 320x240 pixel area in 16-bit color take?
A simple for loop with 32-bit writes (and not 16-bit!) over the screen area takes me ~0.8 ms (not too slow, I think). Is the blitter faster, and if so, by how much?
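Something along these lines (a minimal sketch, assuming a 4-byte-aligned 320x240 16-bit framebuffer; the names are placeholders):

```cpp
#include <cstddef>
#include <cstdint>

// Clear a 320x240 16-bit framebuffer using 32-bit writes,
// i.e. two pixels per store. Assumes the buffer is 4-byte aligned.
void clear_screen32(uint16_t* framebuffer, uint16_t color)
{
    const uint32_t two_pixels = (uint32_t(color) << 16) | color;
    uint32_t* dst = reinterpret_cast<uint32_t*>(framebuffer);
    // 320 * 240 pixels / 2 = 38400 32-bit words
    for (std::size_t i = 0; i < 320 * 240 / 2; ++i)
        dst[i] = two_pixels;
}
```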
Weird. There should not be such a big fps difference between gcc 4.1.2 and 4.2.1. Is the demo compiled as a 64-bit or a 32-bit application? I don't know what the default for gcc on a 64-bit system is, but I think you can toggle this with -m32 or -m64. The SDL_BlitSurface and SDL_FillRect calls should...
@efegea: I don't know what you are doing exactly, but on Windows there should not be any vsync at all (at least not for my cow demo, which you use as base code). What compiler are you using, by the way? When I compile my cow demo with VC++ in Release mode I get 73-75 fps on Windows Vista AMD...
Oh ok, SDL_Flip waits for vsync. That was the bug. I have to use SDL_UpdateRect, I think...
EDIT: It's weird, I changed it and the framerate is still the same.
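For anyone comparing the two SDL 1.2 calls, the difference is roughly this (assuming `screen` is the surface returned by SDL_SetVideoMode):

```cpp
#include <SDL/SDL.h>

void present(SDL_Surface* screen)
{
    // On a double-buffered hardware surface SDL_Flip may wait for
    // vertical retrace; on a software surface it just copies the
    // whole buffer to the screen.
    // SDL_Flip(screen);

    // SDL_UpdateRect with all zeros updates the whole surface
    // without any vsync wait.
    SDL_UpdateRect(screen, 0, 0, 0, 0);
}
```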
SDL (at least on Windows) does not give you a hardware surface, so I don't know if it even does vsync with software surfaces (SDL_SWSURFACE). I circumvented...
Nice video! 75 fps seems too low on an AMD64 3400, though, as the gp2x is 10-15 times slower. Did you compile with full optimization? How many triangles does the model have? If you want good performance on the gp2x you should get at least 200-250 fps on your system.
Have you confirmed that putting the fragment shader source directly into the affine_span function actually makes things faster? I doubt it does. In the last source fragment you posted you had commented out the depth test, which of course would have made it faster but also would not have given...
1. The Cal3D model probably has the texture coordinates stored in a way that suits OpenGL, with (0, 0) at the bottom-left corner, while "normal" images are stored top-down and thus have (0, 0) in the upper left. The simplest way to fix this is to flip the t (y) texture coordinate: y' = 1 - y. This...
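A minimal sketch of that fix (the Vertex layout here is made up for illustration, not Cal3D's actual format):

```cpp
// Flip the t (v) texture coordinate of every vertex so that a
// top-down image matches OpenGL-style bottom-left (0, 0) UVs.
struct Vertex { float x, y, z, s, t; };  // hypothetical layout

void flip_tex_coords(Vertex* vertices, int count)
{
    for (int i = 0; i < count; ++i)
        vertices[i].t = 1.0f - vertices[i].t;
}
```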
Yeah, compile options are very important. I always use -O3 and -funroll-loops, and I noticed the open2x devkit with gcc 4.1.1 generates better code than devkitGP2X on Windows with gcc 4.0.2.
The out.w coordinate is the fourth coordinate of a homogeneous point in 3D. It is normally computed when you do...
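Roughly like this, assuming a row-major 4x4 matrix; the vec4/mat4 types here are placeholders, not my renderer's actual classes:

```cpp
// w appears when a 3D point (x, y, z) is promoted to homogeneous
// coordinates (x, y, z, 1) and multiplied by a 4x4 projection matrix.
// For a standard OpenGL-style perspective matrix the bottom row is
// (0, 0, -1, 0), so out.w = -z_eye, which the rasterizer later
// divides by for perspective correction.
struct vec4 { float x, y, z, w; };
struct mat4 { float m[4][4]; };  // row-major

vec4 transform(const mat4& p, const vec4& v)
{
    vec4 out;
    out.x = p.m[0][0]*v.x + p.m[0][1]*v.y + p.m[0][2]*v.z + p.m[0][3]*v.w;
    out.y = p.m[1][0]*v.x + p.m[1][1]*v.y + p.m[1][2]*v.z + p.m[1][3]*v.w;
    out.z = p.m[2][0]*v.x + p.m[2][1]*v.y + p.m[2][2]*v.z + p.m[2][3]*v.w;
    out.w = p.m[3][0]*v.x + p.m[3][1]*v.y + p.m[3][2]*v.z + p.m[3][3]*v.w;
    return out;
}
```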
Interesting. This should normally not be the case, since the compiler would inline the single_fragment function. Was the single_fragment function declared inline and/or defined inside the fragment shader class? How exactly did you move the function directly into affine_span? Are you sure you...
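For illustration, this is the shape I mean (a hypothetical sketch; only the single_fragment/affine_span names come from this discussion, the rest is made up):

```cpp
#include <cstdint>

// Because single_fragment is defined inside the class body it is
// implicitly inline, so the compiler should be able to fold it into
// affine_span without manually copy-pasting the shader source there.
struct textured_shader {
    const uint16_t* texture;
    int tex_width;

    uint16_t single_fragment(int u, int v) const  // implicitly inline
    {
        return texture[v * tex_width + u];
    }
};
```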
Hi!
As I promised, here comes the texture mapping tutorial. It will show you how to texture map a single fullscreen quad. Naturally, this can be extended to texture map arbitrary geometry.
The first tutorial already showed the basics of how to use my software renderer, so I will only explain...
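To give a taste, the inner loop of affine texture mapping boils down to something like this (a sketch with made-up names, not the renderer's real interface; for a fullscreen quad w is constant across the span, so affine interpolation is exact):

```cpp
#include <cstdint>

// Texture-map one horizontal span. 16.16 fixed-point UVs are
// stepped once per pixel and used for a nearest-neighbor lookup.
void texture_span(uint16_t* dst, int count,
                  const uint16_t* tex, int tex_w,
                  int32_t u, int32_t v,        // 16.16 fixed point
                  int32_t du, int32_t dv)      // per-pixel steps
{
    for (int i = 0; i < count; ++i) {
        dst[i] = tex[(v >> 16) * tex_w + (u >> 16)];
        u += du;
        v += dv;
    }
}
```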
Well, yes, you have to know more details when you are going to use shaders. This is no different from DX or OpenGL 2 with shaders, but shaders give you all the power and control you can get. DX10, for instance, is shaders-only, and new engines on the PC are also shaders-only. GL3 will also be shaders...
Hello guys!
This post will be a short intermission before I do a texturing tutorial; it will show you some nice features of my renderer which you may be able to exploit if you have some creative ideas.
The last tutorial showed how to render a simple triangle where the RGB colors were...
Yes, you would need model and projection matrices to transform the incoming vertices into clip space. If you want to see how this is done, take a look at the cow demo from my homepage.
The vertex and pixel shaders may be a bit hard to grasp in the beginning, but after you get how they work it...
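In a nutshell, the vertex shader side looks something like this (a self-contained sketch with placeholder types, not the actual cow demo code):

```cpp
struct vec4 { float v[4]; };
struct mat4 { float m[4][4]; };  // row-major

// 4x4 matrix times homogeneous point.
static vec4 mul(const mat4& a, const vec4& p)
{
    vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r.v[i] += a.m[i][j] * p.v[j];
    return r;
}

// Object space -> eye space -> clip space: roughly what a vertex
// shader has to produce for each incoming vertex.
vec4 to_clip_space(const mat4& projection, const mat4& modelview,
                   const vec4& position_object)
{
    return mul(projection, mul(modelview, position_object));
}
```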
No, mine does not have an "immediate mode", but you can simply build a list of the vertices you want to send and a dummy index buffer with (1, 2, 3, 4, ...) in it.
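A quick sketch (assuming 0-based indices; the helper name is made up):

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Emulating an "immediate mode": pair the collected vertices with a
// sequential dummy index buffer so each vertex is used exactly once.
std::vector<unsigned> make_dummy_indices(std::size_t vertex_count)
{
    std::vector<unsigned> indices(vertex_count);
    std::iota(indices.begin(), indices.end(), 0u);  // 0, 1, 2, 3, ...
    return indices;
}
```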
Hi guys!
I have been developing my software renderer for quite some time now and also had two threads about it on this board. In the last couple of months I optimized and refactored the renderer quite a lot, but now the interface should be mostly stable.
I thought about the possibility of...
Finally got it to work on Fedora 8 running in VMWare. Unfortunately, Vista was preinstalled on my laptop and I don't intend to replace it (Acer drivers are only available for Vista). At least I can try out DX10 coding this way.