Trenki
Member
Hi guys!
I have been developing my software renderer for quite some time now and also had two threads about it on this board. In the last couple of months I optimized and refactored the renderer quite a lot, but now the interface should be mostly stable.
I thought about the possibility of organizing a 3D game/demo contest for it to boost its popularity, but I don't know if that will ever happen. In any case I thought it would be good to provide some tutorial material, so that it would be easier for new users to actually use my renderer for something good.
So here on this forum I will present a first, very simple tutorial which shows how to use my software renderer.
But first I will have to brag about the features my software renderer supports:
- Written in pure C++, so it should compile without problems on many platforms. For me it works on the GP2X and on the PC without any code changes. I also tried to compile it for the 940 once and the compiler didn't complain.
- The renderer is very modular and provides many places where you can customize its behavior by simply using different classes and/or overriding functions. Where possible compile time polymorphism is used for this, so there is no virtual function call overhead in these cases.
- The source code is very compact (only ~2200 lines of C++ code including comments).
- It only uses fixed point arithmetic internally for maximum speed (see the short sketch after this list).
- You can implement nearly any imaginable effect, since it supports vertex and pixel shaders written in C++ and the engine can interpolate an arbitrary number of varying variables per triangle (color, texcoords, etc.). You could easily do per pixel lighting or use multiple render targets if you wanted. The shaders are actually the most powerful feature of the renderer since they allow you to implement your own specialized rasterizers.
- There are some optimization functions which let you tune the perspective correction quality or turn it off entirely, and it is also possible to do progressive interlacing which can cut the per pixel cost per frame in half.
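Since everything inside the pipeline works with 16.16 fixed point values, here is a minimal sketch of how that format behaves. These helpers are not part of the renderer's API, just an illustration of the number format used throughout the rest of this tutorial.
CODE
// Minimal illustration of the 16.16 fixed point format the renderer uses
// internally. These helpers are NOT part of the renderer's API.
#include <cstdio>

typedef int fixed16_16; // 16 integer bits, 16 fractional bits

inline fixed16_16 float_to_fixed(float f) { return static_cast<fixed16_16>(f * (1 << 16)); }
inline float fixed_to_float(fixed16_16 x) { return x / 65536.0f; }

// Multiplication needs a 64 bit intermediate or the result overflows.
inline fixed16_16 fixed_mul(fixed16_16 a, fixed16_16 b)
{
    return static_cast<fixed16_16>((static_cast<long long>(a) * b) >> 16);
}

int main()
{
    fixed16_16 half = float_to_fixed(0.5f); // 0x00008000
    fixed16_16 three = 3 << 16;             // integers are simply shifted left
    std::printf("0.5 * 3 = %f\n", fixed_to_float(fixed_mul(half, three))); // prints 1.500000
    return 0;
}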
I will be using SDL together with my renderer, but this is not a requirement; the renderer is completely independent of it. We also have to include some headers for the software renderer. The whole API of the renderer is located in the swr namespace.
CODE
#include "renderer/geometry_processor.h"
#include "renderer/rasterizer_subdivaffine.h"
#include "renderer/span.h"
// the software renderer stuff is located in the namespace "swr" so include
// that here
using namespace swr;
Next I will show you the application's main function, which shows how to draw a single colored triangle. After that I will fill in the blanks and show you how the vertex and fragment shaders have to be written for the final application to actually compile.
First we simply initialize SDL with a 16 bit color buffer.
CODE
int main(int ac, char *av[]) {
// Initialize SDL without any error handling at all
SDL_Init(SDL_INIT_VIDEO);
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16, 0);
Next we define the three vertices of our triangle together with their colors. I still use floats in this minimal example, but in a real application you would use fixed point values to specify the vertex coordinates. You will see later how the Vertex structure actually looks.
CODE
// The three vertices of the triangle and the colors
Vertex vertices[] = {
{0.0f, 0.5f, 255, 0, 0},
{-.5f, -.5f, 0, 255, 0},
{0.5f, -.5f, 0, 0, 255},
};
We also need indices for our triangle.
CODE
// The indices we need for rendering
unsigned indices[] = {0, 1, 2};
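As a side note, indices become really useful as soon as triangles share vertices. The following little sketch (it is not part of the tutorial program) shows how a quad could be described with four vertices and six indices; it assumes the same Vertex structure and the draw_triangles call that are introduced further below.
CODE
// Hypothetical extension: a quad built from two triangles that share vertices.
// The winding matches the tutorial triangle so nothing gets culled.
Vertex quad_vertices[] = {
    {-0.5f,  0.5f, 255, 0, 0},   // 0: top left
    {-0.5f, -0.5f, 0, 255, 0},   // 1: bottom left
    { 0.5f, -0.5f, 0, 0, 255},   // 2: bottom right
    { 0.5f,  0.5f, 255, 255, 0}, // 3: top right
};

// Two triangles sharing the vertices 0 and 2.
unsigned quad_indices[] = {0, 1, 2,  0, 2, 3};

// Later, after the pipeline has been set up, one would call:
// g.vertex_attrib_pointer(0, sizeof(Vertex), quad_vertices);
// g.draw_triangles(6, quad_indices);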
After this we initialize the renderer, so that we can actually use it. This involves creating a Rasterizer subclass and a GeometryProcessor class which will be configured with the rasterizer. You always have to set the viewport and the clipping rectangle to valid regions.
CODE
// Create a rasterizer class that will be used to rasterize primitives
RasterizerSubdivAffine r;
// Create a geometry processor class used to feed vertex data.
GeometryProcessor g(&r);
// It is necessary to set the viewport
g.viewport(0, 0, screen->w, screen->h);
// Set the cull mode (CW is already the default mode)
g.cull_mode(GeometryProcessor::CULL_CW);
// It is also necessary to set the clipping rectangle
r.clip_rect(0, 0, screen->w, screen->h);
Before actually being able to render anything, the vertex and fragment shaders have to be set. The following code does this. As you can see, you set the vertex shader on the GeometryProcessor while you set the fragment shader on the Rasterizer subclass.
CODE
// Set the vertex and fragment shaders
g.vertex_shader<VertexShader>();
r.fragment_shader<FragmentShader>();
Next we have to tell the GeometryProcessor where our vertex data lies in memory and how it is laid out (just like in OpenGL with glVertexPointer). You can have more than just one vertex attribute pointer, but in this example a single one is enough.
CODE
// Specify where our data lies in memory
g.vertex_attrib_pointer(0, sizeof(Vertex), vertices);
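If you preferred to keep positions and colors in separate arrays, you could bind one attribute pointer per stream. The following is only a sketch of how I would expect that to look; the attribute index 1 for the second stream and the matching shader changes are assumptions on my part, not code from the example pack.
CODE
// Hypothetical alternative layout: positions and colors in separate arrays.
// The use of attribute index 1 for the second stream is an assumption.
struct Position { float x, y; };
struct Color { int r, g, b; };

Position positions[] = { {0.0f, 0.5f}, {-0.5f, -0.5f}, {0.5f, -0.5f} };
Color colors[] = { {255, 0, 0}, {0, 255, 0}, {0, 0, 255} };

g.vertex_attrib_pointer(0, sizeof(Position), positions);
g.vertex_attrib_pointer(1, sizeof(Color), colors);

// The vertex shader would then set attribute_count = 2 and read the position
// from in[0] and the color from in[1] inside its shade function.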
The rest of the main function simply calls the draw_triangles function, passing the indices, and then displays the result on the screen.
CODE
// draw the triangle
g.draw_triangles(3, indices);
// Show everything on screen
SDL_Flip(SDL_GetVideoSurface());
// Wait for the user closing the application
SDL_Event e;
while (SDL_WaitEvent(&e) && e.type != SDL_QUIT);
// Quit SDL
SDL_Quit();
return 0;
}
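The tutorial draws a single frame and then just waits for the quit event. For anything animated you would wrap the drawing in a loop. The following sketch shows one way this could look with SDL 1.2; it is only a suggestion and not part of the example pack.
CODE
// Hypothetical main loop variant: redraw every frame until the user quits.
bool running = true;
while (running) {
    SDL_Event e;
    while (SDL_PollEvent(&e))
        if (e.type == SDL_QUIT) running = false;

    // Clear the 16 bit color buffer to black before drawing.
    SDL_FillRect(screen, 0, 0);

    // Update the vertex data here if you want animation, then draw.
    g.draw_triangles(3, indices);

    SDL_Flip(screen);
}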
That was all for the main function. Most of the work was the initialization, but that is normally only required once in an application. Next we will have a look at the Vertex structure and the VertexShader and FragmentShader classes.
The Vertex structure is a POD and just holds the data we associate with a single vertex.
CODE
// Our vertex structure which will be used to store our triangle data.
struct Vertex {
float x, y;
int r, g, b;
};
The vertex shader is a bit more complex. You can see the source for the whole vertex shader below. There are two static const fields which tell the renderer how many attribute streams the vertex shader will use and how many varyings (OpenGL term) it will output into the pipeline. If you output only a 2D texture coordinate, that value would have to be set to 2. This value is used by the clipping stage to do interpolation if necessary, and if you set the wrong value you might get strange artifacts when clipping is done. The only thing left is the static shade function in the vertex shader. It takes the vertex input (which is an array of attribute stream pointers) and has to write to the out variable. The stream pointers have void* as their type, so they need to be cast to the correct type before they can be used.
The shader always has to write the x, y, z and w variables in the out structure. These are integers and interpreted as 16.16 fixed point numbers. Therefore the float values from the input structure are converted into fixed point before their value is assigned to the out variables.
In this example we interpolate three color values (r, g, b). The values in the vertex structure are in the [0,255] range. These values are shifted left by 16 bits before being written to the output structure since this improves the accuracy with which they are interpolated (the lower bits are not interpolated very precisely).
CODE
// This is the vertex shader which is executed for each individual vertex that
// needs to be processed.
struct VertexShader {
// This specifies that this shader is only going to use 1 vertex attribute
// array. Up to Renderer::MAX_ATTRIBUTES arrays can be used.
static const unsigned attribute_count = 1;
// This specifies the number of varyings the shader will output. This is
// for instance used when clipping.
static const unsigned varying_count = 3;
// This static function is called for each vertex to be processed.
// "in" is an array of void* pointers with the location of the individial
// vertex attributes. The "out" structure has to be written to.
static void shade(const GeometryProcessor::VertexInput in, GeometryProcessor::VertexOutput &out)
{
// cast the first attribute array to the input vertex type
Vertex &v = *static_cast<Vertex*>(in[0]);
// x, y, z and w are the components that must be written by the vertex
// shader. They all have to be specified in 16.16 fixed point format.
out.x = static_cast<int>((v.x * (1 << 16)));
out.y = static_cast<int>((v.y * (1 << 16)));
out.z = 0;
out.w = 1 << 16;
// The vertex output can have up to Rasterizer::MAX_VARYING varying
// parameters. These are just integer values which will be interpolated
// across the primitives. The higher bits of these integers will be
// interpolated more precisely, so the values [0, 255] are shifted left.
out.varyings[0] = v.r << 16;
out.varyings[1] = v.g << 16;
out.varyings[2] = v.b << 16;
}
};
The last thing to show to make this tutorial complete is the FragmentShader class. Below is the whole fragment shader. The fragment shader has to derive from a span drawer class (defined in renderer/span.h). The following example only shows a specialized span drawer which works on a 16 bit color and depth buffer. There is also a more generic span drawer which you can use with a slightly different interface (look at the full example on my homepage).
The static const values at the beginning tell the rasterizer how many varying variables the fragment shader will use and whether z should be interpolated or not. In our example we do not need a depth buffer, so we also do not need depth to be interpolated.
The begin_triangle function is a callback which will be called for each triangle to be rasterized. The example does not use it, but one could for instance compute the mipmap factor on a per triangle basis in this function. The function still has to be defined and needs to be static.
The next four static const bools define what the single_fragment function will be able to do. The names should be self explanatory.
The single_fragment function is the core of the FragmentShader class and will be called for each pixel. In this example it simply computes a color value and writes it to the color variable. If you had a depth buffer and wanted to do depth testing, you would have to implement it in this function (a hypothetical sketch of that follows after the class below).
In the fragment shader the interpolated color values are clamped to make sure the range is correct, since interpolation can make them go out of range even when they were in the proper range to begin with (it was that way the last time I checked; I would have to check again).
The last two *_pointer functions are the ones the specialized SpanDrawer16BitColorAndDepth requires. They simply return a pointer to the pixel at the given x, y coordinates in the color or depth buffer respectively.
CODE
// This is the fragment shader
struct FragmentShader : public SpanDrawer16BitColorAndDepth<FragmentShader> {
// varying_count = 3 tells the rasterizer that it only needs to interpolate
// three varying values (the r, g and b in this context).
static const unsigned varying_count = 3;
// We don't need to interpolate z in this example
static const bool interpolate_z = false;
// Per triangle callback. This could for instance be used to select the
// mipmap level of detail. We don't need it but it still needs to be defined
// for everything to work.
static void begin_triangle(
const IRasterizer::Vertex& v1,
const IRasterizer::Vertex& v2,
const IRasterizer::Vertex& v3,
int area2)
{}
static void single_fragment(const IRasterizer::FragmentData &fd, unsigned short &color, unsigned short &depth)
{
SDL_Surface *screen = SDL_GetVideoSurface();
// Convert from 16.16 color format to [0,255]
// Here the colors are clamped to the range[0,255]. If this is not done
// here we can get very small artifacts at the edges.
int r = std::min(std::max(fd.varyings[0] >> 16, 0), 255);
int g = std::min(std::max(fd.varyings[1] >> 16, 0), 255);
int b = std::min(std::max(fd.varyings[2] >> 16, 0), 255);
color = SDL_MapRGB(screen->format, r, g, b);
}
// this is called by the span drawing function to get the location of the color buffer
static void* color_pointer(int x, int y)
{
SDL_Surface *screen = SDL_GetVideoSurface();
return static_cast<unsigned short*>(screen->pixels) + x + y * screen->w;
}
// this is called by the span drawing function to get the location of the depth buffer
static void* depth_pointer(int x, int y)
{
// We don't use a depth buffer
return 0;
}
};
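To make the depth testing remark above a bit more concrete, here is a rough sketch of how single_fragment might look with a real 16 bit depth buffer. It assumes interpolate_z is set to true, that the interpolated depth is available as a 16.16 value (I call it fd.z here) and that depth_pointer returns a real buffer; none of this is code from the example pack.
CODE
// Hypothetical variant of single_fragment with a simple depth test. The name
// fd.z for the interpolated depth and the 16 bit scaling are assumptions.
static void single_fragment(const IRasterizer::FragmentData &fd, unsigned short &color, unsigned short &depth)
{
    // Take the upper 16 bits of the interpolated 16.16 depth value.
    unsigned short z = static_cast<unsigned short>(fd.z >> 16);

    // Discard the fragment if it lies behind what is already stored.
    if (z > depth) return;
    depth = z;

    SDL_Surface *screen = SDL_GetVideoSurface();
    int r = std::min(std::max(fd.varyings[0] >> 16, 0), 255);
    int g = std::min(std::max(fd.varyings[1] >> 16, 0), 255);
    int b = std::min(std::max(fd.varyings[2] >> 16, 0), 255);
    color = SDL_MapRGB(screen->format, r, g, b);
}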
Ok, this was the whole tutorial and should have shown you the very basics of my software renderer.
I would be happy to get some feedback on this, and I also encourage you to go to my website and download the renderer and the example pack. It contains the whole source of the above example and also shows the GenericSpanDrawer. There is also a demo which shows how to render a large (5800 triangles) model, and it runs at 19-20 fps. Finally, I have also put the improved source and executables of my GBAX 2007 demo on my website for you to check out.
Depending on the feedback I may do another tutorial, maybe showing how to do texturing or something else if someone has any ideas.
Trenki's Programming Page.