[announce] c64_tools (DSP loader and IPC)


@cloudef: First of all: I have not benchmarked the vertex shader performance of the SGX533 but I would assume that the vertex processor is not highly parallel and that processing is rather sequential. There is no dedicated VRAM to hold vertex buffers, so constantly updating the vertices via CPU or DSP should not have a big performance impact (unless the PVR does some costly datatype conversions..?).

Therefore, I could very well imagine that this would indeed be a good application for the DSP.

I have not used matrix palettes in OGLES but -correct me if I'm wrong- what basically needs to be done is to transform each vertex (and normal) by one or more matrices (bones), then add up the resulting vertices using per-bone/per-vertex weights, i.e. something along the lines of:


 outVtx = mat[0]*vtx*weight[0] + mat[1]*vtx*weight[1] .. + mat[n]*vtx*weight[n]
 outNorm = mat[0]*norm*weight[0] + mat[1]*norm*weight[1] .. + mat[n]*norm*weight[n]
 (with mat[] being a uniform and vtx / norm / weight[] being per-vertex attributes)

..right?
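
If it helps, here is that formula written out as plain C. This is just an illustrative sketch: the mat4/vec3 types and the skin() function are made up here, and the matrices are assumed to be row-major 4x4 affine transforms.

 /* Illustrative sketch of the weighted sums above (made-up types/names). */
 typedef struct { float m[4][4]; } mat4;
 typedef struct { float x, y, z; } vec3;

 void skin(const mat4 *mat, const float *weight, int n,
           vec3 vtx, vec3 norm, vec3 *outVtx, vec3 *outNorm) {
    vec3 ov = { 0, 0, 0 }, on = { 0, 0, 0 };
    int i;
    for(i = 0; i < n; i++) {
       const float (*m)[4] = mat[i].m;
       float w = weight[i];
       /* position: full affine transform (includes the translation column) */
       ov.x += w * (m[0][0]*vtx.x + m[0][1]*vtx.y + m[0][2]*vtx.z + m[0][3]);
       ov.y += w * (m[1][0]*vtx.x + m[1][1]*vtx.y + m[1][2]*vtx.z + m[1][3]);
       ov.z += w * (m[2][0]*vtx.x + m[2][1]*vtx.y + m[2][2]*vtx.z + m[2][3]);
       /* normal: rotation part only (no translation) */
       on.x += w * (m[0][0]*norm.x + m[0][1]*norm.y + m[0][2]*norm.z);
       on.y += w * (m[1][0]*norm.x + m[1][1]*norm.y + m[1][2]*norm.z);
       on.z += w * (m[2][0]*norm.x + m[2][1]*norm.y + m[2][2]*norm.z);
    }
    *outVtx = ov;
    *outNorm = on;
 }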

Needless to say: for good performance, you will have to use fixed point math (e.g. TI's IQmath library).
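
To illustrate what that means in practice, here is a minimal Q16.16 fixed-point sketch in plain C. The names are made up; TI's IQmath library provides this kind of operation as DSP-optimized routines.

 /* Minimal Q16.16 fixed-point sketch (made-up names; TI's IQmath
    library offers equivalent, DSP-optimized operations). */
 typedef int fx32;   /* Q16.16: 16 integer bits, 16 fractional bits */

 #define FX_ONE           (1 << 16)
 #define FX_FROM_FLOAT(f) ((fx32)((f) * 65536.0f))
 #define FX_TO_FLOAT(a)   ((a) / 65536.0f)

 static fx32 fx_mul(fx32 a, fx32 b) {
    /* widen to 64 bit so the intermediate product cannot overflow,
       then shift back down to Q16.16 */
    return (fx32)(((long long)a * (long long)b) >> 16);
 }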

Why don't you just give it a try (and report your findings) ?

For further optimizations it probably makes sense to copy the matrices to L1SRAM.
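
For example (an untested sketch): with the TI C6000 compiler, a scratch copy of the matrix palette could be placed in an L1SRAM section via #pragma DATA_SECTION. The ".l1dsram" section name and the MAX_BONES limit below are assumptions; the actual section name depends on the linker command file used for the DSP image.

 #include <string.h>

 #define MAX_BONES 32                        /* assumed upper limit */
 typedef struct { float m[4][4]; } mat4;     /* same layout as in the sketch above */

 /* ".l1dsram" is an assumed section name -- the actual name depends on
    the linker command file used for the DSP image. */
 #pragma DATA_SECTION(mat_l1, ".l1dsram")
 static mat4 mat_l1[MAX_BONES];

 /* Copy the palette once per batch so the per-vertex loop reads the
    matrices from fast L1 memory instead of external RAM. */
 void skin_set_matrices(const mat4 *mat, int numBones) {
    memcpy(mat_l1, mat, (size_t)numBones * sizeof(mat4));
 }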

@Steven Craft: "The per vertex part is presumably going to be fastest via a shader" -> this is definitely true for today's hardware.. but with the SGX533 I have my doubts. Using the shaders only for lighting (or whatever else needs to be done) could be beneficial. Maybe it even makes sense to do the lighting part on the DSP as well?!
 
Hi all,

I've adapted the c64_tools to a custom board based on the TI DM3730. All tests are successful, except for the performance (ARM running @ 800MHz):

Pandora:


 [...] selected testcase 3 ("TC_RPC_ADD_BENCHMARK")
 [...] coff_load_overlay: text=0x000005a0 data=0x0000006c bss=0x000010dc
 [...] coff_load_overlay: text=0x00000620 data=0x0000006c bss=0x00001114
 [...] coff_load_overlay: text=0x00000600 data=0x00000074 bss=0x00001110
 [...] starting benchmark
 [...] benchmark finished. => 100000 iterations in 2848 millisecs.

Custom board:


 [...] selected testcase 3 ("TC_RPC_ADD_BENCHMARK")
 [...] coff_load_overlay: text=0x000005a0 data=0x0000006c bss=0x00001118
 [...] coff_load_overlay: text=0x00000be0 data=0x00000090 bss=0x0000111c
 [...] coff_load_overlay: text=0x00000600 data=0x00000074 bss=0x00001118
 [...] starting benchmark
 [...] benchmark finished. => 100000 iterations in 39433 millisecs.

It seems the interrupt rate on Linux (3.0.x) is very bad. Any idea?

Thanks for your help, SHB
 
hmm... that's quite terrible indeed (more than 10x slower than on the Pandora!)

the RPC_ADD_BENCHMARK uses the DSP polling mode, which calls pthread_yield() every third loop iteration (in src/linux/osal/osal_linux.c:osal_yield()).

You could try commenting out that line, for testing purposes.

You could also try disabling the polling mode entirely (in tests/c64_tc.c:loc_tc_rpc_add_benchmark(), look for the line dsp_poll_enable(S_TRUE); and change that to S_FALSE).
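
Expressed as code, the two experiments look like this (excerpts based on the file/function names above, not new API):

 /* tests/c64_tc.c : loc_tc_rpc_add_benchmark() -- disable polling mode: */
 dsp_poll_enable(S_FALSE);   /* was: dsp_poll_enable(S_TRUE); */

 /* src/linux/osal/osal_linux.c : osal_yield() -- skip the yield, for testing: */
 /* pthread_yield(); */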
 
*Shameless Dream* If the DSP blitting could be used by ExaGear with Wine to emulate DirectDraw commands, which are slow on GPUs...
 
I have not used matrix palettes in OGLES but -correct me if I'm wrong- what basically needs to be done is to transform each vertex (and normal) by one or more matrices (bones), then add up the resulting vertices using per-bone/per-vertex weights, i.e. something along the lines of:


 outVtx = mat[0]*vtx*weight[0] + mat[1]*vtx*weight[1] .. + mat[n]*vtx*weight[n]
 outNorm = mat[0]*norm*weight[0] + mat[1]*norm*weight[1] .. + mat[n]*norm*weight[n]
 (with mat[] being a uniform and vtx / norm / weight[] being per-vertex attributes)


..right?
The complete processing to get an animated, skinned mesh on screen (per frame) is along the lines of:

  1. For each bone (starting at the root):
    Work out the translation/rotation/scale by interpolating between the key frame data (bearing in mind that each bone's translation/rotation/scale is stored relative to its parent).
  2. For each vertex:
    1. Transform into bone space (so if vertices are stored in object space, multiply the vertex by the inverse bone-to-object transform).
    2. Transform by each of the bone matrices.
    3. Mix the results (based on the weight for each bone).
    4. Transform back to object space.
Then the object-space vertex is processed 'as normal', so typically multiplied by a world/camera/viewport matrix. In terms of the OpenGL matrix palette stuff, I haven't actually used that; the above is what I have used either on consoles (like the PS2, where it is done on one of the vector unit processors) or on PC (where it is done in a vertex shader).
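
As an aside, steps 1 and 4 of the per-vertex part are often folded into the per-bone matrices once per frame, so the inner loop reduces to the weighted sum from the earlier sketch. A plain-C sketch under that assumption (all names made up, row-major 4x4 matrices assumed):

 typedef struct { float m[4][4]; } mat4;

 /* out = a * b, for row-major 4x4 matrices */
 static void mat4_mul(mat4 *out, const mat4 *a, const mat4 *b) {
    int r, c, k;
    for(r = 0; r < 4; r++)
       for(c = 0; c < 4; c++) {
          float s = 0.0f;
          for(k = 0; k < 4; k++)
             s += a->m[r][k] * b->m[k][c];
          out->m[r][c] = s;
       }
 }

 /* Fold "into bone space" (step 1) and "back to object space" (step 4)
    into a single skinning matrix per bone, once per frame. */
 void build_skin_matrices(mat4 *skinMat, const mat4 *boneToObject,
                          const mat4 *invBindPose, int numBones) {
    int b;
    for(b = 0; b < numBones; b++)
       mat4_mul(&skinMat[b], &boneToObject[b], &invBindPose[b]);
 }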

Anyway, as you have said yourself, the best way to find out the performance of all this stuff is to give it a test. Maybe at some point I'll have a chance to give it a go myself and do a comparison of standard CPU skinning, GPU skinning, DSP skinning and NEON-optimized CPU skinning. Find out which way is fastest!
 