sparrow3D - multi-platform game engine


I'll probably rework space rocks to use this for C4A instead of the spaghetti client if I ever touch it again :p

At least any future game I make with C4A will :)
 
PokeParadox gave me maintainer permissions on Spout, which I always wanted to add C4A to; I might look at doing so.

Could somebody set up a Wiki page describing how you go about adding a game to C4A these days? It's been a while since I touched anything to do with it.
 
Write skeezix a PM. ;) AFAIK this is the only way for the next two months™.
 
Notaz told me my Makefile is "a mess".

He was right.

So I improved it and made it more standards-conformant. Unfortunately, if you depend on my cross-compiling system with make TARGET=toaster, it will now fail. You have to make these changes to your Makefile (a sketch of the result follows the list):

  • The line "CPP = gcc <some other stuff>" has to become "FLAGS = <some other stuff>". So remove the gcc and rename the variable to FLAGS; the compiler is set automatically now.
  • After the mk-include block, and before the "all:" target, you have to add "CFLAGS += $(FLAGS) $(PARAMETER)".
  • Every occurrence of $(CPP) has to be renamed to $(CC). ;)
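A minimal before/after sketch, with a hypothetical game called mygame and illustrative flags; only the CPP/FLAGS/CC/CFLAGS lines themselves are the point:

    # Before, illustrative flags only:
    #   CPP = gcc -O2 -DMY_DEFINE

    # After:
    FLAGS = -O2 -DMY_DEFINE

    # ... mk-include block ...

    CFLAGS += $(FLAGS) $(PARAMETER)

    all: mygame

    mygame: mygame.c
    	$(CC) $(CFLAGS) -o mygame mygame.c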
Thanks for your understanding, and thank you for traveling with Deutsche Bahn.

Greetings, Ziz
 
Ziz, I'm getting spNetC4AGetStatus() != SP_C4A_OK after trying to commit a score (and waiting until the status is no longer PROGRESS), but the score is still uploaded to C4A. Ideas?
 
That's not very much information. Did you set any other flags, like spNetC4ASetCaching? Can I see some code?

Or could you tell me what exactly spNetC4AGetStatus() returns? I wonder whether it is SP_C4A_ERROR, SP_C4A_TIMEOUT or maybe even SP_C4A_CANCELED.

Keep in mind that if you try to send a score with caching enabled and the sending fails, it will return an error but cache the score anyway!
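For context, the commit-and-wait pattern looks roughly like this. Only spNetC4AGetStatus() and the SP_C4A_* values are confirmed in this thread; the commit call, its arguments, the header name and the profile type are assumptions for the sketch:

    #include <sparrowNet.h>

    void commitAndCheck(spNetC4AProfilePointer profile, int score)
    {
        /* assumed commit call and signature, for illustration only */
        spNetC4ACommitScore(profile, "mygame", score, NULL);

        /* wait until the background transfer is finished */
        while (spNetC4AGetStatus() == SP_C4A_PROGRESS)
            ; /* a real game would keep rendering here */

        switch (spNetC4AGetStatus())
        {
            case SP_C4A_OK:       break; /* the server accepted the score   */
            case SP_C4A_ERROR:    break; /* e.g. network failure; with      */
                                         /* caching enabled it's cached now */
            case SP_C4A_TIMEOUT:  break; /* the server didn't answer in time */
            case SP_C4A_CANCELED: break; /* the transfer was canceled        */
            default:              break;
        }
    }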
 
Here's the relevant part of the code: https://gist.github.com/bzar/faf0580bbeecd6c8233e

I have not set caching. I track the "submitted" state myself, which came as a nice side effect of handling not sending the default high scores :)

If I understand correctly, the status on line 34 should be SP_C4A_OK if the commit succeeds. Anything else would mean the score was not sent successfully. Is this a correct assumption?

EDIT: Oh, and I didn't check the actual return value yet in hopes of figuring out the real culprit without sending yet another garbage result to C4A ;)
 
Please have a deeeeep look at this line:

https://gist.github.com/bzar/faf0580bbeecd6c8233e#file-gistfile1-cpp-L34

:D

If you don't find it, here is the solution. However, it's more fun to find it yourself.

Remove the ;
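For anyone reading along without opening the gist, this is the bug class (a hypothetical reconstruction, not the actual gist line): the stray semicolon terminates the if immediately, so the block after it always runs:

    #include <stdio.h>

    int main(void)
    {
        int status = 0;   /* pretend the commit succeeded */
        if (status != 0); /* <-- the stray ';' is the whole bug */
        {
            /* now just a plain block, executed unconditionally */
            puts("handled as an error although status is fine");
        }
        return 0;
    }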
 
Now THAT's funny! :D
Something else: Do you build sparrowNet on your own?

If yes: how? It is important to include "-DPANDORA" in the build process of sparrowNet.o.
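For illustration, the define only has to reach the compilation of that one object file; the rule below is a sketch, and everything besides -DPANDORA and the sparrowNet file names is a placeholder:

    sparrowNet.o: sparrowNet.c
    	$(CC) $(CFLAGS) -DPANDORA -c sparrowNet.c -o sparrowNet.o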
 
Yes, I only build sparrowNet as part of space rocks' external-libraries CMake phase. I use -DPANDORA for space rocks anyway, so that part is covered. I skimmed your Makefile to see what I need to take into account.
 
Perfect!
 
Saw this commit message "Let's use NEON they said... it will be fast they said..."

Guess that didn't work out so well? ;) I think all the packing and unpacking you have to do where you used it really works against it. Especially if it involves moving a field of a vector register into a scalar register; that is very slow.
 
No, in fact I just defined a NEON vector instead of 3 or 4 scalar values for x, y, z (and w), which I used for pretty much everything. It produced glitches and wasn't faster.

Maybe I will try again some time later. Maybe. :D
 
I know; what I'm saying is that you were implicitly, immediately moving the results from the vector register to scalar registers, which has a big performance penalty.

The way to get good performance with NEON is to split the processing into multiple stages that each work on several pixels at a time.
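A tiny sketch of what that means in practice, using NEON intrinsics (function and buffer names are made up): the interpolated values stay in a NEON register across the whole loop and only touch memory through a vector store, never through a scalar register:

    #include <arm_neon.h>
    #include <stdint.h>

    /* u[i] = u0 + i * dudx, four pixels per iteration */
    void interpolateU(int32_t *out, int32_t u0, int32_t dudx, int count)
    {
        int32x4_t laneOffset = { 0, dudx, 2 * dudx, 3 * dudx };
        int32x4_t u   = vaddq_s32(vdupq_n_s32(u0), laneOffset);
        int32x4_t inc = vdupq_n_s32(4 * dudx); /* advance four pixels */
        for (int i = 0; i + 4 <= count; i += 4)
        {
            vst1q_s32(out + i, u); /* vector store to a small buffer */
            u = vaddq_s32(u, inc); /* no NEON-to-ARM moves anywhere  */
        }
    }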
 
So like a SIMD, or rather SPMD (Single Program, Multiple Data), architecture? Interesting idea; I'll have to look deeper into this. Sounds interesting, and it would be nice to get my software renderer a BIT faster. ;)
 
Something like that. There's a lot you could do to get what would likely be significantly better performance, but you'd have to be prepared to rewrite a lot.

An example would be something like this:

1) Load depth samples from depth buffer for the polygon

2) Calculate X/Y coordinates for the pixels in the polygon, store in a buffer (only include ones that aren't clipped)

3) Use the X/Y coordinates to calculate interpolated u, v, z, w using gradient values calculated during triangle setup. If you're using perspective-correct rendering it makes more sense to first calculate barycentric coordinates and then use those to interpolate. You can also calculate barycentric coordinates regardless, so you don't need two routines to perform the interpolation.

4) Perform depth test, and either store a mask to use during writeback or compress the pixel stream to remove depth failing pixels

5) Convert u/v to texture addresses/indexes

6) Load texels (this part can't use NEON)

7) Blend texels against blend color

8) Store calculated Z to depth buffer

9) Store blended texels to the color buffer

Some stages might make more sense to combine, others to separate... it all depends on how nicely the flow fits into a single function. If you do too much in one loop iteration there won't be enough registers, especially if extra registers are needed to unroll the loop to hide latency, which is often the case with NEON.

You'd probably not want to do this for entire polygons over a particular size; you really want the buffers to fit in L1 cache, so likely no more than a few hundred pixels at a time.

You can add in stages for loading color buffer pixels and performing blending, or alpha testing, etc.

Another thing I'd recommend is tiling the screen so that the current depth buffer and color buffer at least fit in L2 cache, then writing back the color buffer for each tile when you're done. You shouldn't need to write back the depth buffer unless your API exposes it, in which case hopefully you can know ahead of time whether or not it's needed. Tiling does require a scene grabber, so if users expect to render a small number of polygons and then immediately read back the results several times during a frame, tiling will make it a lot slower, or difficult to support at all.

If you do have some form of early-Z testing (or even better, hierarchical Z-testing) it may make sense to do a depth-only pre-pass. Especially if alpha test isn't enabled.
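As a very rough sketch of stages 4, 6, 8 and 9 as separate loops over small per-span buffers (all names and sizes here are assumptions, not anyone's actual renderer); each loop except the texel fetch is then a candidate for its own NEON version:

    #include <stdint.h>

    #define SPAN 256 /* keep the stage buffers small enough for L1 */

    /* filled by the earlier stages (2 and 3) */
    static int32_t  px[SPAN];    /* framebuffer index per pixel */
    static int32_t  zval[SPAN];  /* interpolated z per pixel    */
    static uint16_t texel[SPAN]; /* stage 6 output              */
    static uint8_t  pass[SPAN];  /* stage 4 output: depth mask  */

    void shadeSpan(uint16_t *color, int32_t *depth,
                   const uint16_t *texture, const uint32_t *tindex,
                   int count)
    {
        for (int i = 0; i < count; i++) /* stage 4: depth test */
            pass[i] = zval[i] < depth[px[i]];

        for (int i = 0; i < count; i++) /* stage 6: texel fetch, no NEON */
            texel[i] = texture[tindex[i]];

        for (int i = 0; i < count; i++) /* stages 8 and 9: write-back */
            if (pass[i])
            {
                depth[px[i]] = zval[i];  /* z to the depth buffer     */
                color[px[i]] = texel[i]; /* texel to the color buffer */
            }
    }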
 
I had a lot to do over the last few days but finally found time to do some tests. First of all: I now understand the idea behind NEON. Like a vector machine. Nice. Secondly: I cannot optimize my whole code to be NEON-friendly, but I tried to implement your suggestion a bit. For my test I do this:

  • I calculate all u and v values in 4 parallel lanes (I need to clamp them if they are negative or too big)
  • I load all texel values into a buffer (without NEON)
  • I write the texels to the right positions in the polygon. In this last phase I also do the alpha test with the texel data.
I didn't implement z set, z test, blending or patterns yet, but my comparison scene doesn't use these either. In fact I primarily want to speed up Hase at the moment, and the rendering of the background doesn't even use the alpha test... Right now I have a speedup of 1.5!

This is good and bad at the same time.

First of all it is bad because I use 4 parallel lanes, so I would expect a speedup of 4, or at least 3 (of course not, but let's troll!). However, the texels still need to be loaded serially. Furthermore, I added some new buffers for caching the results of the different stages, which take time to be written and read. Last but not least, I can't really test for alpha and then just do nothing: I have to load 4 pixels from the framebuffer, select between them and the texels from the texture buffer, and then write back - even if the whole alpha mask was 0.
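That unconditional write-back looks roughly like this with intrinsics (names made up; the alpha mask would come from comparing the four texels against the transparent color):

    #include <arm_neon.h>
    #include <stdint.h>

    /* Write four 16-bit pixels: where a mask lane is all ones take the
       texel, elsewhere keep the framebuffer pixel. This runs even when
       the whole mask is 0, which is part of the overhead described above. */
    void writeFour(uint16_t *fb, const uint16_t *tex, uint16x4_t alphaMask)
    {
        uint16x4_t dst = vld1_u16(fb);  /* 4 framebuffer pixels     */
        uint16x4_t src = vld1_u16(tex); /* 4 texels from the buffer */
        vst1_u16(fb, vbsl_u16(alphaMask, src, dst)); /* bit select  */
    }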

However, 50% more speed is still awesome!

Now I need to clean up my code before I can add the NEON optimization on a larger scale. I have ~32 functions which differ only in details (like with or without z test). I need to write one big #ifdef monster (which I already did for the function itself, but not for the pixel set...).

But I think it is worth it! :D Thanks for the hints, Exophase!
 