Double or Float?


It's a mistake I've made recently - I cast between longwords and pointers... both being 4 bytes on a 32-bit CPU. Of course, now my software can't (easily) be ported to 64-bit.

Not that I care a great deal, mind :)

D.
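(As an aside, a minimal sketch of the portable way to do that kind of cast, assuming a C99 compiler that provides uintptr_t in <stdint.h>: the integer type is guaranteed wide enough to hold a pointer, so the code keeps working when pointers grow to 8 bytes.)

#include <stdint.h>   /* uintptr_t: an unsigned integer wide enough to hold a pointer */
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *p = &value;

    /* Store the pointer in an integer without assuming pointers are 4 bytes. */
    uintptr_t bits = (uintptr_t)p;

    /* Round-tripping through uintptr_t gives back the same pointer. */
    int *q = (int *)bits;
    printf("%d\n", *q);   /* prints 42 on both 32-bit and 64-bit targets */
    return 0;
}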
 
For speed on modern GPUs:

a double takes twice as long as a float.

That means it can calculate 1000 floats or 500 doubles in the same time.

double is mostly used in scientific research, where a small difference can change the whole outcome (like weather prediction). For games most people use float, which is accurate enough and twice as fast.
 
For speed on modern GPUs:

a double takes twice as long as a float.

That means it can calculate 1000 floats or 500 doubles in the same time.

double is mostly used in scientific research, where a small difference can change the whole outcome (like weather prediction). For games most people use float, which is accurate enough and twice as fast.
For most consumer-grade GPUs a double is much more than twice as slow as a float. It can be more like four, five, or even twelve or twenty-four times slower. Only on HPC-grade equipment or CPUs is the ratio ever 2:1.

That's for desktop/laptop GPUs. Most mobile GPUs don't support doubles at all.
 
Oh, man alive... I've always only used double for floating point operations... I thought it was supposed to be the default.  Thanks for the tip :)
 
Oh, man alive... I've always only used double for floating point operations... I thought it was supposed to be the default.  Thanks for the tip :)
Write something like:

float speed = 10.0f;

If you don't write the f, the 10.0 is treated as a double value and it will be converted (cast) to float at runtime, which costs a few cycles (at least if the compiler doesn't optimize it away).
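A minimal sketch of the difference (the variable and function names here are just for illustration):

#include <stdio.h>

static float half(float x)
{
    /* x * 0.5f stays in single precision; writing x * 0.5 instead would
       promote x to double, multiply in double, then narrow back to float. */
    return x * 0.5f;
}

int main(void)
{
    float a = 10.0f;   /* float literal: no conversion involved */
    float b = 10.0;    /* double literal: the compiler converts it to float (here it can do so at compile time) */

    printf("%f %f %f\n", a, b, half(a));
    return 0;
}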
 
If you don't write the f, the 10.0 is treated as a double value and it will be converted (cast) to float at runtime, which costs a few cycles (at least if the compiler doesn't optimize it away).
Or use "-fsingle-precision-constant" as a compiler option and forget constants were ever double. Hey, who's going to write out more than 7 digits of precision for a constant anyway...

EDIT: on a side note: http://pandorawiki.org/Floating_Point_Optimization
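For what it's worth, a small sketch of what that flag changes, assuming GCC (the file name in the comment is made up):

/* Build with e.g.:  gcc -O2 -fsingle-precision-constant demo.c
   With the flag, the unsuffixed 3.14159265 below is treated as a float
   constant; without it, the expression is typically evaluated in double
   and then narrowed back to float on assignment. */
#include <stdio.h>

int main(void)
{
    float r = 2.0f;
    float area = 3.14159265 * r * r;   /* note: no 'f' suffix on the constant */
    printf("%f\n", area);
    return 0;
}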
 
Can I get a little clarification: is this Pandora-specific information, or general coding practice?

I ran this program ->

int main()
{
  for(double x = 1; x <= 100000.0; ++x)
    for(double y = 1; y <= 100000.0; ++y)
      double z = x * y;
}

on my Windows machine with mingw32 and -O3. I've changed the doubles to floats and added fs at the end of the literals, but I'm getting no real difference in processing time other than a random +/- 0.1% variance.

Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?
 
Well, as Letalis Sonus said earlier, x86 FPUs are 80 bits wide:

The FPUs are often working with entirely different sizes anyway. x86 FPUs usually work with 80-bit wide floats; you can even use them directly as such in C - that's the long double datatype.
Besides, gcc is far better optimized when targeting x86 than ARM. In fact there is a good chance that in both your tests gcc converted your float/double to long double and unrolled the loop (at least partially) to be able to use the SIMD (single instruction, multiple data) features of the FPU. Hence the similar timing. You'd need to read the .o file produced by gcc to be sure (hint: that's over my head :D )
The rule of thumb is: don't use a data type far larger than what you need. So in most cases, float should be good enough. And for ARM it's a must, as the GPU doesn't handle doubles and the performance impact is huge.
 
Can I get a little clarification: is this Pandora-specific information, or general coding practice?

I ran this program ->

int main()
{
  for(double x = 1; x <= 100000.0; ++x)
    for(double y = 1; y <= 100000.0; ++y)
      double z = x * y;
}

on my Windows machine with mingw32 and -O3. I've changed the doubles to floats and added fs at the end of the literals, but I'm getting no real difference in processing time other than a random +/- 0.1% variance.

Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?
You should take a look at the assembler output; I don't think there's much left at -O3. And adding 'f's doesn't help, because it does not magically convert your types from double to float :)

And yes, using float is (IMO) general coding practice. As said before, if you don't really need the higher precision of double (which is rarely the case), just use float.
 
because it does not magically convert your types from double to float
No, but it does prevent the constants from being converted. If left as standard double and compared to floats, that's 200000000 conversions from double to float for the comparison (unless the optimizer is doing it once and shoving it into a register or something). It probably doesn't add up to much, the x86 FPU is more than powerful enough, but on something with just a single precision FPU it's going to be a very expensive operation. For best practice, if you're working with floats make your constants floats as well.
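A small sketch of that point, assuming C99 (the loop bounds are arbitrary): in the first loop the float counter is, per the language rules, promoted to double for every comparison against the double constant, while in the second the comparison stays in single precision.

#include <stdio.h>

int main(void)
{
    volatile float sink = 0.0f;   /* volatile so the loops aren't optimized away */

    /* Double constant as the bound: i is promoted to double for each comparison. */
    for (float i = 0.0f; i < 100000.0; i += 1.0f)
        sink += i;

    /* Float constant as the bound: the comparison stays in single precision. */
    for (float i = 0.0f; i < 100000.0f; i += 1.0f)
        sink += i;

    printf("%f\n", sink);
    return 0;
}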
 
because it does not magically convert your types from double to float
No, but it does prevent the constants from being converted. If left as standard double and compared to floats, that's 200000000 conversions from double to float for the comparison (unless the optimizer is doing it once and shoving it into a register or something). It probably doesn't add up to much, the x86 FPU is more than powerful enough, but on something with just a single precision FPU it's going to be a very expensive operation.
For best practice, if you're working with floats make your constants floats as well.
Sure, but in his case he uses doubles as loop variables, so just adding the f at the end of the numbers would probably make it even worse.
 
Sure, but in his case he uses doubles as loop variables, so just adding the f at the end of the numbers would probably make it even worse
He then said
I've changed the doubles to floats and added fs at the end of the literals
which is what I thought you were complaining about.
 
Can I get a little clarification: is this Pandora-specific information, or general coding practice?

I ran this program ->

int main()
{
  for(double x = 1; x <= 100000.0; ++x)
    for(double y = 1; y <= 100000.0; ++y)
      double z = x * y;
}

on my Windows machine with mingw32 and -O3. I've changed the doubles to floats and added fs at the end of the literals, but I'm getting no real difference in processing time other than a random +/- 0.1% variance.

Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?
I would say that the compiler is optimising well since the code is very straightforward.

But I would also say that it's good practice to use the best datatype to begin with just to give the compiler a helping hand.
 
Yeah. I use a lot of 16-bit color values for my software renderer. It gave me a great speed-up to use 32-bit ints internally (ignoring the upper 16 bits) instead of 16-bit ints, because the compiler was doing conversion work for every pixel (!) that wasn't needed at all - it could simply have ignored the upper 16 bits. Instead it used CPU time to set the upper 16 bits to zero. ;)
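A rough sketch of that idea for RGB565 pixels (the function is made up for illustration): the per-pixel math is done in plain unsigned ints and the value is only narrowed back to 16 bits once, on the store, so the compiler isn't forced to keep truncating intermediate results to 16 bits.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Halve the brightness of a row of RGB565 pixels. Intermediates are kept
   in unsigned int; only the final store narrows back to 16 bits. */
static void darken_row(uint16_t *row, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        unsigned int p = row[i];              /* widen once on load  */
        unsigned int r = (p >> 11) & 0x1F;
        unsigned int g = (p >> 5)  & 0x3F;
        unsigned int b =  p        & 0x1F;

        r >>= 1; g >>= 1; b >>= 1;            /* halve each channel  */

        row[i] = (uint16_t)((r << 11) | (g << 5) | b);   /* narrow once on store */
    }
}

int main(void)
{
    uint16_t row[4] = { 0xFFFF, 0xF800, 0x07E0, 0x001F };   /* white, red, green, blue */
    darken_row(row, 4);
    for (int i = 0; i < 4; ++i)
        printf("%04X ", (unsigned)row[i]);
    printf("\n");
    return 0;
}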
 
Can I get a little clarification: is this Pandora-specific information, or general coding practice?

I ran this program ->

int main()
{
  for(double x = 1; x <= 100000.0; ++x)
    for(double y = 1; y <= 100000.0; ++y)
      double z = x * y;
}

on my Windows machine with mingw32 and -O3. I've changed the doubles to floats and added fs at the end of the literals, but I'm getting no real difference in processing time other than a random +/- 0.1% variance.

Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?
Your biggest problem is that your program isn't actually doing anything, as far as the compiler is concerned. You're doing a bunch of calculations, but you're not doing anything with the results; you're more or less sitting in the corner mumbling to yourself while the compiler ignores you. The entire thing is dead code, and I doubt there's a single floating-point operation emitted at -O3. What you're actually measuring as processing time is just what it takes to start up and exit the program. Look at what this time actually is, calculate how many floating-point operations per second it's supposed to represent, and you'll probably find it's some impossibly fast value.

Even if you changed it to something that actually did something with the computations, like accumulating them somewhere and returning the result, it'd still be compiled down to next to nothing because your inputs are all constants. The compiler will realize this and reduce your entire huge loop into setting a constant value. It could be fun timing how much longer this takes to compile...

So when doing benchmarks like this you want to make sure the compiler doesn't think your inputs are constants (get them from argv, for instance) and that your outputs aren't discarded (use them in a printf or something).
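Something along these lines avoids both traps (a sketch, not a proper benchmark): the limit comes from argv so the compiler can't fold the loops at compile time, and the sum is printed so the work isn't dead code. Swap the doubles for floats (and suffix the constants with f) to compare the two.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* The bound is only known at runtime, so the loops can't be folded away,
       and the result is printed, so the computation isn't dead code. */
    double limit = (argc > 1) ? atof(argv[1]) : 10000.0;
    double sum = 0.0;

    for (double x = 1.0; x <= limit; ++x)
        for (double y = 1.0; y <= limit; ++y)
            sum += x * y;

    printf("sum = %f\n", sum);
    return 0;
}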

Well, as Letalis Sonus said earlier, x86 FPUs are 80 bits wide:
SSE isn't, and there's no good reason why programs should be compiled without SSE. Even if it's not vectorizing, SSE still has scalar floating point. So they should only be using the classic x87 FPU if they use long doubles.
 
In hindsight, I would have preferred some of the basic types to have actual sizes. My first full time development platform was PS2, where sizes were:

word/int 32 bit

dword/long 64 bit

qword/long long 128 bit

I got so used to it that, when I was forced to do some WIN32 programming (using the WIN32 API), I had to remember DWORD was now 32 bit. Is it not also the case that int is 32 bit and long is also 32 bit? Maybe I'm wrong on this (can't be bothered to check right now) but it certainly all felt a little clunky.

I do get the general idea behind these basic types, and for some pieces of code they are fine to use, but I seem to find (increasingly) that using a data type of an unknown size (across all platforms we target) often ends up causing problems - 'Oh that number will never overflow' being a famous assertion that ends up being wrong :)
 
I do get the general idea behind these basic types, and for some pieces of code they are fine to use, but I seem to find (increasingly) that using a data type of an unknown size (across all platforms we target) often ends up causing problems - 'Oh that number will never overflow' being a famous assertion that ends up being wrong :)
C99 introduced fixed-width types like int32_t that you can use.
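For example (a minimal sketch; needs a C99 <stdint.h> and <inttypes.h>):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  frame   = 2000000000;        /* exactly 32 bits on every platform  */
    uint64_t samples = 4294967296ULL;     /* needs more than 32 bits everywhere */

    /* The PRI* macros from <inttypes.h> supply the matching printf formats. */
    printf("%" PRId32 " %" PRIu64 "\n", frame, samples);
    return 0;
}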
 
Furthermore, SDL defines some types too.

You can even use them without linking against SDL, because in fact they are just some #ifdefs and typedefs in the header.

I like Uint16, Sint32 or Uint64 more than int32_t, and you don't need C99. ;)

(Apart from that, many Pandora games/applications use SDL anyway.)
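A tiny sketch of those in use, assuming SDL 1.2's SDL_stdinc.h is available (the include path may differ per system; only the header is needed, no linking):

#include <SDL/SDL_stdinc.h>   /* Uint8, Sint16, Uint32, Uint64, ... */
#include <stdio.h>

int main(void)
{
    Uint16 pixel  = 0xF800;         /* 16-bit RGB565 red                */
    Sint32 offset = -1024;          /* signed 32-bit                    */
    Uint64 ticks  = 123456789ULL;   /* unsigned 64-bit, where available */

    printf("%u %d %llu\n", (unsigned)pixel, (int)offset, (unsigned long long)ticks);
    return 0;
}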
 