> For speed on modern GPUs: a double takes twice as long as a float. That means it could calculate 1000 floats or 500 doubles in the same time. double is mostly used in scientific research, where a small difference can change the whole outcome (like weather prediction). For games most people use float, which is accurate enough and twice as fast.

For most consumer-grade GPUs a double is much more than twice as slow as a float. It can be more like four, five, or even twelve or twenty-four times slower. Only on HPC-grade equipment or CPUs is the ratio ever 2:1.
> Write something like: [...]

Oh, man alive... I've always only used double for floating-point operations... I thought it was supposed to be the default. Thanks for the tip.
Or use "-fsingle-precision-constant" as compiler option and forget constant ever been double. Hey who's going to write down over 7 digits of precision for a constant...If you dont write the f the 10.0 is seen as a double value and it will be converted (casted) into a float during runtime which costs a few cycles (at least if the compiler is not optimizing it away).
> The FPUs are often working with entirely different sizes anyway. x86 FPUs usually work with 80-bit-wide floats; you can even directly use them as such in C, that's the long double datatype.

Besides, gcc is way more optimized when targeting x86 than ARM. In fact there is a good chance that in both your tests gcc cast your float/double to long double and unrolled the loop (at least partially) to be able to use the SIMD features (single instruction, multiple data) of the FPU. Hence the similar timing. You'll need to read the .o file produced by gcc to be sure (hint: that's over my head).
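To see those sizes concretely (a small sketch; the exact numbers depend on compiler and ABI, e.g. gcc typically reports 12 bytes for long double on 32-bit x86 and 16 on x86-64, while many ARM ABIs make it a plain 64-bit double):

#include <stdio.h>

int main(void)
{
    /* Storage sizes in bytes; on x86 the long double is the 80-bit x87
       format padded out for alignment */
    printf("float:       %u\n", (unsigned)sizeof(float));
    printf("double:      %u\n", (unsigned)sizeof(double));
    printf("long double: %u\n", (unsigned)sizeof(long double));
    return 0;
}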
> Can I get a little clarification: is this Pandora-specific information, or general coding practice?
>
> I ran this program ->
>
> int main()
> {
>     for (double x = 1; x <= 100000.0; ++x)
>         for (double y = 1; y <= 100000.0; ++y)
>             double z = x * y;
> }
>
> on my windows machine with mingw32 and -O3. I've changed the doubles to floats and added fs at the end of the literals, but I'm getting no real difference in processing time other than a random +/- 0.1% variance.
>
> Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?

You should take a look at the assembler output; I don't think that with -O3 there's much left. And adding 'f's doesn't help, because it does not magically convert your types from double to float.
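If you want to look at what the compiler actually emitted, the standard gcc/binutils invocations are (test.c is just a placeholder name):

gcc -O3 -S test.c        # stop after compiling; the assembly ends up in test.s
gcc -O3 -c test.c        # build the object file
objdump -d test.o        # disassemble it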
> ...adding 'f's doesn't help, because it does not magically convert your types from double to float.

No, but it does prevent the constants from being converted. If left as standard double and compared to floats, that's 200,000,000 conversions from double to float for the comparison (unless the optimizer is doing it once and shoving it into a register or something). It probably doesn't add up to much, the x86 FPU is more than powerful enough, but on something with just a single-precision FPU it's going to be a very expensive operation. For best practice, if you're working with floats, make your constants floats as well.
> No, but it does prevent the constants from being converted. [...] For best practice, if you're working with floats, make your constants floats as well.

Sure, but in his case he uses double as loop variables, so just adding the f at the end of the numbers will probably make it even worse.
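To make the constant-conversion issue concrete (a sketch of the scenario under discussion, not code from the thread):

#include <stdio.h>

int main(void)
{
    float sum = 0.0f;

    /* 100000.0 is a double constant, so x is converted for every comparison;
       cheap on an x86 FPU, expensive on single-precision-only hardware */
    for (float x = 1; x <= 100000.0; ++x)
        sum += x;

    /* 100000.0f keeps the comparison entirely in single precision */
    for (float y = 1; y <= 100000.0f; ++y)
        sum += y;

    printf("%f\n", sum);
    return 0;
}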
> Sure, but in his case he uses double as loop variables, so just adding the f at the end of the numbers will probably make it even worse.

He then said:

> I've changed the doubles to floats and added fs at the end of the literals

which is what I thought you were complaining about.
> Can I get a little clarification: is this Pandora-specific information, or general coding practice? [...] Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?

I would say that the compiler is optimising well, since the code is very straightforward.
> He then said "I've changed the doubles to floats and added fs at the end of the literals", which is what I thought you were complaining about.

Aah, I overlooked that part; everything's OK then.
> Can I get a little clarification: is this Pandora-specific information, or general coding practice? [...] Is my program unrealistically simplistic - and maybe being optimized nicely, or does it actually not matter if you use floats or doubles on a PC?

Your biggest problem is that your program isn't actually doing anything, as far as the compiler is concerned. You're doing a bunch of calculations but you're not doing anything with the results; you're more or less sitting in the corner mumbling to yourself while the compiler ignores you. The entire thing is dead code, and I doubt there's a single floating-point operation emitted at -O3. What you're actually measuring as processing time is just what it takes to start up and exit the program. Look at what this time actually is, calculate how many floating-point operations per second it's supposed to be, and you'll probably find it's some impossibly fast value.
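One way to keep the loops from being eliminated (the accumulate-and-print trick here is a suggestion, not from the thread) is to make their result observable:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (double x = 1; x <= 100000.0; ++x)
        for (double y = 1; y <= 100000.0; ++y)
            sum += x * y;
    /* Printing the accumulated result makes the loops observable behaviour,
       so the compiler can no longer discard them as dead code */
    printf("%f\n", sum);
    return 0;
}

With that in place, swapping the doubles for floats (and suffixing the constants with f) produces arithmetic that actually survives -O3, so any float/double difference the hardware has becomes measurable.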
> Well, as Letalis Sonus said earlier, x86 FPUs are 80 bit wide:

SSE isn't, and there's no good reason why programs should be compiled without SSE. Even if it's not vectorizing, SSE still has scalar floating point. So they should only be using the classic x87 FPU if they use long doubles.
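For reference, gcc exposes this choice directly (standard flags; test.c is a placeholder, and gcc on x86-64 already defaults to SSE math):

gcc -O3 -msse2 -mfpmath=sse test.c   # scalar float/double math in SSE registers
gcc -O3 -mfpmath=387 test.c          # force the classic x87 FPU instead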
> I do get the general idea behind these basic types, and for some pieces of code they are fine to use, but I seem to find (increasingly) that using a data type of an unknown size (across all platforms we target) often ends up causing problems. "Oh, that number will never overflow" is a famous assertion that ends up being wrong.

C99 introduced types like int32_t you can use.
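A quick sketch of those C99 fixed-width types (variable names are just for illustration):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  small = INT32_MAX;            /* exactly 32 bits on every platform */
    uint64_t big = (uint64_t)small * 4u;   /* widen explicitly before multiplying */

    /* inttypes.h supplies portable printf formats for the fixed-width types */
    printf("%" PRId32 " %" PRIu64 "\n", small, big);
    return 0;
}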