GP2X Floating Point Vs. Fixed Point Benchmarks?


FluffyPanda

I've just been reading up on floating point maths and I was wondering how necessary it was with GCC 4.0.2.

It seems simple enough to get a few macros to make it all easy, but if the compiler will fix it for me then, frankly, why bother?

So if anyone has already bothered and could let me know what the impact is on, say, 10,000 divides, multiplications and additions, it'd be much appreciated.
 
FluffyPanda posted on Feb 25 2006 at 02:24 AM said:
I've just been reading up on floating point maths and I was wondering how necessary it was with GCC 4.0.2.

It seems simple enough to get a few macros to make it all easy, but if the compiler will fix it for me then, frankly, why bother?

So if anyone has already bothered and could let me know what the impact is on, say, 10,000 divides, multiplications and additions, it'd be much appreciated.

I didn't profile floating point on the GP2X, as it's best avoided at any cost. There's no FPU, so it will be inherently slow. A divide is even worse - the ARM doesn't support it directly, so it goes through library routines that take tens of machine cycles even for plain integers. And it doesn't have a fixed cycle cost, so it might be just 20 cycles one time and as much as 100 another!
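When the divisor is a known constant, though, you can replace the divide with a multiply by a fixed-point reciprocal. A rough sketch (my own illustration, not from any GP2X library - the rounded constant makes it exact only for x up to 16388):
Code:
// Divide by 10 without a divide: multiply by a 16.16 reciprocal.
// 65536/10 = 6553.6, rounded up to 6554.
static inline unsigned div10( unsigned x ){
    return (x * 6554u) >> 16; // exact for x <= 16388, then drifts high
}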

The good news is that you can do fractional maths with plain integers just fine. The only problem is that the range must be handled manually. That isn't so hard really, and it's fast.
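Something along these lines is enough to get going (a minimal 16.16 sketch of my own - the names are made up and it's untested on the GP2X):
Code:
typedef int fixed; // 16.16: upper 16 bits integer part, lower 16 bits fraction

#define INT_TO_FIX(a)   ((a) << 16)
#define FIX_TO_INT(a)   ((a) >> 16)
#define FLOAT_TO_FIX(a) ((fixed)((a) * 65536.0f))

// add/subtract are plain integer ops; mul/div need a 64-bit intermediate
#define FIX_MUL(a, b)   ((fixed)(((long long)(a) * (b)) >> 16))
#define FIX_DIV(a, b)   ((fixed)((((long long)(a)) << 16) / (b)))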
 
I posted some profile results from Allegro about a week ago, including comparisons between its fixed point routines and the native floating point versions. The difference for simple operations (multiplication, addition, etc) wasn't as great as I expected, but doing trig via lookup tables was definitely a big win. So I guess gcc's soft-float implementation is pretty good.
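For anyone curious, the lookup-table trick is just precomputing the function into a fixed point array. A rough sketch of the idea (not Allegro's actual implementation):
Code:
#include <math.h>

// 256 entries per full circle, values in 16.16 fixed point.
static int sin_table[256];

static void init_sin_table( void ){
    for( int i = 0; i < 256; i++ )
        sin_table[i] = (int)( sinf( i * 2.0f * 3.14159265f / 256.0f ) * 65536.0f );
}

static inline int fix_sin( unsigned angle ){ // angle in 256ths of a circle
    return sin_table[angle & 255];
}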

Allegro's profiling probably isn't all that accurate, though, as I'm probably the first person in about five years to seriously look at the maths function profiling results!
 
How are you guys doing multiplications/divides? Every tutorial I've seen either uses 64-bit long longs (which lock up my GP2X, so I assume the processor doesn't support them) or totally forgets that a 32x32 multiply with a 64-bit result is needed for 16.16 maths.
 
Allegro has various implementations - on x86 platforms it generally uses dedicated assembler routines (which use 64-bit multiplication), but if those are disabled and/or it's on a different architecture it either uses long longs, or converts to float to do the multiplication.

I'm not sure which is used on the gp2x, probably the long long one, but it's possible that it's using the float one - it depends how the defines got set in the end. If it's doing the float cast, that would explain why I didn't see much performance difference between float and fixed multiplication...
 
I hope I don't have to do a float cast, that would kinda defeat the point of it all.
 
Yeah, the float cast is just because modern CPUs with FPUs don't benefit from doing fixed point maths.

I'm pretty sure Allegro is using the long long system on the gp2x, so maybe you should look more closely into why that's not working for you.
 
The problem is, I can't even make a long long, assign a value to it and then fwrite it. I keep getting a black screen till I turn the unit off.
 
TKF15H posted on Feb 28 2006 at 03:24 PM said:
How are you guys doing multiplications/divides? Every tutorial I've seen either uses 64-bit long longs (which lock up my GP2X, so I assume the processor doesn't support them) or totally forgets that a 32x32 multiply with a 64-bit result is needed for 16.16 maths.
Like this: fixed.cpp.

- HM
 
Thanks for the link, hmw. Turns out they're using long longs too.
For some reason, long longs started working today, but I have no idea what changed... :p

Here's the code I made; if it's of use to anybody, feel free to use it:
Code:
// #define Debug

typedef unsigned char *u8p;
typedef unsigned char u8;
typedef char *s8p;
typedef char s8;
typedef unsigned short int *u16p;
typedef unsigned short int u16;
typedef short int *s16p;
typedef short int s16;
typedef unsigned int *u32p;
typedef unsigned int u32;
typedef int *s32p;
typedef int s32;
typedef float f32;
typedef f32 *f32p;
typedef double f64;
typedef double *f64p;

#ifdef __GNUC__
#define int64 long long
#else /* MSVC */
#define int64 __int64
#pragma warning(disable: 4996)
#endif

typedef signed int64 s64;
typedef unsigned int64 u64;
typedef s64 *s64p;
typedef u64 *u64p;

typedef void (* call_v_v )();
typedef void (* call_v_uS )( u16 );  // void, unsigned Short
typedef void (* call_v_LLB )( u32, u32, u8 );   // void, Long, Long, Byte
typedef u32  (* call_L_L )( u32 );   // Long, Long
typedef u16  (* call_S_L )( u32 );   // Short, Long
typedef u8   (* call_B_L )( u32 );   // Byte, Long

union _u16{
	struct{
  u8 L, U;
	};
	u16 X;
};

union _u32{
	struct{
  _u16 L, U;
	};
	_u32 & operator=( const u32 nv ){
  X = nv;
  return *this;
	}
	u32 X;
	u8 bit( u8 x ){ return (X>>x)&1; } // mask with &1 (not |1) to extract the bit
};

union _u64{
	struct{
  _u32 L, U;
	};
	u64 X;
	u32 gU(){ return *(u32p)&U; }
	u32 gL(){ return *(u32p)&L; }
	u8 bit( u8 x ){
  if( x>=32 ) return U.bit( x-32 );
  return L.bit( x );
	}
};

struct _u128{
	_u64 L, U;
	u8 bit( u8 x ){
  if( x>=64 ) return U.bit( x-64 );
  return L.bit( x );
	}
};

typedef u8  *_u8p;
typedef _u16 *_u16p;
typedef _u32 *_u32p;
typedef _u64 *_u64p;
typedef _u128 *_u128p;

struct F32{
  u32 X;
  F32(){}
  F32( int x ){
    (*(_u32p)&X).U.X = x;
    (*(_u32p)&X).L.X = 0;
  }
  F32( float x ){
    x *= 65536.0f;
    X = (u32)(s32) x; // cast via signed so negative floats convert correctly
  }
  F32 operator++( void ){
    (*(_u32p)&X).U.X++;
    return *this;
  }
  F32 operator++( int ){
    (*(_u32p)&X).U.X++;
    return *this;
  }
  F32 operator+=( int x ){
    (*(_u32p)&X).U.X+=x;
    return *this;
  }
  F32 operator+=( float x ){
    X+=(u32)(s32)(x*65536.0f); // cast via signed so negative x works
    return *this;
  }
  F32 operator+=( F32 x ){
    X+=x.X;
    return *this;
  }
  F32 operator--( void ){
    (*(_u32p)&X).U.X--;
    return *this;
  }
  F32 operator--( int ){
    (*(_u32p)&X).U.X--;
    return *this;
  }
  F32 operator-=( int x){
    (*(_u32p)&X).U.X-=x;
    return *this;
  }
  F32 operator=( int x ){
    (*(_u32p)&X).U.X = x;
    (*(_u32p)&X).L.X = 0;
    return *this;
  }
  F32 operator=( F32 x ){
    X = x.X;
    return *this;
  }
  F32 operator=( f32 x ){
    x *= 65536.0f;
    X = (u32)(s32) x; // cast via signed so negative floats convert correctly
    return *this;
  }
  operator int(){
    return (int) ((s16)(*(_u32p)&X).U.X);
  }
  operator unsigned int(){
    return (*(_u32p)&X).U.X;
  }
  operator float(){
    return ((float)s32(X))/65536.0f;
  }
  F32 Mul( F32 x ){
    s64 T;
    T = s64(s32(X)) * s64(s32(x.X)); // signed 64-bit intermediate, as in operator*
    T >>= 16;
    X = (u32) T;
    return *this;
  }

  F32 operator+( int x ){
    F32 R;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev += x;
#endif
    R.X = 0;
    (*(_u32p)&R.X).U.X = (u16) x;
    R.X += X;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator+( F32 x ){
    F32 R;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev += f32(s32(x.X)) / 65536.0f;
#endif
    R.X = X + x.X;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator-( int x ){
    F32 R;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev -= x;
#endif
    R.X = 0;
    (*(_u32p)&R.X).U.X = (u16) x; // build x as 16.16, mirroring operator+
    R.X = X - R.X;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator-( F32 x ){
    F32 R;
    R.X = X - x.X;
    return R;
  }
  F32 operator*( int x ){
    F32 R;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev *= x;
#endif
    R.X = X * x;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator*( f32 x ){
    F32 R;
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev *= x;
#endif
    T = s64(s32(X));
    T *= s64(s32(x*65536.0f));
    T >>= 16;
    R.X = (u32) T;
#ifdef Debug
    if( (*(_u64p)&T).U.L.X && !(*(_u64p)&T).U.U.X ) R.X = 0x7FFFFFFF;
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator*( F32 x ){
    F32 R;
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev *= f32(s32(x.X)) / 65536.0f;
#endif
    // 32x32 -> 64-bit signed multiply, then shift back down to 16.16
    T = s64(s32(X));
    T *=s64(s32(x.X));
    T >>= 16;
    R.X = (u32) T;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
    return R;
  }
  F32 operator*=( int x ){
    X *= x;
    return *this;
  }
  F32 operator*=( f32 x ){
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev *= x;
#endif

    T = s64(s32(X));
    T *= s64(s32(x*65536.0f));
    T >>= 16;
    X = (u32) T;
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(X))/65536.0f );
#endif
    return *this;
  }
  F32 operator*=( F32 x ){
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev *= f32(s32(x.X)) / 65536.0f;
#endif
    T = s64(s32(X));
    T *= s64(s32(x.X));
    T >>= 16;
    X = (u32) T;

#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(X))/65536.0f );
#endif
/*
    X *= x.X;
    X >>= 16;
*/
    return *this;
  }
  F32 operator/( int x ){
    F32 R;
    R.X = (u32)( s32(X) / x ); // signed divide, so negative values work
    return R;
  }
  F32 operator/( F32 x ){
    F32 R;
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev /= f32(s32(x.X)) / 65536.0f;
#endif
    T = 0;
    (*(_u64p)&T).U.X = s16((*(_u32p)&X).U.X);
    (*(_u32p)&((*(_u64p)&T).L.X)).U.X = (*(_u32p)&X).L.X;
    R.X = (u32)( T / s32(x.X) );
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(R.X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(R.X))/65536.0f );
#endif
//    R.X = (X<<16)/x.X;
    return R;
  }
  F32 operator/=( F32 x ){
    s64 T;
#ifdef Debug
    f32 ev;
    ev = f32(s32(X)) / 65536.0f;
    ev /= f32(s32(x.X)) / 65536.0f;
#endif
    T = 0;
    (*(_u64p)&T).U.X = s16((*(_u32p)&X).U.X); // sign-extend, as in operator/
    (*(_u32p)&((*(_u64p)&T).L.X)).U.X = (*(_u32p)&X).L.X;
    X = (u32)( T / s32(x.X) );
#ifdef Debug
    if( s32(ev*10) != s32(f32(s32(X))/65536.0f*10.0f) )
      printf("%f != %f!\n", ev, f32(s32(X))/65536.0f );
#endif
    return *this;
  }
};

Example:
Code:
// f32 is a regular float, F32 is fixed-point.
int main(int argc, char *argv[])
{
 gp2x_init(1000, 15, 11025,16,1,60);
 f32 fl=100.0f;
 F32 fi, t;
 u32 x=0;
 fi=100.0f;
 t =0.9f; 
 FILE *out;
 out = fopen("FloatTest.txt", "w");
 for( u32 timer=gp2x_timer_read(); timer + 10000 > gp2x_timer_read(); x++ ){
   fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f;
   fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f; fl *= 0.9f;
 }
 fprintf( out, "%d float multiplications per second.\n", x/2 );
 x=0;
 for( u32 timer=gp2x_timer_read(); timer + 10000 > gp2x_timer_read(); x++ ){
   fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t;
   fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t; fi *= t;
 }
 fprintf( out, "%d 16.16 multiplications per second.\n", x/2 );
 fclose( out );
 gp2x_deinit();
}

void gp2x_sound_frame(void *blah, void *buff, int samples) {}

Results:
236983 float multiplications per second.
530667 16.16 multiplications per second.
2.2392 times faster than floats.

Note: the code uses unions rather than bitshifts wherever possible. This should make it faster, but I haven't looked at the assembly code to be sure.
 
TKF15H posted on Mar 1 2006 at 07:25 AM said:
Thanks for the link, hmw. Turns out they're using long longs too.
For some reason, long longs started working today, but I have no idea what changed... :p

Note: the code uses unions rather than bitshifts wherever possible. This should make it faster, but I haven't looked at the assembly code to be sure.
You probably need to run some benchmarks. The 16.16 multiply, as usually implemented using a 64-bit intermediate, is 3 instructions (smull, shift, orr/shift) if compiled properly.
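In C that's just the 64-bit intermediate form - something like this (a sketch, not the exact code from fixed.cpp):
Code:
// 16.16 multiply via a 64-bit intermediate; on ARM, GCC emits smull
// followed by shifts (the last one folded into an orr) for this.
static inline int fixmul( int a, int b ){
    return (int)( ((long long)a * (long long)b) >> 16 );
}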

- HM
 
Benchmarked - check the previous post.
Same test with the -O2 flag:
5283459 float multiplications per second.
5283574 16.16 multiplications per second.

Looks like GCC's implementation of floats isn't that bad, or my code is not being optimised well.
 
TKF15H posted on Mar 1 2006 at 05:38 PM said:
Benchmarked - check the previous post.
Same test with the -O2 flag:
5283459 float multiplications per second.
5283574 16.16 multiplications per second.

Looks like GCC's implementation of floats isn't that bad, or my code is not being optimised well.

Well, with -O2 the compiler could optimize away all the multiplications, since the results are never used (neither fl nor fi).

Doing a printf of the result variables would fix it if that were the case.

But that would leave empty loop bodies, and I find it odd that empty loop bodies wouldn't be much faster than loop bodies with fixed point multiplications - unless the cost of the fixed point muls is dwarfed by the gp2x_timer_read calls you're doing.

And I think the timer reading is contaminating your tests.

The usual way to do a benchmark and arrive at X operations per second is to perform some fixed large number of operations (100000 or what have you), time how long that takes, then divide. The way you're doing it now, you incur an unnecessary addition and function call each iteration, not to mention the time it takes to actually "read the time".
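If you'd rather not printf, a volatile sink also keeps the compiler from discarding the work (illustrative sketch):
Code:
volatile float sink; // writes to a volatile can't be optimized away

int main(){
    float fl = 100.0f;
    for( int i = 0; i < 100000; i++ )
        fl *= 0.9f;
    sink = fl; // forces the multiplies to actually happen, even at -O2
    return 0;
}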
 
True, reading the time takes time, but it takes the same amount of time for both tests. Besides, I thought 20 multiplications would be enough to avoid the dwarfing.
 
I decided to have a go at it myself.

The following code (using the same fixed point library as above, but SDL instead of minilib):

Code:
#include "ff.cpp"
#include "SDL.h"

#define REPETITIONS 10000000

#define BEGIN  begin = SDL_GetTicks(); for (int i = REPETITIONS; i; i--) {
#define END(x) } end = SDL_GetTicks(); printf("%-35s: %.2f operations per second\n", x, (REPETITIONS * 1000.0f) / float(end - begin)); // 1000.0f: REPETITIONS*1000 would overflow a 32-bit int

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_TIMER);
    
    Uint32 begin, end;
    
    // Time floats
    f32 flt = 0.1f;
    BEGIN
      flt *= 1.1f;
    END("floating point muls")
    
    // Time fixpoint with ints
    F32 ff; ff = 0.1f;
    BEGIN
      ff *= 2;
    END("fixed point with int muls")
    
    // Time fixpoints with fixpoints
    ff = 0.1f;
    F32 mul; mul = 1.1f;
    BEGIN
      ff *= mul;
    END("fixed point with fixed point muls");
    
    printf("\n\nGarbage: %f %f\n", flt, float(ff));
    
    return 0;
}

Produces the following results:

Without optimization:

Code:
floating point muls                : 514059.56 operations per second
fixed point with int muls          : 514247.06 operations per second
fixed point with fixed point muls  : 280498.38 operations per second


Garbage: inf 18451.533203

With optimization (-fexpensive-optimizations -O3):

Code:
floating point muls                : 507035.38 operations per second
fixed point with int muls          : 6980522.00 operations per second
fixed point with fixed point muls  : 2136462.75 operations per second


Garbage: inf 18451.533203

So this seems to be the verdict:
  • Without optimization, fixpoint math is at best as fast as float math.
  • With a fair amount of optimization, fixpoint math is at least 4 times as fast as float math.
Disclaimer: these tests are not exhaustive. Addition, subtraction and division are not tested. Not all data types are tried (double, single, different types of ints). No distinction is made between constant and variable multiplications. And I can't figure out why the float reaches infinity and the fixed point value doesn't, meaning I could have a bug in my code. Depend on these results at your own risk.
 
RiX0R posted on Mar 1 2006 at 10:10 PM said:
  • With a fair amount of optimization, fixpoint math is at least 4 times as fast as float math.

The MUL opcode on the ARM920T can take from 1 to 7 machine cycles (as stated here). If one of the arguments is constant, some tricks with shifting/adding could be used instead (but I doubt it's worth the effort to replace a simple MUL).
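For example, x*10 can be done as shifts and adds - the kind of strength reduction the compiler usually applies by itself for constant multipliers:
Code:
static inline int mul10( int x ){
    return (x << 3) + (x << 1); // 8x + 2x = 10x
}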

Most other ALU opcodes take just 1 machine cycle, except when a shifted operand is used (add 1 cycle for that).

The performance will also be strongly affected by loads/stores, of course (ldr/str give around 50MB/s of bandwidth). Assume 7 cycles per MUL, with the two source operands loaded and the result stored to memory - that's 3 accesses of 4 bytes each per MUL:
(200MHz / 7.0 cycles) * (3 * 4 bytes) = 342.85MB/s

So it's clear that the performance might be limited by load/store bandwidth.

So the assumption that floating point is as fast as integer maths isn't technically true, though it might be in practice when memory must be accessed.
 
RiX0R posted on Mar 1 2006 at 08:10 PM said:
And I can't figure out why the float reaches infinity and the fixed point value doesn't, meaning I could have a bug in my code. Depend on these results at your own risk.

The float is going to reach infinity because it is overflowing. I have not tested to see what FLT_MAX is on the ARM architecture of the GP2x, but on my PC it is 3.40282e38. That would mean that it would reach infinity at the 956th loop. Meanwhile, I'm guessing that the implementation of fixed point you're using here is not checking for overflows, or is outright ignoring them and continuing on its merry way.

I am no expert on exactly how these low-level functions are implemented, but I would think there would be a quick check to see if either of the operands is infinity before actually doing the multiplication. This means that any execution of the loop beyond the overflow point would take significantly less time, while the fixed point implementation is always performing the full multiplication. Thus, I would not trust these numbers just yet, as the floating point operation timings are not accurate; you currently have the time to do ~955 multiplications and ~9999045 overflow checks, while fixed point is doing a full 10000000 multiplications.

Granted, this is assuming that floating point multiplication checks for infinity both before AND after actually doing the multiplication - an assumption I honestly don't know whether I can safely make - but if I'm right, your numbers could be significantly off, in favor of floating point.

If anyone has ever taken the time to crawl through ARM implementations of these arithmetic functions and can comment on when the overflow checks happen, it would be much appreciated. I'll try doing some alternate benchmarking next chance I get, but that might not be until the weekend.
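For the record, here's the quick back-of-the-envelope calculation behind that 956 figure (standard C maths):
Code:
#include <math.h>
#include <float.h>
#include <stdio.h>

int main(){
    // 0.1 * 1.1^n overflows a float once n > log(FLT_MAX / 0.1) / log(1.1)
    double n = log( FLT_MAX / 0.1 ) / log( 1.1 );
    printf( "float hits infinity after ~%.0f iterations\n", ceil( n ) ); // ~956
    return 0;
}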
 
I just updated the F32 class (there were some bad casts in there and some multiplications had no 64-bit intermediate) and now it works well with MSVC, but something is apparently off in GCC.
 
BattleCattle posted on Mar 1 2006 at 11:40 PM said:
RiX0R posted on Mar 1 2006 at 08:10 PM said:
And I can't figure out why the float reaches infinity and the fixed point value doesn't, meaning I could have a bug in my code. Depend on these results at your own risk.

The float is going to reach infinity because it is overflowing. I have not tested to see what FLT_MAX is on the ARM architecture of the GP2x, but on my PC it is 3.40282e38. That would mean that it would reach infinity at the 956th loop. Meanwhile, I'm guessing that the implementation of fixed point you're using here is not checking for overflows, or is outright ignoring them and continuing on its merry way.

I am no expert on exactly how these low-level functions are implemented, but I would think there would be a quick check to see if either of the operands is infinity before actually doing the multiplication. This means that any execution of the loop beyond the overflow point would take significantly less time, while the fixed point implementation is always performing the full multiplication. Thus, I would not trust these numbers just yet, as the floating point operation timings are not accurate; you currently have the time to do ~955 multiplications and ~9999045 overflow checks, while fixed point is doing a full 10000000 multiplications.

Granted, this is assuming that floating point multiplication checks for infinity both before AND after actually doing the multiplication - an assumption I honestly don't know whether I can safely make - but if I'm right, your numbers could be significantly off, in favor of floating point.

If anyone has ever taken the time to crawl through ARM implementations of these arithmetic functions and can comment on when the overflow checks happen, it would be much appreciated. I'll try doing some alternate benchmarking next chance I get, but that might not be until the weekend.

Quite right. I don't know the details of the architecture, but I DO know that if you're hitting infinity with floats then the hardware isn't doing much work after that point. And hitting infinity with floating-point numbers is common on any platform.
 