NEON-ized math.h-like Library


Adventus

I'm off uni for a few weeks, so I thought I might look into beginning development of a math library, similar to the standard C library, utilising the NEON coprocessor. Anyway, here are some questions / thoughts:

1. Has anyone done anything similar for NEON and are willing to share?
2. Is it worthwhile vectorizing? Apart from being able to do ADD and MUL in parallel, the NEON unit appears to effectively be a scalar processor... this has a large impact on how I evaluate the polynomials.
3. Are there any references for writing efficient algorithms for implementing trig / pow / log functions? I guess the GCC source code might be a good place to start. At the moment I've just written some Taylor expansions in C with minimax coefficients.
4. Is inline asm the way to go?
 
1. I've used NEON but only integer. It shouldn't be very different for floating point though.
2. You're mistaken about NEON: it's 2-way floating-point SIMD with 4-way registers/dispatch. So you can do two adds, two muls, etc. in parallel, and the instructions dispatch 4 in parallel (so if you use the 4-way versions you'll have a two-cycle issue rate). I would suggest using the 4-way instructions if possible, because then the code can be ported to an ARM core with 4-way NEON (like Qualcomm's Snapdragon or the upcoming Cortex-A9s) w/o modification and get full benefit, and it takes less instruction space.
3. Usually these functions will involve look-up tables, with the size of the LUT being a function of how much precision and speed you want (there's a small sketch of the idea below the list). As you said, looking at libc (not GCC per se) wouldn't be a bad place to start.
4. It depends on what you want to do. If you're writing whole functions in ASM then you're better off avoiding inline ASM; if you want to inline ASM inside functions then you might get better efficiency this way. If you're relying on functions that only take a few cycles then the function call overhead will hurt, but you should try having functions that do more aggregate operations anyway, since you'll probably be able to hand-optimize the NEON code better.
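
On point 3, a minimal sketch of the table approach (the table size, names, and the linear interpolation are mine, not taken from any particular libm):

Code:
#include <math.h>

#define LUT_SIZE 256

/* quarter-wave table: sin(x) for x in [0, pi/2] */
static float sin_lut[LUT_SIZE + 1];

void lut_init(void)
{
	for (int i = 0; i <= LUT_SIZE; i++)
		sin_lut[i] = sinf((float)i * (float)M_PI_2 / LUT_SIZE);
}

/* valid for 0 <= x < pi/2; range reduction gets you here first */
float lut_sinf(float x)
{
	float idx = x * (LUT_SIZE / (float)M_PI_2);
	int   i   = (int)idx;
	float f   = idx - (float)i;   /* fraction for linear interpolation */
	return sin_lut[i] + f * (sin_lut[i + 1] - sin_lut[i]);
}

A bigger table (or better interpolation) buys precision at the cost of cache footprint, which is exactly the trade-off I mean.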
 
That's the second time I read this claim that Cortex-A9 has a 4-way NEON. Where did you get that from? It's simply not true.
Last time it came up you said you couldn't say, so really this was all an elaborate trick to get you to admit it ;D

From the A9 whitepaper:

"The MPE extends the Cortex-A9 processor’s
floating-point unit (FPU) to provide a quad-MAC
and additional 64-bit and 128-bit register set
supporting a rich set of SIMD operations over 8,
16 and 32-bit integer and 32bit Floating-Point
data quantities every cycle."

Quad-MAC certainly sounds 4-way.
 
Hmm, had a quick look in glibc and most of the math functions call another function like __ieee754_sinh(x)... I haven't been able to locate the C files for these functions.

Anyway, I wrote a sinf algorithm:

Code:
#include <math.h>
#include <stdio.h>

const float __sinf_range = M_PI / 2.0;
const float __sinf_invrange = 2.0 / M_PI;

const float __sinf_lut[4] = {
	-0.00018542f,
	+0.00831430f,
	-0.16666000f,
	+1.00000000f,
};

float neon_sinf(const float x){

	float ax = fabsf(x);

	//Range Reduction:
	int k = (int) (ax * __sinf_invrange);
	ax = ax - (((float)k) * __sinf_range);
	if (k & 1) ax = M_PI_2 - ax;

	//Taylor Polynomial (Horner)
	float xx = ax * ax;
	float r = __sinf_lut[0];
	r = r * xx + __sinf_lut[1];
	r = r * xx + __sinf_lut[2];
	r = r * xx + __sinf_lut[3];
	r = r * ax;

	//Test Quadrant
	if (((k & 2) > 0) ^ (x < 0)) r = -r;

#if DEBUG
	printf("k = %i \t r = %f \t sin(x) = %f \t dr = %f \n", k, r, sinf(x), r - sinf(x));
#endif

	return r;
}


I get a maximum error relative to the built-in sinf of around 0.000006. I presume this can be optimized a bit more (and I'll probably find some better coefficients somewhere). Since the NEON unit is vectorised, I'll use Estrin's method instead of Horner for the polynomial evaluation. Should / can I do the range reduction in the NEON integer pipelines? Presumably I want to minimize NEON -> ARM transfers?
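
For what it's worth, that error figure comes from a brute-force scan against the built-in sinf, something like this (just a sketch; it assumes the neon_sinf above):

Code:
#include <math.h>
#include <stdio.h>

float neon_sinf(float x);	/* the function above */

int main(void)
{
	float maxerr = 0.0f;
	/* scan a few periods either side of zero */
	for (float x = -10.0f; x < 10.0f; x += 0.0001f) {
		float e = fabsf(neon_sinf(x) - sinf(x));
		if (e > maxerr) maxerr = e;
	}
	printf("max abs error = %g\n", maxerr);
	return 0;
}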

QUOTE
Last time it came up you said you couldn't say, so really this was all an elaborate trick to get you to admit it ;D
As was my question number 2. :) Good to hear about the dual MUL + ADD pipelines; I had reread some of the documentation and confused myself.

QUOTE
QUOTE
Wouldn't NEON compiler intrinsics render such a set of functions redundant?
I think that Adventus is aiming for something more/higher level than just accessing the basic functionality of the NEON instructions.
Correct. I'm implementing the higher-level math functions that are not implemented in hardware (so pow / sin / cos / exp / log / etc). Luckily there's already a hardware SQRTF function.
 
Should / can i do the range reduction in the NEON integer pipelines, presumably i want to minimize NEON -> ARM transfers ?
These transfers absolutely must be avoided wherever possible. In fact, the overhead is so bad that you might be negating a lot of the performance just by returning the NEON-computed values in an ARM register :/
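
In other words, the friendly pattern is memory -> NEON -> memory. A rough intrinsics sketch of what I mean (the function name and the throwaway two-term polynomial are placeholders, not real library code):

Code:
#include <arm_neon.h>

/* process 4 floats per iteration; results stream straight back to
   memory, never through an ARM core register (tail handling omitted) */
void neon_sin_batch(float *dst, const float *src, int n)
{
	for (int i = 0; i + 4 <= n; i += 4) {
		float32x4_t x  = vld1q_f32(src + i);
		float32x4_t xx = vmulq_f32(x, x);
		/* stand-in polynomial: x - x^3/6 */
		float32x4_t r  = vmlsq_f32(x, vmulq_f32(x, xx),
		                           vdupq_n_f32(1.0f / 6.0f));
		vst1q_f32(dst + i, r);
	}
}

Each result is computed and stored without ever touching an ARM core register, so the NEON -> ARM penalty never shows up.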
 
Does anyone know how the use of NEON compares to other (more mainstream) consoles? On the PS2, for example, there's a co-processor (VU0) that can be accessed from the main CPU, and my experience is that if *ALL* floating-point vector/matrix functions are performed on the co-processor, using inlined assembly, there is quite a gain to be had. One of the reasons is that it is able to keep the values in the co-processor's registers. Even in the worst case, when moving data to/from the co-processor/main CPU regularly, it isn't really a massive problem, as there is a direct link between the two.

However, other platforms, such as PS3/360, require values to be written/read to memory a lot of the time to get them between the vectorised registers and the normal general-purpose CPU ones, and when the CPU is clocked hugely higher than the memory, it ends up being completely pointless transferring data to the vectorised registers unless you can guarantee it is going to stay there for a good number of calculations before being transferred back. Just writing general game code that does a bunch of vector maths operations just about always ends up running slower when using the vectorised registers, and quite considerable work has to be put in to write code that actually gets a benefit.

Steve
 
As the name would suggest, VU0 is probably a vector unit.
Seeing how many matrix and vector multiplications are needed in modern games, I would just assume that it is smarter to use the vector unit.
But NEON seems to be a bit more general-purpose, and therefore it might be wiser to carefully decide what you want to use it for.

However: I haven't looked into this yet, so you'd better not listen to me :p
 
Because of how the pipeline works, transferring data from the CPU registers to the NEON registers is fast but going in the opposite direction has a huge penalty. For that reason it's best to run NEON ops on large batches of data where it can stream the results directly back to memory. That might have some bearing on what kinds of functions Adventus ends up doing.
 
This feels wrong. When working with NEON instructions you want to operate on a stream of data. So you might want to calculate the sine of a vector of floats.

Code:
float32x2_t neon_sin_f32( float32x2_t x );


I see two problems with your current sine function:

1) The look-up table cannot be loaded onto the NEON.
2) It uses functions like fabsf(), which are executed on the CPU, not the NEON.
 
QUOTE
I see two problems with your current sine function:

1) The look-up table cannot be loaded onto the NEON.
2) It uses functions like fabsf(), which are executed on the CPU, not the NEON.

1) Why? NEON can load a vector of 4 components into a Q register in one instruction.
2) And? fabsf() is equivalent to the operation "y.bits = x.bits & 0x7FFFFFFF". NEON can treat its registers sometimes as a float vector and sometimes as an integer vector, so it can do this operation very fast.
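
In C terms (the helper names are mine, just to show both forms):

Code:
#include <stdint.h>
#include <arm_neon.h>

/* the scalar bit trick */
static inline float fabsf_bits(float x)
{
	union { float f; uint32_t i; } u = { x };
	u.i &= 0x7FFFFFFFu;	/* clear the sign bit */
	return u.f;
}

/* the NEON equivalent: vabsq_f32 compiles to a single VABS.F32 */
static inline float32x4_t fabs4(float32x4_t x)
{
	return vabsq_f32(x);
}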
 
QUOTE
1) Why? NEON can load a vector of 4 components into a Q register in one instruction.
2) And? fabsf() is equivalent to the operation "y.bits = x.bits & 0x7FFFFFFF". NEON can treat its registers sometimes as a float vector and sometimes as an integer vector, so it can do this operation very fast.
I stand corrected, nice :)
 
float xx = ax * ax;
float r = __sinf_lut[0];
r = r * xx + __sinf_lut[1];
r = r * xx + __sinf_lut[2];
r = r * xx + __sinf_lut[3];
r = r * ax;
might look like this in pseudo-NEON:

Code:
Q1 = { ax, ax, ax, ax }

Q0 = { -0.00018542f, +0.00831430f, -0.16666000f, +1.00000000f }

Q2 = Q1 * Q1    // { ax*ax, ax*ax, ax*ax, ax*ax }

Q0 = Q0 * Q1    // { -0.00018542f * ax, +0.00831430f * ax, -0.16666000f * ax, +1.00000000f * ax }

Q3 = Q2 * Q2    // { ax^4, ax^4, ax^4, ax^4 }

Q4 = Q3 * Q2    // { ax^6, ax^6, ax^6, ax^6 } (taken before Q2 is overwritten)

Q2 = Q2 * Q0[2] // { -0.16666000f * ax^3, ... }

Q3 = Q3 * Q0[1] // { +0.00831430f * ax^5, ... }

Q4 = Q4 * Q0[0] // { -0.00018542f * ax^7, ... }

Q1 = Q2 + Q3    // { -0.16666000f * ax^3 + 0.00831430f * ax^5, ... }
Q2 = Q0 + Q4    // lane 3: ax * 1.00000000f - 0.00018542f * ax^7
Q0 = Q1 + Q2    // lane 3: ax * 1.00000000f - 0.16666000f * ax^3 + 0.00831430f * ax^5 - 0.00018542f * ax^7

r = Q0[3]
 
QUOTE
well at least, I agree with you if you're saying the posted algorithm may not be optimal for NEON.
Yea, that's because in that code I'm using the Horner method to evaluate the polynomial, which doesn't vectorise very well; the alternative is Estrin's method:

z = ax * ax;
a = __sinf_lut[3] + __sinf_lut[2] * z;
b = __sinf_lut[1] + __sinf_lut[0] * z;
c = 0 + z * z;
d = a + b * c;
r = d * ax;

We can evaluate a, b & c in parallel. I think we can do 2x FMAC in a single cycle. I guess it would look something like this:

Code:
Q0 = {ax, ax, ax, ax}
Q1 = Q0 * Q0 // Q1 = {ax ^ 2, ax ^ 2, ax ^ 2, ax ^ 2}
Q2 = { +0.99999661, +0.00830636, 0, 0} // Q2 = {p1, p5, 0, 0}
Q3 = { -0.16664831, -0.00018365, 0, 0} // Q3 = {p3, p7, 0, 0}
Q3[2] = Q1[0] // Q3 = {p3, p7, ax ^ 2, 0}
Q2 = Q2 + Q3 * Q1 // Q2 = {p1 + p3 * (ax ^ 2), p5 + p7 * (ax ^ 2), ax ^ 4, 0}
Q2[0] = Q2[0] + Q2[1] * Q2[2] // Q2[0] = p1 + p3 * (ax ^ 2) + (p5 + p7 * (ax ^ 2)) * (ax ^ 4)
Q2[0] = Q2[0] * Q0[0] // Q2[0] = p1 * ax + p3 * (ax ^ 3) + p5 * (ax ^ 5) + p7 * (ax ^ 7)
r = Q2[0]


Edit: BTW new coefficients get me within 0.000001 of sinf().
 
[quote Adventus]We can evaluate a, b & c in parallel. I think we can do 2x FMAC in a single cycle. I guess it would look something like this: (Estrin pseudo-NEON above)[/quote]

Please take into account the fact that NEON is a vector processor and not a scalar processor, so an operation like Qi[x] = Qj[y] ¤ Qk[z] is not possible (unless I'm wrong).

Code:
MOVW R1, %lo(__sinf_lut)
VDUP.F32 Q0, D0[0]	   // Q0 = { D0, D1 } = { ax^1, ax^1, ax^1, ax^1 } 
MOVT R1, %hi(__sinf_lut) // R1 = __sinf_lut
VMUL.F32 D2, D1, D1	  // D2 = D1 * D1 // { ax^2, ax^2 }
VLD1.32 {D4, D5}, [R1]   // Q2 = { D4, D5 } = { p1, p5, p3, p7 } <-- note that you only need to reorder __sinf_lut constants in compile-time
VMUL.F32 D3, D2, D2	  // D3 = D2 * D2 // { ax^4, ax^4 }
VMUL.F32 Q0, Q2, D0[0]   // Q0 = { D0, D1 } = { p1*ax^1, p5*ax^1, p3*ax^1, p7*ax^1 }
VMLA.F32 D0, D2, D1	  // D0 = { p1*ax^1 + p3*ax^3, p5*ax^1 + p7*ax^3 }
VMLA.F32 D0, D3, D0[1]   // D0[0] = p1*ax^1 + p3*ax^3 + p5*ax^5 + p7*ax^7

EDIT: VMLA.F32 D0, D2, D5 should be VMLA.F32 D0, D2, D1
 
QUOTE
Please take into account the fact that NEON is a vector processor and not a scalar processor, so an operation like Qi[x] = Qj[y] ¤ Qk[z] is not possible (unless I'm wrong).
Whoops, sorry about that: I was thinking there was a pairwise multiply... but there's only a pairwise add. Clearly the VFP could do it, but that would be slow. BTW, I think there is a slight error in your code: the 2nd-last line should read VMLA.F32 D0, D2, D1.

It appears the range reduction and quadrant checking are much more significant than the polynomial evaluation. Using some bit manipulation I managed to get the quadrant checking down to a branchless 10 instructions; however, I haven't been able to parallelize anything yet. From what I've read, it seems avoiding branches in NEON / VFP is crucial: every time you do a comparison you have to stall and send the flags to the ARM flag register....

Anyway, here's what I've got so far. Please understand that I have never written any ARM assembly before and I can't test it yet, so I'm bound to have made plenty of mistakes:

C version:
Code:
#include <math.h>

const float __sinf_range = M_PI / 2.0;
const float __sinf_invrange = 2.0 / M_PI;

const float __sinf_lut[4] = {
	-0.00018365f,	//p7
	-0.16664831f,	//p3
	+0.00830636f,	//p5
	+0.99999661f,	//p1
};

float sinf_c(float x){

	union {
		float 	f;
		int 	i;
	} ax;
	
	float r, a, b, xx;
	int k, t;
	
	ax.f = fabsf(x);

	//Range Reduction:
	k = (int) (ax.f * __sinf_invrange);	
	ax.f = ax.f - (((float)k) * __sinf_range);

	//Test Quadrant
	t = k & 1;
	ax.f = ax.f - t * __sinf_range;
	k = (k&2) >> 1;
	ax.i = ax.i ^ ((t ^ k ^ (x < 0)) << 31);
		
	//Taylor Polynomial (Estrins)
	xx = ax.f * ax.f;	
	a = (__sinf_lut[0] * ax.f) * xx + (__sinf_lut[2] * ax.f);
	b = (__sinf_lut[1] * ax.f) * xx + (__sinf_lut[3] * ax.f);
	xx = xx * xx;
	r = b + a * xx;
	
	return r;
}

NEON version:
Code:
float sinf_neon(float x)
{
	float r = 0;
	asm volatile (
	"vdup.32		d0, %1				\n\t"	//d0 = {x, x}
	"vabs.f32		d1, d0				\n\t"	//d1 = {ax, ax}

	//Range Reduction:
	"vdup.32		d3, %3				\n\t"	//d3 = {__sinf_invrange, __sinf_invrange}
	"vmul.f32		d2, d1, d3			\n\t"	//d2 = d1 * d3
	"vcvt.u32.f32	d2, d2				\n\t"	//d2 = (int) d2
	"vcvt.f32.u32	d4, d2				\n\t"	//d4 = (float) d2
	"vdup.32		d5, %2				\n\t"	//d5 = {__sinf_range, __sinf_range}
	"vmls.f32		d1, d4, d5			\n\t"	//d1 = d1 - d4 * d5

	//Checking Quadrant:
	//ax = ax - (k & 1) * M_PI_2
	"vmov.i32		d6, #1				\n\t"	//d6 = {1, 1} (VAND has no immediate form)
	"vmov.i32		d7, #2				\n\t"	//d7 = {2, 2}
	"vand.i32		d3, d2, d6			\n\t"	//d3 = d2 & 0x1
	"vcvt.f32.u32	d4, d3				\n\t"	//d4 = (float) d3
	"vmls.f32		d1, d5, d4			\n\t"	//d1 = d1 - d5 * d4

	//ax = ax ^ (((k & 1) ^ ((k & 2) >> 1) ^ (x < 0)) << 31)
	"vand.i32		d4, d2, d7			\n\t"	//d4 = d2 & 0x2
	"vshr.u32		d4, d4, #1			\n\t"	//d4 = d4 >> 1
	"vclt.f32		d5, d0, #0			\n\t"	//d5 = (d0 < 0.0), all-ones mask; only bit 0 survives the shift below
	"veor.i32		d5, d5, d4			\n\t"	//d5 = d5 ^ d4
	"veor.i32		d5, d5, d3			\n\t"	//d5 = d5 ^ d3
	"vshl.i32		d5, d5, #31			\n\t"	//d5 = d5 << 31
	"veor.i32		d1, d1, d5			\n\t"	//d1 = d1 ^ d5

	//polynomial:
	"vmul.f32		d2, d1, d1			\n\t"	//d2 = d1*d1 = {x^2, x^2}
	"vld1.32		{d4, d5}, [%4]		\n\t"	//d4 = {p7, p3}, d5 = {p5, p1}
	"vmul.f32		d3, d2, d2			\n\t"	//d3 = d2*d2 = {x^4, x^4}
	"vmul.f32		q0, q2, d1[0]		\n\t"	//q0 = q2 * d1[0] = {p7x, p3x, p5x, p1x}
	"vmla.f32		d1, d0, d2[0]		\n\t"	//d1 = d1 + d0*d2[0] = {p5x + p7x^3, p1x + p3x^3}
	"vmla.f32		d1, d3, d1[0]		\n\t"	//d1 = d1 + d3*d1[0] = {..., p1x + p3x^3 + p5x^5 + p7x^7}
	"vmov			%0, s3				\n\t"	//r = s3 = d1[1]

	: "=r"(r)
	: "r"(x), "r"(__sinf_range), "r"(__sinf_invrange), "r"(__sinf_lut)
	: "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"
	);

	return r;
}

PS: I'm liking this syntax highlighting.
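
PPS: For comparison with the inline asm, here's roughly the same algorithm with GCC's arm_neon.h intrinsics, four sines at a time (equally untested; vsinf4 is just a placeholder name, and the table is the same {p7, p3, p5, p1} as above):

Code:
#include <math.h>
#include <arm_neon.h>

//same table as the C version: {p7, p3, p5, p1}
static const float sinf_lut4[4] = {
	-0.00018365f, -0.16664831f, +0.00830636f, +0.99999661f,
};

float32x4_t vsinf4(float32x4_t x)
{
	const float32x4_t inv = vdupq_n_f32((float)(2.0 / M_PI));
	const float32x4_t rng = vdupq_n_f32((float)(M_PI / 2.0));

	float32x4_t ax = vabsq_f32(x);

	//Range Reduction: k = (int)(ax * 2/pi); ax -= k * pi/2
	uint32x4_t k = vcvtq_u32_f32(vmulq_f32(ax, inv));
	ax = vmlsq_f32(ax, vcvtq_f32_u32(k), rng);

	//Quadrant: same bit manipulation as the C version
	//(uses the sign bit of x instead of (x < 0); identical apart from -0)
	uint32x4_t t = vandq_u32(k, vdupq_n_u32(1));
	ax = vmlsq_f32(ax, vcvtq_f32_u32(t), rng);
	uint32x4_t s = veorq_u32(t, vshrq_n_u32(vandq_u32(k, vdupq_n_u32(2)), 1));
	s = veorq_u32(s, vshrq_n_u32(vreinterpretq_u32_f32(x), 31));
	ax = vreinterpretq_f32_u32(veorq_u32(vreinterpretq_u32_f32(ax),
	                                     vshlq_n_u32(s, 31)));

	//Estrin: r = (p1*ax + p3*ax*z) + (p5*ax + p7*ax*z) * z^2, z = ax^2
	float32x4_t z = vmulq_f32(ax, ax);
	float32x4_t a = vmlaq_f32(vmulq_n_f32(ax, sinf_lut4[2]),
	                          vmulq_n_f32(ax, sinf_lut4[0]), z);
	float32x4_t b = vmlaq_f32(vmulq_n_f32(ax, sinf_lut4[3]),
	                          vmulq_n_f32(ax, sinf_lut4[1]), z);
	return vmlaq_f32(b, a, vmulq_f32(z, z));
}

No idea yet whether GCC schedules this as well as the hand-written asm.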
 
Adventus said:
It appears the range reduction and quadrant checking are much more significant than the polynomial evaluation. Using some bit manipulation I managed to get the quadrant checking down to a branchless 10 instructions; however, I haven't been able to parallelize anything yet. From what I've read, it seems avoiding branches in NEON / VFP is crucial: every time you do a comparison you have to stall and send the flags to the ARM flag register....

Not an expert - but can't you use the VTBL/VTBX instructions to avoid branches? It obviously depends on exactly what you are trying to do - and it won't work if the aim is not to write anything at all, but if the choice is "pick A if < 0 or B otherwise", then I think you can do it all in NEON, although the scheduling might be a bit of a pain.
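
For the "pick A if < 0, else B" case specifically, NEON's VBSL does the select directly - a sketch with intrinsics (the helper name is mine):

Code:
#include <arm_neon.h>

//branchless per-lane select: a where x < 0, b elsewhere
static inline float32x4_t select_neg4(float32x4_t x, float32x4_t a, float32x4_t b)
{
	uint32x4_t mask = vcltq_f32(x, vdupq_n_f32(0.0f));	//all-ones lanes where x < 0
	return vbslq_f32(mask, a, b);						//bitwise mask ? a : b
}

Scheduling aside, that keeps the comparison result in NEON, so the flags never have to cross over to the ARM side.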
 