Efficiency Cornered


darkblu

this thread is about architectural efficiency. efficiency measured, not efficiency proclaimed (this is what the thread title signifies).

recently, in a local thread about neon, i blabbered 'hmm, i wonder how the 603e would fare in comparison to the cortex A8 in a cache- and fp-heavy task?', to which Exophase offered his guess, 'most likely not favorably, as the g2 would take all kinds of hits from its inadequate caches' [ed: quoted in principle, not verbatim], to which we both agreed, and moved on with the actual topic at hand.

well, not quite. we are coders, and questions like that haunt us. no matter how much we scrutinize specs and documentation, we always know at the backs of our minds that the picture is larger, and devils populate all its painted nooks and crannies. lots and lots of devils.

curiosity is what drives us in most of our intellectual endeavors. it is our meta-intelligence - the lever by which we, developers, can pull wild ideas into existence: can this spectacular rendition be made to run at 30fps on this handheld? will my daring emulator project ever be able to run at playable framerates? will my gargantuan computational job finish by next friday? - all these are questions to which a little curiosity channelled into practical research can, if not provide the definitive answers, then at least give us an educated guess in advance.

so, in this thread i, for one, plan to throw different computational 'test cases' at various architectures i care about (essentially lean, mean & power-efficient ones - the pandora kind), publish the results, and debate what works and what does not on those architectures. i also hope fellow coders will join in and an interesting discussion will form. last but not least, i expect plain tech-savvy readers to voice intelligent questions, as these boards hide more inquisitive minds than what might first meet the eye. /wipes glasses

please note, this is not intended as some sort of computational coliseum - there won't be champions proclaimed or losers booed.

also, i will try to put the emphasis on practical tasks, particularly ones where a compiler is involved, as in this day and age i value compilers a lot. for the record, i wrote my first few 3d software rasterizers in pure assembly, but that was really long ago and i was stupid.. likewise, compilers were not particularly smart back then.


ok, let me begin this curiosity trip with a return to that original question: how would a SoC-class powerpc chip - the 603e (also widely known as G2) - fare in an fp-intensive task? it just so happens that i have an efika board at hand, hosting a freescale MPC5200B SoC - a 400MHz 603e paired with an integrated DDR 133MHz controller, a flexible DMA engine and a bunch of other useful bits, enclosed in a tight, sexy package not much larger than the area of my hand. think a pandora and 1/2.

for the purpose, i'll throw at it my 4x4 matrix multiplication routine - or rather, all several variations of it - and see what kind of IPC (instructions-per-clock) an old man can squeeze out of this puppy.. ok, that did not come out right, but never mind, bear with me.

the test code
Code:
#include "rendPlatform.hpp"
#include "rendVect.hpp"

#include <assert.h> // assert() is used below
#include <math.h>   // M_PI_2 is used below
#include <stdio.h>

static unsigned kRepetitions = 10000000;

const unsigned kAlignBoundary = 16;
const unsigned kAlignPad = kAlignBoundary - 1;

////////////////////////////////////////////////////////////////////////////////
// testee routine
////////////////////////////////////////////////////////////////////////////////

rend::matx4 a __attribute__ ((aligned (16)));
rend::matx4 b __attribute__ ((aligned (16)));

#ifdef  __DEFAULT__

rend::matx4 c __attribute__ ((aligned (16)));

#else

float c[4][4] __attribute__ ((aligned (16)));

inline static void
mmul(
	float (&c)[4][4],
	const rend::matx4 &a,
	const rend::matx4 &b)
{
	for (unsigned i = 0; i < 4; i++)
	{
		register float ai[4][4];

		for (unsigned j = 0; j < 4; j++)
			ai[j][0] = ai[j][1] = ai[j][2] = ai[j][3] = a[i][j];

		for (unsigned k = 0; k < 4; k++)
			c[i][k] = ai[0][k] * b[0][k];

		for (unsigned j = 1; j < 4; j++) 
			for (unsigned k = 0; k < 4; k++) 
				c[i][k] += ai[j][k] * b[j][k];
	}   
}

#endif // __DEFAULT__

int main(int argc, char * const argv[])
{
	double freq = rend::timer_freq();

	printf("timer frequency %.3f MHz\n", freq / 1e6);

	a.rotate(M_PI_2, 0.f, 0.f, 1.f);
	b.translate(0.f, 0.f, 8.f);

	unsigned ndz; // non-deterministic zero
	printf("enter a zero: ");
	if (1 != scanf("%u", &ndz)) // user expected to punch in a zero here
		return -1;

	const unsigned ndf = ndz ? 1 : 0; // non-deterministic factor: it is meant to be zero, but the compiler does not know that

	assert(kRepetitions);
	unsigned r = kRepetitions;

	const unsigned long long t0 = rend::timer();

	do
	{

#ifdef  __DEFAULT__

		(*(&c + ndf * r)).mul(*(&a + ndf * r), *(&b + ndf * r));

#else

		mmul(*(&c + ndf * r), *(&a + ndf * r), *(&b + ndf * r));

#endif
	}
	while (--r);

	const unsigned long long t1 = rend::timer();
	const unsigned long long dt = t1 - t0;

	const double sec = double(dt) / freq;

	printf("time: %f sec, repetitions: %d\n", sec, kRepetitions);

	printf(
		"%f %f %f %f\n"
		"%f %f %f %f\n"
		"%f %f %f %f\n"
		"%f %f %f %f\n",
		c[0][0], c[0][1], c[0][2], c[0][3],
		c[1][0], c[1][1], c[1][2], c[1][3],
		c[2][0], c[2][1], c[2][2], c[2][3],
		c[3][0], c[3][1], c[3][2], c[3][3]);

	return r;
}

what the above does is obvious: it performs 10^7 multiplications of the same two 4x4 matrices, in a way the compiler cannot optimize out. the time taken is measured and reported, together with the resulting matrix (for verification). the 'human touch' in there is meant to feed in a constant the compiler cannot know in advance, and is written in this manner just for giggles, not because a computer system lacks steady sources of zeros.
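
(as an aside, a more mundane way to blind the optimizer - one of those 'steady sources of zeros' - is a volatile read; a minimal sketch of that idiom, hypothetical and not what the test above uses:)

Code:
#include <stdio.h>

// a compiler-opaque zero: the optimizer must assume 'zero' may change at any
// time, so 'ndf' below cannot be constant-folded away
volatile unsigned zero = 0;

int main()
{
	const unsigned ndf = zero ? 1 : 0; // always 0 at runtime, unknown at compile time

	printf("ndf = %u\n", ndf);
	// ... the timed loop would then use ndf exactly as the test above does ...
	return 0;
}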

Code:
// relevant external routines quoted next ///////////////////////////////////////

template < unsigned DIMENSION_T, class SUBCLASS_T >
template < class ANYCLASS0_T, class ANYCLASS1_T >
SUBCLASS_T& protomatx< DIMENSION_T, SUBCLASS_T >::mul(const protomatx< DIMENSION_T, ANYCLASS0_T >& mat0,
													  const protomatx< DIMENSION_T, ANYCLASS1_T >& mat1)
{
#if defined(__MATX_MUL_V2__) && defined(__MADD__)

	for (unsigned i = 0; i < DIMENSION_T; i++)
	{
		register float swiz[DIMENSION_T][DIMENSION_T];

		for (unsigned j = 0; j < DIMENSION_T; j++)
			for (unsigned k = 0; k < DIMENSION_T; k++)
				swiz[j][k] = mat0[i][j];

		for (unsigned k = 0; k < DIMENSION_T; k++)
			m[i][k] = swiz[0][k] * mat1[0][k];

		for (unsigned j = 1; j < DIMENSION_T; j++)
			for (unsigned k = 0; k < DIMENSION_T; k++)
				m[i][k] += swiz[j][k] * mat1[j][k];
	}

#else

	for (unsigned i = 0; i < DIMENSION_T; i++)
		for (unsigned j = 0; j < DIMENSION_T; j++)
			m[i][j] = rend::dp< DIMENSION_T, 1, DIMENSION_T >(mat0[i], mat1[0] + j);

#endif

	return *this;
}


// a strided, size-agnostic vector dot-product routine used above //////////////////////////////////////////

template < unsigned DIMENSION_T,
		   unsigned STRIDE0_T,
		   unsigned STRIDE1_T >

float dp(const float * const v0,
		 const float * const v1)
{
	assert(DIMENSION_T > 0);
	assert(STRIDE0_T > 0 && STRIDE1_T > 0);

#ifdef __MADD__

	float r = v0[0] * v1[0];

	for (unsigned i = 1; i < DIMENSION_T; i++)
		r += v0[STRIDE0_T * i] * v1[STRIDE1_T * i];

	return r;

#else

	float r[DIMENSION_T];

	for (unsigned i = 0; i < DIMENSION_T; i++)
		r[i] = v0[STRIDE0_T * i] * v1[STRIDE1_T * i];

	for (unsigned i = 1; i < DIMENSION_T; i++)
		r[0] += r[i];

	return r[0];

#endif
}

the compiler used for this test: gcc (GCC) 4.2.1 (SUSE Linux)

Code:
$ gcc -x c++ -pipe -Wno-trigraphs -fno-exceptions -fno-rtti -D__ppc__ -mcpu=603e -mtune=603e -O3 -Winline -Wreturn-type -Wformat -Wunused-variable -Wuninitialized -Wunknown-pragmas -Wsign-compare -fmessage-length=0 -funroll-loops -ffast-math -fstrict-aliasing -fvisibility=hidden -fvisibility-inlines-hidden -fno-threadsafe-statics -D__DEFAULT__ -D__MADD__ -D__MATX_MUL_V2__ main.cpp -o main -lm

$ ./main 
timer frequency 33.000 MHz
enter a zero: 0
time: 11.618942 sec, repetitions: 10000000
-0.000000 -1.000000 0.000000 0.000000
1.000000 -0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 8.000000
0.000000 0.000000 0.000000 1.000000

^^^ templatized version of the 'testee routine', in this case differing from the original only in the initialization of the swizzled factors - here it is looped, whereas in the testee it is unrolled.


Code:
$ gcc -x c++ -pipe -Wno-trigraphs -fno-exceptions -fno-rtti -D__ppc__ -mcpu=603e -mtune=603e -O3 -Winline -Wreturn-type -Wformat -Wunused-variable -Wuninitialized -Wunknown-pragmas -Wsign-compare -fmessage-length=0 -funroll-loops -ffast-math -fstrict-aliasing -fvisibility=hidden -fvisibility-inlines-hidden -fno-threadsafe-statics -Dd__DEFAULT__ -D__MADD__ -D__MATX_MUL_V2__ main.cpp -o main -lm

$ ./main 
timer frequency 33.000 MHz
enter a zero: 0
time: 10.879514 sec, repetitions: 10000000
-0.000000 -1.000000 0.000000 0.000000
1.000000 -0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 8.000000
0.000000 0.000000 0.000000 1.000000

^^^ testee routine (note the deliberately mangled -Dd__DEFAULT__ in the command line above: it leaves __DEFAULT__ undefined, thereby selecting the local mmul; the same knock-out trick is used in the runs below)

Code:
$ gcc -x c++ -pipe -Wno-trigraphs -fno-exceptions -fno-rtti -D__ppc__ -mcpu=603e -mtune=603e -O3 -Winline -Wreturn-type -Wformat -Wunused-variable -Wuninitialized -Wunknown-pragmas -Wsign-compare -fmessage-length=0 -funroll-loops -ffast-math -fstrict-aliasing -fvisibility=hidden -fvisibility-inlines-hidden -fno-threadsafe-statics -D__DEFAULT__ -Dd__MADD__ -D__MATX_MUL_V2__ main.cpp -o main -lm

$ ./main 
timer frequency 33.000 MHz
enter a zero: 0
time: 7.486614 sec, repetitions: 10000000
-0.000000 -1.000000 0.000000 0.000000
1.000000 -0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 8.000000
0.000000 0.000000 0.000000 1.000000

^^^ 'default matrix mul routine', non-madd-intended version

Code:
$ gcc -x c++ -pipe -Wno-trigraphs -fno-exceptions -fno-rtti -D__ppc__ -mcpu=603e -mtune=603e -O3 -Winline -Wreturn-type -Wformat -Wunused-variable -Wuninitialized -Wunknown-pragmas -Wsign-compare -fmessage-length=0 -funroll-loops -ffast-math -fstrict-aliasing -fvisibility=hidden -fvisibility-inlines-hidden -fno-threadsafe-statics -D__DEFAULT__ -D__MADD__ -Dd__MATX_MUL_V2__ main.cpp -o main -lm

$ ./main 
timer frequency 33.000 MHz
enter a zero: 0
time: 7.485850 sec, repetitions: 10000000
-0.000000 -1.000000 0.000000 0.000000
1.000000 -0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 8.000000
0.000000 0.000000 0.000000 1.000000

^^^ 'default matrix mul routine', madd-intended version

a few words about the different versions:

* the 'testee' version is an auto-vectorization-friendly one; it relies on the presence of a madd op, and performs quite well on bigger-class cpus.

* the 'default' variants, one madd-friendly and the other not so, are in fact the same outermost loop around a generic strided-vector dot-product routine; it is that routine which is actually versioned against the madd op.

what observations can be made from the above tests:

the testee version is clearly not the choice for the 603e. the two default versions, OTOH, perform identically, and for a good reason too: their generated code is essentially identical, save for a tiny bit of different scheduling. so our effort to hint the compiler toward madd usage was not needed here - both versions heavily employ madd ops (asm listings upon request, as this post is becoming overloaded with code).

for the IPC part: a 4x4 matrix multiplication consists of 112 elementary arithmetic ops (namely, 64 multiplications and 48 additions), so 10^7 of those in the span of 7.485850 seconds @ 396MHz (the actual clock of the MPC5200B) is 0.37781719221 flops/clock - in other words, an execution rate of ~2 flops / 5 clocks.
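
(the same arithmetic, spelled out as a back-of-envelope check using the figures above:)

Code:
#include <stdio.h>

int main()
{
	// one 4x4 matrix mul: 64 multiplications + 48 additions = 112 flops
	const double flops  = 112.0 * 1e7;      // 10^7 repetitions
	const double clocks = 7.485850 * 396e6; // seconds * core clock of the MPC5200B

	printf("%f flops/clock\n", flops / clocks); // ~0.377817, i.e. ~2 flops / 5 clocks
	return 0;
}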

for reference, a G3 ppc is spec-rated at 3 flops/5 clocks, and a G4 ppc is rated at 4 flops/5 clocks. notice the progression ;p

coming next: same test on a pandora.

update

results from an 800MHz A8 with RunFast, scalar code by gcc (Ubuntu 4.4.1-4ubuntu9) 4.4.1:

Code:
$ gcc -O3 -x c++ -ffast-math -mcpu=cortex-a8 -mfloat-abi=softfp -mfpu=neon main.cpp -lstdc++ -lrt -D__DEFAULT__ -D__MADD__

$ a.out
timer frequency 1000.000 MHz
enter a zero: 0
time: 11.510258 sec, repetitions: 10000000
-0.000000 -1.000000 0.000000 0.000000
1.000000 -0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 8.000000
0.000000 0.000000 0.000000 1.000000

IPC: 0.121552 flops/clock, or approx. 8 clocks/flop. I believe that's pretty close to the theoretical limit of the A8's RunFast VFPv3. Also, softfp is of no concern to this test, as no function calls occur during the timed section, and generally no ARM registers get to hold any fp values at any point there.
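
(sanity check on the number: 112 x 10^7 flops over 11.510258 s x 800 x 10^6 clocks/s comes out to ~0.1216 flops/clock, in line with the figure above.)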

coming next: auto-vectorized form.
 
darkblu said:
also, i will try to put the emphasis on practical tasks, particularly ones where a compiler is involved, as in this day and age i value compilers a lot. for the record, i wrote my first few 3d software rasterizers in pure assembly, but that was really long ago and i was stupid.. likewise, compilers were not particularly smart back then.
Big assumption, why don't you back that one up? A lot of us are still getting 1.5x or better performance out of our hand optimized assembly code (ask notaz..) on ARM. That might not be the case on other architectures, which is all the more reason why it skews a comparison like this.

Especially when from the start you're using floats and are expecting the compiler to vectorize it. I don't know, maybe it will, but if it doesn't it's obvious that the PPC is going to have a huge advantage over the Cortex-A8. And even if the compiler does, I especially expect you to be able to do a better job. Maybe after you do the Pandora one I should do an ARM/NEON version and we can see..? This is such a simple synthetic task that you can probably calculate the amount of time it should take without even writing it, and contrary to what's perhaps the purpose of this thread I don't think the results will surprise you. If you write it correctly, of course.

Besides, wasn't the point to see if we were right about the cache characteristics playing a large role in comparison of the two chips? So why are you benchmarking code that multiplies the same 4x4 matrix over and over again? Try multiplying two 512x512 matrices instead, perhaps.
 
'Exophase' said:
Try multiplying two 512x512 matrices instead, perhaps.

This wouldn't prove much if your compiler is smart enough to do blocking :p

I'm part of the crowd who thinks that for many "low-level" tasks assembly is the way to go, especially when you have SIMD instructions or other strange instructions.

Anyway what you propose, darkblu, is interesting :)
 
'darkblu' said:
this thread is about architectural efficiency. efficiency measured, not efficiency proclaimed (this is what the thread title signifies).

[..]

coming next: same test on a pandora ; )
I for one would suggest using handcrafted assembler code on ARM for all time-critical and processor-intensive routines. The processor is very good, has lots of registers and, thanks to the barrel shifter etc., offers operations like a 32 x 32 bit multiply accumulated into a 64-bit value, or shifting an operand by several bits, within one instruction. In many cases, generic algorithms written in C/C++ can't be optimized as much by the compiler as architecture-specific handcrafted assembler code can be.
On RiscOS, for example, displaying of jpeg images was done by locally unpacking the jpg files on the fly, because it was fast enough and much less memory-consuming than fully unpacking them. This, e.g., allowed displaying even a 17MB jpeg (10000x10000) on a 64MB RiscPC with a 200MHz StrongARM (ah, those were the times...).

Just my two cents (Euro), and I haven't done many things on ARM (mainly 6502 (8bit), i386 architecture (horrors) and 68k from motorola (quite nice)).
 
Great thread dear friends :)

But it definitely seems to rely too much on the compiler implementation and optims..

Sorry, not adding anything useful, but wanted to cheer you on :)

jeff
 
hey, what do you know - a discussion! ; )

in chronological order:

Exophase said:
Big assumption, why don't you back that one up? A lot of us are still getting 1.5x or better performance out of our hand optimized assembly code (ask notaz..) on ARM. That might not be the case on other architectures, which is all the more reason why it skews a comparison like this.

oh, just trust me on that one - i was stupid : )
ok, maybe i should have explained better my choice of sticking with a proven cc. i will try to do that now.

first off, i never abandoned assembly - to this day i actively pursue a close understanding of underlying ISAs and data paths. but the situations where my hand-crafted asm code outperforms the compiler's basically fall into two categories:

* when the task at hand is really skewed against the particular architecture.
* when the compiler (read: backend) for that architecture is immature.

in most other scenarios the best i can do is compiler-level performance. and that is totally understandable, as:

* i am not as good at inlining/loop reduction and interleaving for optimal scheduling as a good modern compiler is.
* i am not as meticulous at trivial things like spotting all beneficial expression re-factorings as a compiler is.
* i am not as up-to-date on the latest architectural tweaks as an actively developed compiler is.

but that is purely the code-performance side. there is also the "programmer performance" aspect, which, as you know, i consider quite important. the fact that i can take existing routines, re-compile them on a new architecture and achieve close-to-optimal performance (when the situation is not one of the two exceptions mentioned above) at low or no development cost is darn nice.

Exophase said:
Especially when from the start you're using floats and are expecting the compiler to vectorize it. I don't know, maybe it will, but if it doesn't it's obvious that the PPC is going to have a huge advantage over the Cortex-A8. And even if the compiler does, I especially expect you to be able to do a better job. Maybe after you do the Pandora one I should do an ARM/NEON version and we can see..? This is such a simple synthetic task that you can probably calculate the amount of time it should take without even writing it, and contrary to what's perhaps the purpose of this thread I don't think the results will surprise you. If you write it correctly, of course.

i do admit i do not expect the A8 to fare very well in this particular routine (who'd have thought, eh?). its compiler backend does need hammering, more so in the auto-vectorization department. but again, the purpose of this thread is to see what goes and what does not on each particular architecture, and, per my ulterior motives, to see how good its present compilers are. that said, i don't mind putting my newly-acquired low-level arm skills to the test ; )

Exophase said:

Besides, wasn't the point to see if we were right about the cache characteristics playing a large role in comparison of the two chips? So why are you benchmarking code that multiplies the same 4x4 matrix over and over again? Try multiplying two 512x512 matrices instead, perhaps.
valid point about the original question. although the current test reads its matrices on each iteration (that was part of the reason for tricking the compiler into thinking they are different matrices), the cache hit ratio is effectively 100% even for the 603's tiny caches (three 4x4 float matrices total a mere 192 bytes). so this test is not meant to show cache-miss / memory-controller performance, but it is a good measure of the performance of the compiler + scalar fpu combo.

as for larger matrices - laurent already commented on this one. given the cache situation remains unchanged, don't expect notable performance differences - these will still be the same ops, scheduled similarly.

Laurent said:
I'm part of the crowd who thinks that for many "low-level" tasks assembly is the way to go, especially when you have SIMD instructions or other strange instructions.

well, good sir, simd is no stranger to compilers ;p

auto-vectorization is slowly getting there ™
tramp said:
I for one would suggest using handcrafted assembler code on ARM for all time-critical and processor-intensive routines. The processor is very good, has lots of registers and, thanks to the barrel shifter etc., offers operations like a 32 x 32 bit multiply accumulated into a 64-bit value, or shifting an operand by several bits, within one instruction. In many cases, generic algorithms written in C/C++ can't be optimized as much by the compiler as architecture-specific handcrafted assembler code can be.

as you'd guess, i beg to disagree with your last sentence: generic algorithms (with a small amount of customization) can be optimized beyond what is humanly viable, as long as the compiler is mature enough.
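
case in point, the very ops you mention - here is a hedged sketch of plain c++ that a mature arm gcc will typically compile down to exactly those instructions (codegen is compiler- and flag-dependent, so take the comments as expectations, not guarantees):

Code:
#include <stdio.h>

// 32 x 32 -> 64-bit multiply-accumulate: on arm this typically becomes a single smlal
long long mac64(long long acc, int x, int y)
{
	return acc + (long long) x * y;
}

// operand shift folded into the add via the barrel shifter:
// typically a single 'add r0, r0, r1, lsl #3'
unsigned add_shifted(unsigned a, unsigned b)
{
	return a + (b << 3);
}

int main()
{
	printf("%lld %u\n", mac64(1, 2, 3), add_shifted(1, 2));
	return 0;
}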

that said, reverting to asm whenever necessary is natural. and part of the purpose of this thread is to try to pinpoint those necessities.

tramp said:
On RiscOS, for example, displaying of jpeg images was done by locally unpacking the jpg files on the fly, because it was fast enough and much less memory-consuming than fully unpacking them. This, e.g., allowed displaying even a 17MB jpeg (10000x10000) on a 64MB RiscPC with a 200MHz StrongARM (ah, those were the times...).
allow me a wild guess: the reason the above was written in asm (i assume) was that, at the time, the compiler for that architecture was not on par. otherwise i see nothing in the above task that says "asm only". see, one of the issues with non-desktop-dominant architectures is that their compilers don't get as much attention. i hope to see that change soon with the advent of power-efficient computing.

skeezix said:
Great thread dear friends
But it definitely seems to rely too much on the compiler implementation and optims..
we rely on the compiler as much as the compiler is reliable ™

: )

thanks for the encouragement!
 
Laurent said:
This wouldn't prove much if your compiler is smart enough to do blocking :p
Actually it'd prove a lot, when you're comparing a platform with L2 cache vs one without, especially with a recursive algorithm. I'd prefer that the algorithm itself did blocking (I really doubt the compiler would do it but I wouldn't mind being wrong).

Maybe just multiplying a lot of different 4x4 matrices would show something more real world. That'd expose the main memory characteristics more as well, if that's what you were getting at.

Another thing that I don't expect the compiler to be able to do is prefetching, and for matrix multiplication that can help a lot.
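
(For concreteness, a minimal sketch of what explicit blocking plus software prefetch might look like - N and B are illustrative values, and it assumes gcc's __builtin_prefetch:)

Code:
#include <stddef.h>

enum { N = 512, B = 32 }; // B chosen so three BxB float tiles fit in a 16KB d-cache

static float a[N][N], b[N][N], c[N][N]; // statics, so c starts zeroed

void mmul_blocked()
{
	for (size_t i0 = 0; i0 < N; i0 += B)
	for (size_t k0 = 0; k0 < N; k0 += B)
	for (size_t j0 = 0; j0 < N; j0 += B)
		for (size_t i = i0; i < i0 + B; i++)
		for (size_t k = k0; k < k0 + B; k++)
		{
			// hint the next row of the b tile into cache ahead of use
			if (k + 1 < k0 + B)
				__builtin_prefetch(&b[k + 1][j0]);

			const float aik = a[i][k];
			for (size_t j = j0; j < j0 + B; j++)
				c[i][j] += aik * b[k][j];
		}
}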
 
re Exophase's cache points:

i just ran the original "default_madd" test with matrices up to 128x128. the result is that as long as no cache misses occur, the IPC of 2 flops/5 clocks stays steady (i.e. the routine is not r-file limited). things start to slow down at 64x64, where a single matrix's footprint (64 x 64 floats x 4 bytes) amounts to 16KB - the total amount of the 5200B's d-cache - and there the IPC drops to 0.1605 flops/clock (the original 2/5 being 0.4). at 128x128, where a single matrix is 64KB, the IPC hits bottom at 0.0325 flops/clock.

for the record, the simd-friendly version (aka "testee") goes as far as 6x6; after that the compiler hits the frame-growth limit (apparently the swizzled multiplicand, which is meant to sit in the r-file, being the offender) and refuses to inline the testee routine anymore, which makes the test pointless. but since that testee routine is meant for simd, its early failure as dimensionality grows is not a concern.
 
Wow this was old.

I'm seeing an underlying connecting theme in some of our arguments darkblu ;p
 