GP2X Benchmarking RAM Writes


rixed

Member
Joined
Dec 31, 2005
Messages
206
Age
48
Location
Paris (fr)
Website
happyleptic.org
I'm benchmarking memory access from the 940.

I'm writing to a RAM region that's uncached and unbuffered.

I was expecting that in this situation, writing 100M times to the same location or to random locations would make no difference.
Surprisingly, this is not the case. Here, in rough numbers, is what I got:

writing 100M words sequentially: 7 seconds
writing 100M words spaced 321 words apart: 13 seconds
writing 100M words spaced 513, 1111, or 0x10000 words apart, or "randomly": 21 seconds.


I can't think of an explanation for this. I suppose the bus or the RAM organisation is responsible, but having no documentation nor much experience with electronics, I'm stuck.

I need to understand what's happening in order to optimize my software GPU (which is horribly slow).
So, any ideas are welcome!
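For reference, a minimal sketch of the kind of loop being measured (hypothetical names and mask; the actual C code is posted further down the thread):

#include <stdint.h>

#define BIG_MASK 0x3FFFFF   /* hypothetical: wrap inside a large buffer of words */

/* write 100M words, each N words after the previous one
   (N = 0, 1, 321, 513, 1111, ... or "random") */
void stride_writes(volatile uint32_t *buffer, uint32_t N)
{
    uint32_t j = 0;
    for (uint32_t i = 0; i < 100000000u; i++) {
        j = j + N;
        buffer[j & BIG_MASK] = 250;   /* the written value does not matter */
    }
}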
 
rixed posted on Sep 21 2006 at 09:18 PM said:
I'm benchmarking memory access from the 940.

I'm writing to a RAM region that's uncached and unbuffered.

I was expecting that in this situation, writing 100M times to the same location or to random locations would make no difference.
Surprisingly, this is not the case. Here, in rough numbers, is what I got:

writing 100M words sequentially: 7 seconds
writing 100M words spaced 321 words apart: 13 seconds
writing 100M words spaced 513, 1111, or 0x10000 words apart, or "randomly": 21 seconds.


I can't think of an explanation for this. I suppose the bus or the RAM organisation is responsible, but having no documentation nor much experience with electronics, I'm stuck.

I need to understand what's happening in order to optimize my software GPU (which is horribly slow).
So, any ideas are welcome!

At a guess, it looks almost certainly cache related.

EDIT: Sorry, just read that you said it was uncached and unbuffered. It still sounds cachey to me, but what do I know...

Have a read of the replies by dwelch in this thread:

http://www.gp32x.de/board/index.php?showtopic=31869
 
Vimacs posted on Sep 22 2006 at 12:38 AM said:
maybe gcc is optimizing it?

To quote dwelch: NEVER assume, look at the assembly code :)
 
Thanks for talking about me, I feel special...


I did a quick test. Actually, I started to write a long reply, then figured I would just do the test first and come back.

The first test:


mov r0,#0x2800      @ loop counter: 0x2800 = 10240 passes through the loop
mov r2,#0x90000     @ store address
add r3,r2,#24       @ second address, 24 bytes away (used in test two)
0x70000:
str r1,[r2]
str r1,[r2]
...
str r1,[r2]
subs r0,r0,#1
moveq pc,lr         @ return when the counter reaches zero
mov pc,#0x70000     @ otherwise jump back to the top

There is a total of 10000 str instructions, so 0x2800 or 10240 times through the loop and 10000 writes per loop is 102,400,000 total str instructions.

Test two:

mov r0,#0x2800
mov r2,#0x90000
add r3,r2,#24
0x70000:
str r1,[r2]
str r1,[r3]
...
str r1,[r2]
str r1,[r3]
subs r0,r0,#1
moveq pc,lr
mov pc,#0x70000

Same number of writes, but they alternate between two addresses 24 bytes apart (I am writing words and the alignment fault bit is set by the bootloader, so I need a word-aligned offset; ideally I would have used a prime number here).

I called this code from C and read the timer (in C) before and after. Both tests took the same amount of time. It was 9-something seconds, but read on...

I used the ART103 bootloader and made a non-OS (no Linux) program that runs on the 920. The cache and write buffer are not enabled, so for every store to memory there is also a read from memory to fetch the next instruction.
So if you simply divide the execution time by 102,400,000 you will be off by a factor of at least two.
Second, if you have a small loop like:

mov r0,#0x2800
mov r2,#0x90000
add r3,r2,#24
0x70000:
str r1,[r2]
str r1,[r3]
subs r0,r0,#1
bne 0x70000

The subs fetch and execute, the bne fetch and execute, and the pipe flush will actually overshadow the writes to memory (and their instruction fetches). So this loop should run at, I guess, half the speed... Let me try right now:

Nope, I am wrong, it looks to be 10 times longer. Will have to think about that a bit.

Anyway, that is why I did a long loop with 10,000 strs, so that the effect of the subs and bne is somewhat insignificant. I didn't actually type str r1,[r2] 10,000 times; I took the disassembly and basically wrote the instructions to memory from the program before running the test:

writeto(0x70000-12,0xe3a00b0a); // mov r0,#0x2800
writeto(0x70000- 8,0xe3a02809); // mov r2,#0x90000
writeto(0x70000- 4,0xe2823018); // add r3,r2,#24
for(ra=0;ra<10000;ra+=4)
{
writeto(0x70000+ra,0xe5821000); // str r1,[r2]
ra+=4;
writeto(0x70000+ra,0xe5831000); // str r1,[r3]
}
writeto(0x70000+ra,0xe2500001); ra+=4; // subs r0,r0,#1
writeto(0x70000+ra,0x01a0f00e); ra+=4; // moveq pc,lr
writeto(0x70000+ra,0xe3a0f807); ra+=4; // mov pc,#0x70000

start=dtime();
jumpto(0x70000-12,0xABCD1234);
stop=dtime();

printf("%lu\n",stop-start);


Thus the easily relocatable end of the loop:

subs r0,r0,#1
moveq pc,lr
mov pc,#0x70000

Instead of the subs, bne pair.

Anyway, my guess from this test puts us at 94 ns for a read and write cycle combined, or 47 ns per cycle assuming they take the same amount of time, i.e. around 20 million memory cycles per second, putting memory at roughly 10 times slower than the CPU.
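As a back-of-the-envelope check of that arithmetic (a sketch only: the "9 something seconds" is taken as roughly 9.6 s here, which is an assumption, and each str costs one instruction fetch plus one data write in this uncached, unbuffered setup):

#include <stdio.h>

int main(void)
{
    double seconds = 9.6;                 /* assumed value for "9 something seconds" */
    double cycles  = 2.0 * 102400000.0;   /* each str = one instruction fetch + one data write */
    printf("%.1f ns per memory cycle\n", seconds * 1e9 / cycles);   /* about 47 ns */
    return 0;
}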

I don't know how far you want to take this. For me the right answer is to have the cache on for the area where the test loop runs and the cache off for the area you are testing writes to. The test memory (where you write) needs the write buffer on.

Use a short/small loop like the one above. Again, that loop is in cached memory, so it should ultimately run from cache and not touch system memory until the loop is finished. The writes go to the write buffer, which will smooth out the writes and allow the subs and bne and such to execute without affecting the timing of the test.

Let's put it this way: say the write buffer is full (in this case because of addresses, not data). The subs comes from cache: one cycle to read it and one cycle to execute (these are not additive; while one thing is executing, another is fetching in the same cycle). So one cycle for the subs, another for the bne, and give it maybe two more for the pipe to flush and get to the first str the next time around the loop. That is 4 or 5 clocks, and a single write from the write buffer has not completed yet, so that first str holds the CPU in wait states until the write buffer can finish one of its writes. As soon as that happens, one store executes; the next store then stalls the CPU another 10 clocks or so waiting on the write buffer, then it executes, filling the write buffer again; the subs and bne are shadowed by a write from the write buffer, and we start over. If the write buffer is not enabled, then the subs and bne are going to be counted in the timing.


My guess is that for the random case you perhaps calculated the random number inside the loop instead of pre-calculating it? I would need to see the three loops to make a better guess about what happened in your test.
For this kind of thing, don't just look at what asm was generated; write the code in asm.
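For what it's worth, a minimal sketch of the pre-calculation idea (hypothetical names; rand() just stands in for whatever pseudo-random sequence is being used), so that only the table read and the store remain inside the timed region:

#include <stdint.h>
#include <stdlib.h>

#define BUF_WORDS (1 << 22)   /* hypothetical buffer size in words */
#define N_WRITES  1000000

static uint32_t offsets[N_WRITES];

void random_write_test(volatile uint32_t *buffer)
{
    /* pre-calculate the "random" offsets outside the timed region */
    for (uint32_t i = 0; i < N_WRITES; i++)
        offsets[i] = (uint32_t)rand() & (BUF_WORDS - 1);

    /* timed loop: only the table read and the write remain */
    for (uint32_t i = 0; i < N_WRITES; i++)
        buffer[offsets[i]] = 250;
}

The table reads are sequential, so with the data cache on they are mostly cache hits (one line fill per eight words) and should not swamp the writes being measured.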

I assume you were running Linux on the 920? The 920 interrupts should be disabled completely (simply write to the register to disable all of them), and the 920 should be placed in a very tight infinite loop, literally a while(1); no polling the keyboard or anything, just an infinite loop that will keep it in the cache and off the system memory / I/O bus. If not, the 920 will jump on the bus and affect your timing.

I assume you are doing byte writes (an address difference of 321 has to be byte writes). What are you ultimately planning to do, copy memory? Words go four times faster. Above I had 20 million CYCLES per second but 80 million BYTES per second, because they were word cycles.

It's really late here, I am tired; I may do this experiment tomorrow. I am very interested in this topic. I am still hung up on a couple of sentences in the marketing datasheet for this chip. The two CPUs and the peripherals do share a bus, but the bus is supposedly designed so that four things can jump on it at once without affecting each other. I don't know what that means. In theory the 920 can read from a register while the 940 does something with some bank of memory, while the video is reading memory to write to the display. Get a good write/read test like this, THEN change it so the 920 is, say, polling a register instead of just a while(1); loop. We know the video is already stomping on the bus; perhaps read up on the video and halt it, that sort of thing...
 
OK, I will post my code this evening if I have some time.
First, when I said the memory was uncached, I was talking about the data cache only. ICache was on.
Second: yes, Linux was running idle on the 920, but with the large cache of the 920 I assumed the impact was small. The numbers I got, which look quite repeatable and close to multiples of the clock frequency, make me feel confident about this assertion.
Third: my code was written in C. Basically, it was something like:

int j = 0;
for (i = 0; i < 100000000; i++) {
    j = j + N;
    buffer[j & BIG_MASK] = 250; // the value is not meaningful
}

with buffer being a pointer to volatile uint32_t.

And I tried with N = 0, 1, 2, 8, 16, 257, 321, 513, etc. What I called 'random' was just doing j=j+i+123456789 instead of j=j+N; the generated code was very similar.
So, when I speak about writing 321 words apart, I really mean words, not bytes.

Also, I tried both with and without the write buffer and noticed no difference, which does not surprise me.
I also tried with cache enabled, and it *did*, of course, affect the results.

Enabling the DCache for this region does not interest me: I'm plotting things into a very large buffer, and I want to keep the 4 KB of DCache for data stored in structures that are used in the main loop. That's why I'm very interested in this timing. I started coding this thing before receiving the GP2X, and I expected the main loop to run much faster than it actually does, so I started investigating.
It is very surprising (and annoying) that write speed to non-cached memory depends on where you wrote earlier (apparently you tried writing to the same place or to 2 distinct places a short distance apart; did you try writing 'randomly', as defined above?).


About your loop running 10 times slower than the unrolled code: wow, another mystery! Please try that with the ICache on!

Anyway, your result of 47 ns for a write is very similar to what I got for close writes.

Thank you very much for the effort you have already put in!
 
I believe it's possible to lock-in parts of the cache, so if it turns out you need cache enabled to get acceptable performance, you could lock almost all of the cache to the data you expect will be useful, and leave a tiny bit just to keep the system happy.
 
BradN posted on Sep 22 2006 at 04:45 PM said:
I believe it's possible to lock-in parts of the cache, so if it turns out you need cache enabled to get acceptable performance, you could lock almost all of the cache to the data you expect will be useful, and leave a tiny bit just to keep the system happy.


Yes, you can lock parts of the cache, but as long as interrupts are disabled you wouldn't need to for a small test loop.
Even a good-sized test loop would be okay; you would settle into the cache within the first time or two through the loop and stay there.


David
 
What a trip, I see what you mean:

//------------------------------------------------------------------------
//------------------------------------------------------------------------

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "crt0.h"
#include "video.h"

unsigned char *ucptr;
unsigned long ra;


unsigned long start,stop;

int dommu ( void );

unsigned long i,j;

unsigned long *tdata;
#define BIGMASK 0x7


int test1 ( unsigned long N, unsigned long len )
{
    j=0;
    for(i=0;i<len;i++)
    {
        j=j+N;
        tdata[j&BIGMASK]=250;
    }
    return(0);
}

void dotest ( unsigned long X )
{
    start=dtime();
    test1(X,1000000);
    stop=dtime();
    printf("%lu\n",stop-start);
}

int main ( void )
{
    dommu();
    textinit();

    printf("Hello World!\n");
    printf("0x%08lX\n",CPUID);
    printf("0x%08lX\n",CACHEID);
    printf("0x%08lX\n",CONREG);

    tdata=(unsigned long *)0x03300000;

    dotest(0);
    dotest(1);
    dotest(0);
    dotest(3);
    dotest(0);
    dotest(2);
    dotest(0);

    return(0);
}
//------------------------------------------------------------------------
//------------------------------------------------------------------------



*tdata points to non-cached memory; the data cache and icache are on
(need the dcache for the constants).


481882 N = 0
592598 N = 1
481889 N = 0
592603 N = 3
481894 N = 0
592601 N = 2
481889 N = 0

1/((481882/7372800)/1000000) = 15.3 million test1 loops per second
1/((592601/7372800)/1000000) = 12.4 million test1 loops per second
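(As a small helper, assuming the 7.3728 MHz timer implied by the 7372800 in the arithmetic above, the raw tick counts convert to per-loop times like this:)

#include <stdio.h>

/* ticks from the assumed 7.3728 MHz timer, over a known number of loops */
static double ns_per_loop(unsigned long ticks, unsigned long loops)
{
    return ((double)ticks / 7372800.0) * 1e9 / (double)loops;
}

int main(void)
{
    printf("%.1f ns/loop\n", ns_per_loop(481882, 1000000));  /* N = 0  -> about 65.3 ns */
    printf("%.1f ns/loop\n", ns_per_loop(592601, 1000000));  /* N != 0 -> about 80 ns */
    return 0;
}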

With the write buffer off, it is slower, as expected:

827767
1129685
827778
1129687
827757
1129699
827783

1/((827767/7372800)/1000000) = 8.9 million test1 loops per second
1/((1129687/7372800)/1000000) = 6.5 million test1 loops per second

Loop time
65.3ns icache, dcache, write buffer, N=0
80 ns icache, dcache, write buffer, N != 0
112.3ns icache,dcache N = 0 (no write buffer)
153.2ns icache,dcache N != 0 (no write buffer)

.L4:
ldr r3, [lr, #0]
add r3, r0, r3
and r2, r3, #7
str r3, [lr, #0]
str r5, [r1, r2, asl #2]
ldr r3, [ip, #0]
add r3, r3, #1
cmp r4, r3
str r3, [ip, #0]
bhi .L4

65.3 ns is 13 clock cycles. Ten instructions take 10 clocks, and the two ldrs add two more clocks, for a total of 12.
The strs all have to go to RAM and are all write-buffered. So if system memory were fast, those would take 3 more clocks, total 15. Hmm, how can it average less? That's not possible.
The str r3,[lr,#0] is where it saves j back to memory (bad compiler... j is not volatile; wait until the loop is over and save it to memory then).
The str r3,[ip,#0] is where i is written back to memory.

i changes every time through the loop; if N = 0, j does not change. What if the write-through cache does not bother to write to memory when the value didn't change? Then you would see the loop run slower when N != 0, which is exactly what we see.

Sure enough, the 920 documentation talks about it:

"Each DCache line has two dirty bits, one for the first four words of the line, the other for the last four words, and a single virtual TAG address and valid bit for the entire 8-word line."

"if the line is the target of a DCache clean operation, the dirty bits are used to decide whether the whole, half, or none of the line is written back to memory. "

So that explains why my version of your test varies in execution time depending on N. I just ran the test again with the middle test using 0x10 for N, which means j changes but the address written by the str r5,[...] does not change, and again this all makes sense now... for my test... A half cache line is being written to memory whenever j changes. I would have expected 20 ns per cycle longer, not 15 ns, but maybe my average is not as accurate as I would like.

You should definitely see a slowdown with the write buffer off; that is the whole point of the write buffer: to return control to the core sooner and do the writes in parallel (thus saving time by doing two things at once instead of one thing at a time in series). Likewise, you said you had the dcache off; that just slows things down. You need the dcache for the two ldr r3,[...] instructions, and turning it off makes those much slower:

3743818
4067955
3744208
4067886
3744025
4067929
3743842

507 ns per loop with icache and write buffer, N = 0; that's 442 ns longer per loop, like 88 clocks. That's painful.

Now, I am running on the 920, not the 940, so I have an MMU in the way; who knows what's going on there. That last one doesn't quite make sense; I ran it twice, making sure the dcache enable bit in the CP15 control register was the only difference.
 
I'm not sure if this is related, but I'm pretty sure the ARM processors have a "burst mode" where they write/read 4 successive words. This means you don't have to set RAS/CAS(?) and it speeds up memory access slightly.
 
This is interesting, but it is not the same test:

- First, please, for the sake of simplicity, ask your compiler to keep j in a register. My gcc with some common optimisation flags gives this inner loop:

5ec: e2822e45 add r2, r2, #1104 ; 0x450
5f0: e2822007 add r2, r2, #7 ; 0x7
5f4: e3c234ff bic r3, r2, #-16777216 ; 0xff000000
5f8: e3c33503 bic r3, r3, #12582912 ; 0xc00000
5fc: e0813103 add r3, r1, r3, lsl #2
600: e2833601 add r3, r3, #1048576 ; 0x100000
604: e152000c cmp r2, ip
608: e5830000 str r0, [r3]
60c: 1a000179 bne 5ec <mymain+0x90>

For the C code:

for (unsigned i=0; i<100000000; i++) {
    j = (j + 1111);
    shared->buffers[j&0x3FFFFF] = 250;
}

- Second: use a bigger buffer! Obviously, with a BIGMASK of 0x7 you cannot test writes that are located a thousand words apart.

Then, using the same program as you have there, try

dotest(1);
dotest(10);
dotest(100);
dotest(1000);
dotest(10000);

Then you will notice something: the MMSP will take longer and longer to do the writes (whether the write buffer is on or off, which is *expected*, because you never stop writing). What I can't explain is why it takes longer to write 10000 words apart than 1 word apart in this configuration (and I mean very much longer; you will notice it without needing a hardware clock).


scanti posted on Sep 23 2006 at 11:52 AM said:
I'm not sure if this is related, but I'm pretty sure the ARM processors have a "burst mode" where they write/read 4 successive words. This means you don't have to set RAS/CAS(?) and it speeds up memory access slightly.

Yes, but it's not applicable here: burst mode is for sequential accesses (cache line flushes, stm/ldm, but not individual str/ldr, whether or not the addresses are sequential - or so I read somewhere).
 
I had a big mask at first and noticed that, at least the way I was testing it, it didn't matter if it was one word apart or 10000; the timing was the same. You are running the 940 with Linux on the 920, and you found a free spot of memory 0x1000000 bytes long? That's a quarter of memory. What was your base address? I was slamming into video hardware and control registers, so I tightened up the mask, and over a series of tests found that it didn't matter how big the mask was; any variation in the address slowed it down by the same amount.

I will certainly understand if you are tired of the back and forth between us on this topic. If not, and you are still interested in help, could you post something more complete? Is this loop in its own function or buried in a large program? Can you isolate this test into a small program, and/or at least a small function like the test1() function I sent out? That way it's apples to apples instead of apples to oranges.
 
dwelch posted on Sep 23 2006 at 06:26 PM said:
I had a big mask at first and noticed that, at least the way I was testing it, it didn't matter if it was one word apart or 10000; the timing was the same. You are running the 940 with Linux on the 920, and you found a free spot of memory 0x1000000 bytes long? That's a quarter of memory. What was your base address? I was slamming into video hardware and control registers, so I tightened up the mask, and over a series of tests found that it didn't matter how big the mask was; any variation in the address slowed it down by the same amount.

I will certainly understand if you are tired of the back and forth between us on this topic. If not, and you are still interested in help, could you post something more complete? Is this loop in its own function or buried in a large program? Can you isolate this test into a small program, and/or at least a small function like the test1() function I sent out? That way it's apples to apples instead of apples to oranges.

doh, your code is right there, let me look at it, sorry.
 
rixed posted on Sep 23 2006 at 10:41 AM said:
This is interesting, but it is not the same test:

5ec: e2822e45 add r2, r2, #1104 ; 0x450
5f0: e2822007 add r2, r2, #7 ; 0x7
5f4: e3c234ff bic r3, r2, #-16777216 ; 0xff000000
5f8: e3c33503 bic r3, r3, #12582912 ; 0xc00000
5fc: e0813103 add r3, r1, r3, lsl #2
600: e2833601 add r3, r3, #1048576 ; 0x100000
604: e152000c cmp r2, ip
608: e5830000 str r0, [r3]
60c: 1a000179 bne 5ec <mymain+0x90>

For the C code :

for (unsigned i=0; i<100000000; i++) {
    j = (j + 1111);
    shared->buffers[j&0x3FFFFF] = 250;
}

Pretty slick! The optimization I like best is that 1111*100000000 mod 2^32 is a unique number, so the compiler drops i entirely and just compares j against that end value.
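(A quick arithmetic check of the end constants passed to test1111 and test1112 below, nothing more:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* j after 100,000,000 iterations, reduced modulo 2^32 */
    uint32_t end_1111 = (uint32_t)(1111ull * 100000000ull);  /* 0xDE137700 */
    uint32_t end_7    = (uint32_t)(7ull * 100000000ull);     /* 0x29B92700 */
    printf("0x%08lX 0x%08lX\n", (unsigned long)end_1111, (unsigned long)end_7);
    return 0;
}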

.global test1111
test1111:
mov r12,r2          @ r2 on entry holds the final value of j, kept for the compare
mov r2,#0           @ j = 0
dotop1:
add r2, r2, #0x450  @ j = j + 1111 (0x450 + 7)
add r2, r2, #0x7
bic r3, r2, #0xff000000
bic r3, r3, #0xc00000
add r3, r1, r3, lsl #2
add r3, r3, #0x100000
cmp r2, r12
str r0, [r3]
bne dotop1

mov pc,lr




.global test1112
test1112:
mov r12,r2          @ final value of j, kept for the compare
mov r2,#0           @ j = 0
dotop2:
add r2, r2, #0x0    @ j = j + 7 (0x0 + 7)
add r2, r2, #0x7
bic r3, r2, #0xff000000
bic r3, r3, #0xc00000
add r3, r1, r3, lsl #2
add r3, r3, #0x100000
cmp r2, r12
str r0, [r3]
bne dotop2

mov pc,lr


void dotest1 ( void )
{
    start=dtime();
    ra=test1111(250,0x1000000,0xDE137700);
    stop=dtime();
    printf("%lu\n",stop-start);
}


void dotest2 ( void )
{
    start=dtime();
    ra=test1112(250,0x1000000,0x29B92700);
    stop=dtime();
    printf("%lu\n",stop-start);
}


192407707 j=j+1111
40844163 j=j+7

192408106 j=j+1111
40844155 j=j+7


With the write buffer turned off:

192406141
74096346

192408619
74096332

So you can see the write buffer working properly for j=j+7 but not for j=j+1111, which implies that every write is affected.

There are 9 instructions in the loop, so 9 clock cycles plus the pipe-flush cost, plus the write to the write buffer, and if the write takes longer you have to wait for it too. For the j=j+7 case it averages out to 55 ns, or 11 clocks; the write buffer appears to be doing its job. Turn it off and it's 100 ns, or 20 clock cycles: now you have to wait 9 more clock cycles for the write to finish, which makes it look like a write to memory is about 10 clock cycles total, consistent with my first tests.

Now the j=j+1111 case: 261 ns, or 52 clocks, and this is independent of the write buffer, which is an important fact. It's as if it is synchronizing to something.

I am sure you already tried this, but commenting out just the str gives the same numbers for both 1111 and 7, implying that the rest of the instructions are not the source of the problem.

I wonder if it is an SDRAM-ism. Has anyone opened their GP2X and noted the SDRAM part numbers? From a very, very high level, SDRAM is a way to have memory that interfaces like SRAM but uses cheaper DRAM (which requires refresh cycles so it doesn't lose its contents). I don't know enough about it, though (although I have friends who do). This seems excessive, though; it's got to be something simpler.
 
Gents, if the area is uncached and unbuffered but the MMU is being used (you're on the 920, so it is), then it's just the TLB in the MMU causing the slowdown.

For those without low-level CPU knowledge: it maps between physical and virtual memory addresses and acts as a sort of memory-mapping cache. If you read/write a block of memory whose translation is not in the TLB, the MMU hardware must pick up a new page translation from main memory; this takes a couple of read cycles.

In the worst case, if each read/write lands in a different 4 KB page (the ARM's usual page size), then the TLB is walking the page tables for every access and you get no benefit from its cache-like nature.
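To make that concrete, a rough sketch that estimates how often a given word stride crosses a 4 KB page; this only matters where the MMU is actually in the path (the 920), and the helper below is mine, not part of anyone's test code:

#include <stdio.h>

/* for a stride of N words (4 bytes each) and 4 KB pages, roughly how many
   consecutive accesses land in the same page before crossing into a new one */
static unsigned accesses_per_page(unsigned stride_words)
{
    unsigned stride_bytes = stride_words * 4;
    if (stride_bytes == 0)
        return 0xFFFFFFFFu;       /* stride 0 never leaves the page */
    if (stride_bytes >= 4096)
        return 1;                 /* every access touches a new page */
    return 4096 / stride_bytes;   /* approximate accesses before crossing */
}

int main(void)
{
    printf("N=321:  ~%u accesses per page\n", accesses_per_page(321));   /* ~3 */
    printf("N=1111: ~%u accesses per page\n", accesses_per_page(1111));  /* 1: 4444 bytes > 4 KB */
    return 0;
}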

Yes, but it's not applicable here: burst mode is for sequential accesses (cache line flushes, stm/ldm, but not individual str/ldr, whether or not the addresses are sequential - or so I read somewhere).
A quick Google search says you're correct: the ARM9 does not seem to coalesce writes in the write buffer (unlike the XScales, which do). You can of course peruse the ARM9 technical reference manuals for a definitive answer.

HTH.
 
I un-optimized the loop a little to make it easier to tweak.

Comparing different values of N:


j=j+0x457 vs 0x007:
1571169
482009
1571105
482019

0x457 vs 0x357
1571130
686062
1571140
686010

0x457 vs 0x257
1571124
1609707
1571189
1609794

0x457 vs 0x157
1571163
1297277
1571098
1297527

0x457 vs 0x057
1571335
482904
1571254
482867
 
refractor posted on Sep 24 2006 at 02:08 AM said:
Gents, if the area is uncached and unbuffered but the MMU is being used (you're on the 920, so it is), then it's just the TLB in the MMU causing the slowdown.

For those without low-level CPU knowledge: it maps between physical and virtual memory addresses and acts as a sort of memory-mapping cache. If you read/write a block of memory whose translation is not in the TLB, the MMU hardware must pick up a new page translation from main memory; this takes a couple of read cycles.

In the worst case, if each read/write lands in a different 4 KB page (the ARM's usual page size), then the TLB is walking the page tables for every access and you get no benefit from its cache-like nature.

Yes, but it's not applicable here: burst mode is for sequential accesses (cache line flushes, stm/ldm, but not individual str/ldr, whether or not the addresses are sequential - or so I read somewhere).
A quick Google search says you're correct: the ARM9 does not seem to coalesce writes in the write buffer (unlike the XScales, which do). You can of course peruse the ARM9 technical reference manuals for a definitive answer.

HTH.

Well, my tests fall into that category; rixed is on the 940, so that doesn't apply to him.

Hmmm, interesting. You cannot flush the write buffer before doing this read, as the read is required for the write: so the read occurs, I assume that is 10 clocks, then the write can occur, another 10 clocks, and all of this is in parallel with the loop, yes? The longest pole in the tent would be these 20 clocks to do a write, and the write buffer would make a difference to performance. With it off you have about 10 clocks of execution time, plus 10 for the TLB read, plus 10 for the write: 30 clocks. With the write buffer on: 10 clocks of TLB read, 10 clocks of RAM write, and the execution is a freebie: 20 clocks. I am seeing 52 clocks with the write buffer on or off. And when smaller values are used (0x357, 0x257, 0x157) I see various loop times, not necessarily decreasing or collapsing toward the ideal 11 clocks per loop.

I had not thought about this; thanks for the reminder. I had this MMU thing down as a difference between rixed and me, but forgot where to apply it. Just as a refresher: if you read/write outside the current page table entry, does that mean it reads the next one and the next one (or the previous one) sequentially until it finds an entry? If these were 4K pages, 1111 words is more than 4 KB, so you would be jumping to the next page or two away depending on where you were in the page the previous time, so one or two word reads... Or is everything fixed and rigid, with part of the virtual address used as an index into the MMU table?

Now, I was going to write one more comment before going into meltdown, not understanding how I got a fixed average time for these loops. What is the reason for what we see with a loop like this:


mov r0,#0x10000
mov r1,#0x20000
mov r2,#250
top:
str r2,[r1]
subs r0,r0,#1
bne top

On an embedded system like this, with interrupts turned off, etc., this should execute in the same number of timer ticks, +/- 1.

Is the answer to this question at all related to the original timing question for this thread?
 
OK, I had some time this morning, so I built a simple program to show what I'm talking about.

First, download this:

ftp://happyleptic.org/temp/bench940.tgz

Then untar and build (tweaking the makefile may be necessary).
This is for Unix, but since everybody here sounds like smart people, you are all running Unix anyway, aren't you? :)

So, when the build is done, you have two files: load940, which is an ELF, and bench940, which is "raw".
Send them both to the GP2X, and telnet in.
Then run:

./load940 bench940 0 wait

This will upload bench940 to the ARM940, run it, and show how much time it took to complete.

Then edit bench940.S and define N to be, say, 1 instead of 1024.
Build and run again: you will notice it runs many times faster. The question is: why is it slower to write words 1024 words apart?

bench940.S sets up the protection unit with the ICache on but no DCache and no write buffer. It also sets the ARM940 to run at 200 MHz.

Another interesting thing: it loops only 1M times, because if you let it run 100M times like I did before, Linux hangs for N > 163 (at least on my unit). This surprises me a lot; I suppose it is a question of RAM access arbitration between the two CPUs, but it is still very mysterious to me.

So now we have apples and apples :)

Ideas, people?
 
rixed posted on Sep 24 2006 at 12:18 PM said:
OK, I had some time this morning, so I built a simple program to show what I'm talking about.

First, download this:

ftp://happyleptic.org/temp/bench940.tgz

Then untar and build (tweaking the makefile may be necessary).
This is for Unix, but since everybody here sounds like smart people, you are all running Unix anyway, aren't you? :)

So, when the build is done, you have two files: load940, which is an ELF, and bench940, which is "raw".
Send them both to the GP2X, and telnet in.
Then run:

./load940 bench940 0 wait

This will upload bench940 to the ARM940, run it, and show how much time it took to complete.

Then edit bench940.S and define N to be, say, 1 instead of 1024.
Build and run again: you will notice it runs many times faster. The question is: why is it slower to write words 1024 words apart?

bench940.S sets up the protection unit with the ICache on but no DCache and no write buffer. It also sets the ARM940 to run at 200 MHz.

Another interesting thing: it loops only 1M times, because if you let it run 100M times like I did before, Linux hangs for N > 163 (at least on my unit). This surprises me a lot; I suppose it is a question of RAM access arbitration between the two CPUs, but it is still very mysterious to me.

So now we have apples and apples :)

Ideas, people?

I don't have telnet, etc. running. I used a simpler loader that dumped into an infinite loop when finished (in cache). I added the timer to your code to time just the test, and added UART code to dump results out the UART (octal was fast and easy). I moved the test memory to avoid video memory; I don't want hardware conflicts to cloud the numbers.

I turned the dcache and the write buffer on and off. Noticeable difference. The problem appears to be linear; the difference starts between N = 2 and N = 4, BUT there is a relationship between the problem and the mask. So that means it isn't necessarily writes 1024 words apart, but some boundary crossing. And the data continues to support that, as increasing N to 8, 16, 32, etc. up to 1024 gradually increases the execution time; the larger the N, the sooner and more often that boundary is crossed.

I gotta go, can't work on this more tonight; hopefully more on this tomorrow...

0:
str r3, [r1, r2]        @ write a word
add r2, r2, r4, LSL #2  @ increment offset by N words
add r3, r3, #0x1        @ to see when we write twice into the frame buffer
and r2, r2, r5          @ make offset loop after 16Mb
subs r0, r0, #0x1       @ dec counter
bne 0b


N = 1
2206374 no dcache no wb
2206457 dcache
1107021 dcache+wb
1106707 wb, no dcache


N = 1024

7220227
7220102 dcache
7220442 dcache+wb
7220674 wb, no dcache

N = 512

7255236 no wb
7255244 wb

N = 256 3526323 wb

N = 128 1662510

N = 64 1121736

N = 32 1115615

N = 16 1111520

N = 8 1112515

N = 4 1110375

N = 2 1107564

N = 1 1106542

N = 0 1107022


N = 0 write through not write back

2206431

N = 4 mask changed to 0xFF

1106613


Basically, I was able to see the problem on the 920 last night as well, but I had not tried narrowing it down like this to see that there is a curve related to N.
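One way to chase the "some boundary crossing" idea is to count, for a given N, how often successive writes cross a candidate boundary; a rough sketch (the boundary size B and the 16 MB wrap are assumptions to experiment with, mirroring the loop above):

#include <stdio.h>

/* count how many of the strided writes cross a b_bytes boundary,
   with the offset wrapping after 16 MB as in the asm loop above */
static unsigned long crossings(unsigned long n_words, unsigned long b_bytes,
                               unsigned long writes)
{
    unsigned long offset = 0, prev_block = 0, count = 0;
    for (unsigned long i = 0; i < writes; i++) {
        unsigned long block = offset / b_bytes;
        if (i != 0 && block != prev_block)
            count++;
        prev_block = block;
        offset = (offset + n_words * 4) & 0x00FFFFFF;   /* wrap after 16 MB */
    }
    return count;
}

int main(void)
{
    for (unsigned long n = 1; n <= 1024; n <<= 1)   /* hypothetical boundary B = 1024 bytes */
        printf("N=%4lu: %lu crossings per 1M writes\n", n, crossings(n, 1024, 1000000));
    return 0;
}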
 