Caching And Array Value Copying Slowness


The 920T write buffer holds 16 words of data and 4 addresses. It can be drained under software control or by reading memory (not cache).

When you do a simple store

strb r2,[r0] <---
add r0,r3,r4
sub r2,r1,r3
...

And the write buffer is enabled, the code is in the cache (so that prefetching does not cause memory reads), and the write buffer has room, then:

If the address in r0 is in the data cache you will get a write to the cache. (I think you have to have the cache enabled to get the write buffer; I would check that.)

One of the four address slots in the write buffer gets the address in r0, and one of the 16 words (or at least one of the bytes of one of the 16 words) gets the value in r2.

Execution continues to the add instruction. While execution continues in the core, the write buffer takes its sweet time writing that r2 byte to the address that had been in r0. Destroying r0 in the following instructions doesn't matter, because the write buffer has its own copy.

In general, what the write buffer does is allow execution to continue, in the hope that on average there will be code executing from cache that keeps the memory bus idle, so that the whole system is doing two things at once instead of only one thing at a time. You can easily see this by enabling and disabling the write buffer and watching your overall performance. It does help.

Now the write buffer is full when either the four addresses (one per store instruction) or the 16 data words are used up before they can be drained; with anywhere from one to four words consumed per store in the examples below, the four addresses are likely to fill up first. It is like a FIFO, first in first out: as soon as one thing dumps out the bottom you can add more things in the top.

If the write buffer is full when your code needs to store, then execution has to wait until the memory interface can move some of that data out of the write buffer.
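To make that bookkeeping concrete, here is a toy C model of the accounting just described. Purely illustrative: the 4-address/16-word capacities come from the description above, but the drain is simplified to one entry at a time, and the one-word-per-entry assumption only holds for single-word stores.

#include <stdio.h>

#define MAX_ADDR  4   /* address slots in the write buffer */
#define MAX_WORDS 16  /* data words in the write buffer */

int main(void)
{
    int addrs = 0, words = 0, stalls = 0;
    /* five single-word stores in a row, starting with an empty buffer */
    for (int i = 0; i < 5; i++) {
        if (addrs + 1 > MAX_ADDR || words + 1 > MAX_WORDS) {
            stalls++;   /* core waits while the oldest entry drains */
            addrs--;
            words--;
        }
        addrs++;        /* each store takes one address slot... */
        words++;        /* ...and (here) one data word */
    }
    printf("stalls: %d\n", stalls);  /* prints 1: the fifth store waits */
    return 0;
}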

A byte write

strb r2,[r0] will consume one address and one data element

The same is true for a halfword or word write

strh or str r2,[r0]

So after four of these in a row (assuming you start with an empty write buffer) the write buffer is full and a fifth one has to wait. We can already take advantage of this, though:

Say, for example, an unrolled loop:
a=b; i++;
a=b; i++;
a=b; i++;
a=b; i++;
...

If a and b are unsigned char, you may get:

ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data

The address part of the write buffer is full; we have to wait on the SRAM. You only stored 4 bytes before you had to wait.

If these were words instead of bytes, though:

ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data

and you have to wait, but now you have 16 bytes in the write buffer, not 4, and you did it in a fraction of the number of instructions. If all you wanted to do was copy four bytes, not 16, then a single word store would have done it; if the code following the copy did not need to access RAM for whatever reason, you save several instructions and the write buffer works in parallel to push that word (four bytes) out.

Now let's take that to its extreme:


LDMIA r6,{r2,r3,r4,r5}
STMIA r1,{r2,r3,r4,r5}

The one STM uses only one address and four word slots; you can string four of these in a row before
having to wait on the write buffer. That is perfectly full: four addresses and 64 bytes.

So on this architecture it is to your advantage to align your data on word boundaries (there are a number of ways to do this; be careful not to fall into other compiler traps, though). Either the compiler will already take advantage of it, or you can force larger writes with asm routines.
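For instance, a minimal C sketch of the word-wise idea (my assumptions: source, destination, and length are all word-aligned and a multiple of 16 bytes; a real routine needs head/tail handling):

#include <stdint.h>
#include <stddef.h>

/* Copy four words per loop iteration. Each word store costs one
   write-buffer address for four bytes of data, and a compiler may
   even merge the four loads/stores into LDM/STM. */
void copy_words16(uint32_t *dst, const uint32_t *src, size_t n_bytes)
{
    for (size_t i = 0; i < n_bytes / 4; i += 4) {
        dst[i + 0] = src[i + 0];
        dst[i + 1] = src[i + 1];
        dst[i + 2] = src[i + 2];
        dst[i + 3] = src[i + 3];
    }
}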

You cannot really control when cache lines are going to get filled (everything stops while a prefetch causes 8 reads from RAM), and a cache line fill is itself held off until the write buffer drains. But on average you will get this parallel-operation gain.

There is one very slight risk with a write buffer, and it gets worse the larger the data elements you use. If you finish a memcpy that uses four-word STMIAs and then immediately write a register in a peripheral that says "my data is ready for you to use", the peripheral will immediately start to read that memory. The peripheral is not behind your cache and write buffer, so if for some reason it starts at the tail end of your data instead of the beginning, it could get a read or two in before the write buffer gets there with the fresh data, and the peripheral reads stale data (whatever was there before). So you have to have an awareness of how you cache- and write-buffer-enable peripheral-mapped memory, or know exactly how the peripheral works. You don't really need to go to the extreme of using the software-controlled drain function; a simple read from non-cached memory will do.

The problem code:

memcpy(vidobuffer,mybuffer,320*240*2);
page_flip();

The fix:

volatile unsigned long *nocache=(volatile unsigned long *)0x12345678; //some non-cached memory address (volatile so the read is not optimized away)
...

memcpy(vidobuffer,mybuffer,320*240*2);
x=nocache[0];
page_flip();

Now you can data-cache and write-buffer enable the peripheral's memory space, but also not worry about the peripheral getting stale data.
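Packaged up, the whole sequence might look like this (a sketch only; vidobuffer, mybuffer, and page_flip are the names from the fragment above, and the non-cached address is still just a placeholder you must map yourself):

#include <string.h>

/* placeholder: must be an address your MMU maps non-cached, non-buffered */
static volatile unsigned long *const nocache =
    (volatile unsigned long *)0x12345678;

void present_frame(void *vidobuffer, const void *mybuffer)
{
    memcpy(vidobuffer, mybuffer, 320 * 240 * 2);
    (void)nocache[0];  /* read stalls until the write buffer has drained */
    /* page_flip(); now the peripheral cannot see stale data */
}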

Anyway, that's how write buffers work, AFAIK, in a nutshell.

How does it help or hurt you? It doesn't hurt unless you have this peripheral or dual-core sharing issue and hit the time delay before the write buffer finishes. I would assume that the peripheral or the other core is going to start from the same end of memory you started from and work in the same direction. So if you started at 0x1000000 and ended at 0x1000100, then when you tell the peripheral or other core that the data is ready, 0x10000F8 through 0x1000100 may not actually be there yet. But if the peripheral or other core starts at 0x1000000 and works toward 0x1000100, it cannot read any faster than your write buffer is writing: write buffer writes 0x10000F8, peripheral reads 0x1000000; write buffer 0x10000FC, peripheral 0x1000004; write buffer is done, peripheral reads 0x1000008. No conflict. At best they will ping-pong: you write one, I read one, you write one, I read one.
Unless, of course, a priority-based bus arbiter is in the way and gives the peripheral access first. So just try to use a larger data-to-address ratio, and if you get goobers in your data, isolate where you need to flush the write buffer and/or turn it off.

If I am wasting time and bandwidth with this crap then just tell me; I will crawl back into the dark hole from which I came...
 
That was fun: the MMU has an alignment fault enable.

Post-bootloader (I don't run Linux, I run HH with no OS, and this was the ART103 bootloader), the MMU control register was 0xC000007A, which means the alignment fault was armed. And sure enough, if you do an unaligned access, it reboots (not sure what the fault handler wanted it to do, but it... rebooted).
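For reference, a minimal sketch of clearing that bit (my assumption of a bare-metal GCC build; CP15 control register c1, bit 1, is the A / alignment-fault enable on the 920T):

/* read-modify-write the CP15 control register to clear the A bit */
static void disable_alignment_fault(void)
{
    unsigned long ctrl;
    __asm__ volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r" (ctrl));
    ctrl &= ~2UL;  /* 0xC000007A becomes 0xC0000078 */
    __asm__ volatile ("mcr p15, 0, %0, c1, c0, 0" : : "r" (ctrl));
}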

I cleared that control bit and now it works as I had expected:

Before:

0x50000 0x11
0x50001 0x22
0x50002 0x33
0x50003 0x44
0x50004 0x55
0x50005 0x66
0x50006 0x77
0x50007 0x88
0x50008 0x99
0x50009 0xAA
0x5000A 0xBB
0x5000B 0xCC
0x5000C 0xDD
0x5000D 0xE0

After this:

ldr r0,=0x50000
ldr r1,=0x50005

ldr r2,[r0]
str r2,[r1]

0x50000 0x11
0x50001 0x22
0x50002 0x33
0x50003 0x44
0x50004 0x11
0x50005 0x22
0x50006 0x33
0x50007 0x44
0x50008 0x99
0x50009 0xAA
0x5000A 0xBB
0x5000B 0xCC
0x5000C 0xDD
0x5000D 0xE0

The load is aligned; I was perhaps wrong in the earlier post. The load rotates and the store doesn't (though why did the store give me an alignment fault, if stores ignore the lower bits?). And that is what we see here. The load is aligned, so you get

0x44332211. Write that to 0x50005 and it actually writes to 0x50004, because the store trims address bits 1 down to 0 off.


But if the load is not aligned:

ldr r0,=0x50001
ldr r1,=0x50008

ldr r2,[r0]
str r2,[r1]

0x50000 0x11
0x50001 0x22
0x50002 0x33
0x50003 0x44
0x50004 0x55
0x50005 0x66
0x50006 0x77
0x50007 0x88
0x50008 0x22
0x50009 0x33
0x5000A 0x44
0x5000B 0x11
0x5000C 0xDD
0x5000D 0xE0

r2 will get 0x11443322

On a PC the register would have read 0x55443322, and that takes two memory cycles: you read from address 0x50000, take three of the bytes (lanes), the 0x22, 0x33, 0x44, and move them over a lane so that they land in the lower three bytes of the register. Then you do a read from 0x50004, keep only the lowest lane, and route it over to the most significant lane to complete the register.

On the ARM (pre-ARMv6, apparently), it will only do one read, from 0x50000, that's it, and it will rotate the lanes around based on the lower address bits: 0x50001 rotates one lane, 0x50002 two lanes, 0x50003 three lanes, and 0x50000 no rotation, a normal aligned read. So yes, that means if you want a neat trick for reading big-endian halfwords, just read at address+1 and the memory interface will swap the bytes for you. Want to swap halfwords in a word? Read at aligned_address+2.
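Both behaviors are easy to reproduce by hand in C. This little demo uses the two aligned words from the dump above (off is 1..3 here; off = 0 would just be an aligned read):

#include <stdio.h>
#include <stdint.h>

static uint32_t ror32(uint32_t x, unsigned n)  /* rotate right, n = 1..31 */
{
    return (x >> n) | (x << (32 - n));
}

int main(void)
{
    uint32_t lo = 0x44332211;  /* aligned word at 0x50000, little endian */
    uint32_t hi = 0x88776655;  /* aligned word at 0x50004 */
    unsigned off = 1;          /* unaligned load from 0x50001 */

    /* PC-style: two reads, byte lanes merged */
    uint32_t pc  = (lo >> 8 * off) | (hi << (32 - 8 * off));
    /* old-ARM-style: one aligned read, rotated by the low address bits */
    uint32_t arm = ror32(lo, 8 * off);

    printf("pc  = 0x%08X\n", (unsigned)pc);   /* 0x55443322 */
    printf("arm = 0x%08X\n", (unsigned)arm);  /* 0x11443322 */
    return 0;
}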

And when you store, it strips the lower address bits, so stores to 0x50005, 0x50006, or 0x50007 lose their lower bits and actually do an aligned word store to 0x50004.

I am curious to know whether Linux disables the alignment fault (but not curious enough to find the tools and write a Linux program to check).
 
This discussion turned out to be really interesting, and therefore I'm asking myself if Linux slows us down a lot because of task switching... that would trash cache lines...

I'll remain silent, as I know little of ARM asm and only code a little on the GP2X, since I don't have any time :(

Thanks dwelch, I always wanted a crash course on cache + write buffer and everything about optimization on ARM
 
paxl13 posted on Sep 20 2006 at 11:01 PM said:
[snip: quoted post above]


I agree! Some of this is still a little over my head right now. I need to re-read a lot of it, I think. Very interesting stuff, though. We have some real experts on these forums!
 
dwelch posted on Sep 16 2006 at 07:47 PM said:
[snip: full quote of the write-buffer post above]


Excuse me, but that's a bit too advanced for me. Here is a graph of the time a single rotation takes, as a function of the angle of rotation:

[Graph: time for a single rotation vs. rotation angle]


As you can see, reading stuff in order, no matter whether it's in the normal order or backwards, is pretty quick (about 33 ms), whereas reading stuff out of order, at worst jumping 2040 bytes (in this precise case) between each read, takes 2.5 times longer.

Would making the array I read from uncached make everything go about as fast (which is important, since the speed of the slowest rotation is the maximum speed I can allow myself), and how could I do it? Sorry about trying to dumb things down, but things must be made simple enough for me to understand and implement, unless you want to edit my code yourself :)
 
A_SN posted on Sep 24 2006 at 07:42 AM said:
[snip: the benchmark timings quoted from the post above]
I don't suppose you could keep a second copy of the source image rotated 90 degrees... Have you tried swapping the order so that reads are sequential and writes are 2040 bytes out for the worse angles? Another thought is to rotate by 90 degrees less and then have the hardware do the last bit, but I think that would be slower.

You're probably seeing something related to http://www.gp32x.de/board/index.php?showtopic=32088 , so you'll still see differences without caching.
 
rabidcow posted on Sep 25 2006 at 08:20 PM said:
[snip: quoted post above]

Oh yeah, actually keeping a rotated copy sounds like a great idea, mostly because the slowest rotation would then be the 45° one, and I could pre-calculate the 45° rotation too; it's all about whether or not I can afford 4 times as much memory. I wonder why I hadn't thought of it.
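A minimal sketch of that idea, with hypothetical names (do_rotate stands in for the actual rotation routine, which is not shown in this thread): pick whichever pre-rotated copy keeps the reads closest to sequential, and rotate by the leftover angle.

#include <stdint.h>

typedef struct { const uint16_t *pixels; int w, h; } image_t;

/* stand-in for the real rotation inner loop */
static void do_rotate(uint16_t *dst, const image_t *src, int angle_deg)
{
    (void)dst; (void)src; (void)angle_deg;  /* ... existing code ... */
}

static void rotate_blit(uint16_t *dst, const image_t *src0,
                        const image_t *src90, int angle_deg)
{
    if (angle_deg > 45 && angle_deg <= 135)
        do_rotate(dst, src90, angle_deg - 90);  /* reads stay near-sequential */
    else
        do_rotate(dst, src0, angle_deg);
}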

Anyways, I had already seen that thread you linked to, but when they get to such low-level technical details it's like Chinese to me. So if anyone who understands any of that wants to try editing my code, I'll be glad to benchmark it. Besides that, from the first post in this thread it seems there would still be differences, although it's not very clear to me whether that test was actually uncached and unbuffered or not.

By the way, right now I'm trying to do it differently, always reading in order and sometimes writing out of order. Do you think it will go any faster in spite of the out-of-order writing?
 