The 920T write buffer holds 16 words of data and 4 addresses. It can be drained under software control, or by doing a read that has to go out to memory (not the cache).
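If you want the software-controlled drain, the 920T exposes it through CP15; here is a minimal sketch, assuming GCC-style inline asm and privileged mode (the wrapper name is just something I made up):

static inline void drain_write_buffer(void)
{
    /* CP15 c7, c10, 4 is the drain-write-buffer operation: it stalls
       until every pending write in the buffer has reached memory */
    unsigned long zero = 0;
    __asm__ volatile ("mcr p15, 0, %0, c7, c10, 4" : : "r"(zero) : "memory");
}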
When you do a simple store
strb r2,[r0] <---
add r0,r3,r4
sub r2,r1,r3
...
If the write buffer is enabled, the code is in the cache (so that prefetching does not cause memory reads), and the write buffer has room, then:
If the address in r0 is in the data cache you will get a write to the cache. (I think you have to have the cache enabled to get the write buffer; I would check that.)
One of the four address slots in the write buffer gets the value in r0, and one of the 16 data words (or at least one byte of one of those words) gets the value in r2.
Execution continues on to the add instruction. While the core keeps executing, the write buffer takes its sweet time writing that r2 byte out to the address that had been in r0; destroying r0 in the following instructions doesn't matter, because the write buffer has its own copy.
In general what the write buffer does is allow execution to continue, in the hope that on average enough code executes from cache to keep the memory bus idle, so that the whole system is doing two things at once instead of only one thing at a time. You can easily see this by enabling and disabling the write buffer and watching your overall performance. It does help.
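If you want to measure it, something like this rough sketch will do. read_cycle_counter() is a hypothetical stand-in for whatever timer your board provides, and how you enable or disable the write buffer (the bufferable bit in your page tables / MMU setup) is platform specific:

#include <stdio.h>
#include <string.h>

extern unsigned long read_cycle_counter(void); /* hypothetical: your board's timer */

static unsigned char dst[4096], src[4096];

void time_copy(const char *label)
{
    unsigned long t0, t1;
    t0 = read_cycle_counter();
    memcpy(dst, src, sizeof(dst));   /* run once with the write buffer on, once with it off */
    t1 = read_cycle_counter();
    printf("%s: %lu\n", label, t1 - t0);
}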
Now the write buffer is full when either all four address slots (one per store instruction) or all 16 data words are used up before they can be drained (the four addresses are likely to fill up first). It is like a FIFO, first in first out: as soon as one thing dumps out the bottom you can add more things in the top.
If the write buffer is full when your code needs to store, then execution has to wait until the memory interface can move some of that data out of the write buffer.
A byte write
strb r2,[r0] will consume one address and one data element
The same is true for a halfword or word write
strh or str r2,[r0]
So after four of these in a row (assuming you start with an empty write buffer) the write buffer is full and a fifth one has to wait. Already we can take advantage of this though:
Say for example, an unrolled loop:
a[i]=b[i]; i++;
a[i]=b[i]; i++;
a[i]=b[i]; i++;
a[i]=b[i]; i++;
...
if a and b are unsigned char you may get:
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
ldrb r2,[r0],#1
strb r2,[r1],#1 <-- one address one data
The address part of the write buffer is now full, so we have to wait on the SRAM. You got 4 bytes stored before you had to wait.
If these were words instead of bytes though
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
ldr r2,[r0],#4
str r2,[r1],#4 <-- one address four data
and you have to wait, but now you have 16 bytes in the write buffer, not 4, and you did it in a fraction of the number of instructions. If all you wanted to do was copy four bytes rather than 16, then a single word store would have done it, and if the code following the copy did not need to access RAM for whatever reason, you save several instructions while the write buffer works in parallel to push that word / four bytes out.
Now let's take that to its extreme:
LDMIA r6,{r2,r3,r4,r5}
STMIA r1,{r2,r3,r4,r5}
The one STM uses only one address slot and four word locations, so you can string four of these in a row before having to wait on the write buffer, and that fills it perfectly: four addresses and 64 bytes.
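To sanity-check those counts (four byte stores use up the addresses; four 4-register STMs use up both the addresses and the 16 words), here is a toy bookkeeping model. It is not the hardware, just the arithmetic of one address slot plus some number of data words per store:

#include <stdio.h>

#define ADDR_SLOTS 4
#define DATA_WORDS 16

static int addr_used, data_used;

/* each store takes one address slot plus 'words' data words;
   returns 0 when the buffer is full and the core would stall */
static int store_fits(int words)
{
    if (addr_used >= ADDR_SLOTS || data_used + words > DATA_WORDS)
        return 0;
    addr_used += 1;
    data_used += words;
    return 1;
}

int main(void)
{
    int i;
    for (i = 0; i < 5; i++)   /* five strb in a row: the fifth has to wait */
        printf("strb %d fits: %d\n", i + 1, store_fits(1));
    addr_used = data_used = 0;
    for (i = 0; i < 5; i++)   /* five 4-register STMs: the fifth has to wait */
        printf("stm  %d fits: %d\n", i + 1, store_fits(4));
    return 0;
}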
So on this architecture it is to your advantage to align your data on word boundaries (there are a number of ways to do this; be careful not to fall into other compiler traps, though). Either the compiler will already take advantage of it, or you can force larger writes with asm routines.
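For example, a minimal sketch of that idea (the buffer name and sizes are just illustrative, borrowed from the memcpy example further down): declaring the buffer in word-sized units, or otherwise word-aligning it, lets it be copied a word at a time, so each write-buffer address slot carries four bytes instead of one. An asm routine with LDM/STM, like the one above, takes it further still.

#include <stdint.h>

/* declared as words, so it is word-aligned and can be copied word at a time */
static uint32_t mybuffer[(320*240*2)/4];

void copy_words(uint32_t *dest)
{
    unsigned n;
    for (n = 0; n < sizeof(mybuffer)/sizeof(mybuffer[0]); n++)
        dest[n] = mybuffer[n];   /* word (str) stores: 4 bytes per address slot */
}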
You cannot really control when cache lines are going to get filled (everything stops while a prefetch causes 8 reads from RAM to occur), and a cache line fill itself is held off until the write buffer drains, but on average you will still get this parallel-operation gain.
There is one very, very slight risk with a write buffer, and it gets worse the larger the data elements you use. Say you finish a memcpy that uses four-word STMIAs and then immediately write a register in a peripheral that says "my data is ready for you to use". The peripheral will immediately start to read that memory, and the peripheral is not in the path of your cache and write buffer. So if the peripheral for some reason starts at the tail end of your data instead of the beginning, it could get a read or two in before the write buffer makes it there to put the fresh data in, and the peripheral gets stale data (whatever was there before). So you have to have an awareness of how you cache-enable and write-buffer-enable peripheral-mapped memory, or know exactly how the peripheral works. You don't really need to go to the extreme of using the software-controlled drain function; a simple read from non-cached memory will do:
The problem code:
memcpy(vidobuffer,mybuffer,320*240*2);
page_flip();
The fix:
volatile unsigned long *nocache=(volatile unsigned long *)0x12345678; //some non-cached memory address; volatile so the dummy read is not optimized away
...
memcpy(vidobuffer,mybuffer,320*240*2);
x=nocache[0]; //uncached read, forces the write buffer to drain before page_flip
page_flip();
Now you can data-cache and write-buffer enable the peripheral's memory space, and still not worry about the peripheral getting stale data.
Anyway, that's how write buffers work, AFAIK, in a nutshell.
How does it help or hurt you? It doesn't hurt unless you have this peripheral or dual-core sharing issue plus the time delay for the write buffer to finish. I would assume that the peripheral or the other core is going to start from the same end of memory you started at and work in the same direction. So if you started at 0x1000000 and ended at 0x1000100, then when you tell the peripheral or other core that the data is ready, 0x10000F8 through 0x1000100 may not actually be there yet; but if the peripheral or other core starts at 0x1000000 and works toward 0x1000100, it cannot read any faster than your write buffer is writing. The write buffer writes 0x10000F8, the peripheral reads 0x1000000; the write buffer writes ...FC, the peripheral reads ...04; the write buffer is done, the peripheral reads ...08; no conflict. At best they will ping-pong: you write one, I read one, you write one, I read one.
Unless of course a priority-based bus arbiter is in the way, and that may give the peripheral access first. So just try to use a larger data-to-address ratio, and if you get goobers in your data, then isolate where you need to flush the write buffer and/or turn it off.
If I am wasting time and bandwidth with this crap then just tell me, I will crawl back into the dark hole from which I came...