GP2X Benchmarking RAM Writes


dwelch posted on Sep 25 2006 at 09:29 AM said:
I turned the dcache and the write buffer on and off: noticeable difference. The problem appears to be linear; the difference shows up between N = 2 and N = 4, BUT there is a relationship between the problem and the mask. So it isn't necessarily writes 1024 words apart, but some boundary crossing. And this data continues to support that: increasing N to 8, 16, 32, etc., up to 1024 gradually increases the execution time; the larger the N, the sooner and more often that boundary is crossed.

Yes, I have the same feeling that we are crossing a RAM boundary: something like, when you access another bank of RAM, some signals "out there" need to be changed or resynchronized, or who knows what.

I'm going to play with that base address and the mask to try to find out these "bank" sizes and locations. They're going to be quite small, I fear, because for N=150 we are almost at the maximum latency.
 
rixed posted on Sep 25 2006 at 03:43 AM said:
I'm going to play with that base address and the mask to try to find out these "bank" sizes and locations. They're going to be quite small, I fear, because for N=150 we are almost at the maximum latency.

With an address mask of 0x7FF it runs at full speed; with 0xFFF it starts to slow down, so 2048 is the magic number.

On the 940 side it does have that MMU/TLB smell to it, but there is no MMU here.

I am getting a loop execution time of 40.4675 ns for one normal-speed-looking test and 40.60384 ns just into the slowdown curve. The difference is not even one clock cycle (5 ns), so this slowdown is not happening on every pass (well, DUH, I guess I knew that). I have N set to 4 here, so how often does the 0x800 address bit toggle? Hmmm,

0:
str r3, [r1, r2]        @ write a word
add r2, r2, r4, LSL #2  @ increment offset by N words
add r3, r3, #0x1        @ to see when we write twice into the frame buffer
and r2, r2, r5          @ make offset wrap after 16MB
subs r0, r0, #0x1       @ decrement counter
bne 0b

r2 = r2 + (4 << 2) = r2 + 16, and 2048 / 16 = 128.

So one out of every 128 trips through the loop crosses this magic address boundary (assuming each crossing is what causes the something extra to occur).
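
For anyone who would rather read it in C, here is a rough equivalent of the access pattern (illustrative only; the function and variable names are made up, and the real measurement is of course the assembly above):

#include <stdint.h>

/* Rough C equivalent of the assembly loop above (illustrative only).
 * base  = word-aligned buffer being written (r1)
 * n     = stride in words, the "N" of this thread (r4)
 * mask  = byte-offset mask, e.g. 0x00FFFFFF for a 16MB wrap (r5)
 * count = number of times through the loop (r0)
 */
static void write_test(volatile uint8_t *base, uint32_t n,
                       uint32_t mask, uint32_t count)
{
    uint32_t offset = 0;   /* r2: byte offset    */
    uint32_t value  = 0;   /* r3: value written  */

    while (count--) {
        *(volatile uint32_t *)(base + offset) = value++; /* str r3, [r1, r2] */
        offset = (offset + (n << 2)) & mask;             /* add + and        */
    }
}

Called as, say, write_test(buf, 4, 0xFFFFFF, 128000): with n = 4 the byte stride is 16, so bit 0x800 of the offset changes once every 2048 / 16 = 128 iterations, matching the count above.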

Total times for the normal-looking run vs. the one just into the slowdown:

5178.56
5197.29

So that is between three and four extra clock cycles for each boundary crossing, i.e. for every 128 times through the loop.
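
As a quick cross-check of that figure, here is the arithmetic, under two assumptions the thread does not state explicitly: that the totals are in microseconds and that the run is 128,000 iterations (which is roughly what the 40.4675 ns/loop figure divides out to), with a 5 ns clock:

#include <stdio.h>

int main(void)
{
    /* Assumptions, not stated explicitly above: totals are in microseconds,
     * the run is 128,000 iterations, and one clock is 5 ns. */
    const double fast_us   = 5178.56;
    const double slow_us   = 5197.29;
    const double loops     = 128000.0;
    const double crossings = loops / 128.0;   /* one 0x800 crossing per 128 loops */

    double extra_ns = (slow_us - fast_us) * 1000.0 / crossings;
    printf("%.1f ns extra per crossing, i.e. %.1f clocks\n",
           extra_ns, extra_ns / 5.0);          /* roughly 18.7 ns, 3.7 clocks */
    return 0;
}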


Changing to N=8: 5226.42, which is between 9 and 10 extra clock cycles for every 128 times through the loop.
5214.67, about 7 clocks extra, for a different N=8 run.


33398.40

5000+ extra clock cycles (per 128 times through the loop) if N = 512. The pattern is not linear...

For the values of N tried so far, a mask of 0x7FF brings you back to the baseline speed.

N = 512 is the sweet spot, as the stride is then 512 words x 4 bytes = 2048 = 0x800, so it toggles that 0x800 address bit on every single write. And my prior numbers agree with that. Note those numbers are in octal.
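
To make that concrete, a small throwaway check (not the benchmark itself, just an illustration) of how often bit 0x800 of the write offset changes for a given word stride N and address mask:

#include <stdio.h>
#include <stdint.h>

/* Count how often bit 0x800 of the byte offset changes over `loops` writes,
 * for a word stride `n` and byte-offset mask `mask` (illustrative only). */
static unsigned toggles(uint32_t n, uint32_t mask, unsigned loops)
{
    uint32_t offset = 0;
    unsigned count = 0;

    for (unsigned i = 0; i < loops; i++) {
        uint32_t next = (offset + (n << 2)) & mask;
        if ((next ^ offset) & 0x800)
            count++;
        offset = next;
    }
    return count;
}

int main(void)
{
    printf("N=4,   16MB mask:  %u / 128000\n", toggles(4,   0xFFFFFF, 128000)); /* 1000: once per 128 loops   */
    printf("N=512, 16MB mask:  %u / 128000\n", toggles(512, 0xFFFFFF, 128000)); /* 128000: every single write */
    printf("N=512, 0x7FF mask: %u / 128000\n", toggles(512, 0x7FF,    128000)); /* 0: never leaves 2KB window */
    return 0;
}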
 
dwelch posted on Sep 24 2006 at 03:35 PM said:
Well, my tests fall into that category; rixed is on the 940, so that doesn't apply.

8< snip 8<

I had not thought about this, thanks for the reminder; I had this MMU thing as a difference between rixed and me, but forgot where to apply it. Just as a refresher: if you read/write outside the current page table entry, does that mean it reads the next one and the next one (or previous) sequentially until it finds an entry? If these were 4K pages, 1111 words is more than 4 KB, so you would be jumping to the next page or two away depending on where you were in the page the previous time, so one or two word reads... Or is everything fixed and rigid, with part of the virtual address used as an index into the MMU table?
The TLB has a cache of 64 translations between virtual and physical addresses (each covering a 4 KB page in this case). If you access something that's not covered by one of those entries, the MMU must walk the page table (which the OS has given it access to) to find the translation to the physical address. The virtual address is split into three fields: an L1 index, an L2 index, and the offset within the page (12 bits, obviously). The L1 index picks up an entry from the master page table, and that may be used to pick up an entry from the "coarse" L2 page table (if necessary). The result is then stuffed into the TLB.
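
In C terms, the split for the small-page (4 KB) case looks roughly like this (just a sketch of the indexing; the variable names are made up):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t va = 0x0123A5F0;               /* some example virtual address            */

    unsigned l1_index = va >> 20;           /* bits 31:20 index the master (L1) table  */
    unsigned l2_index = (va >> 12) & 0xFF;  /* bits 19:12 index the coarse (L2) table  */
    unsigned page_off = va & 0xFFF;         /* bits 11:0 are the offset in the 4KB page */

    printf("L1 index %u, L2 index %u, page offset 0x%03X\n",
           l1_index, l2_index, page_off);
    return 0;
}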
 
refractor posted on Sep 26 2006 at 01:42 AM said:
8< snip 8<

The 940 does not have an MMU, unfortunately. And there is only one whatsit (protection region) set up in the 940's MPU: whatsit number 0, which covers the full 4 GB address space. And the MPU is turned on (otherwise we would not have the advantage of using the cache and write buffer for this test to isolate write times).
 
We found pretty much the same numbers.
Here are my results:

Note: all this is with the code given above, running on the 940 with the icache on, dcache off and write buffer off.
I changed the starting address to 2 MB to get out of the video buffer, and so removed the instruction that incremented the written value, reducing the loop to 5 instructions.

I also used an address mask for a 1 MB wrap-around.

I measured the code execution time for various N ranging from 0 to 4000, and plotted the result:

[write_speed.png: plot of loop execution time against N]


First, one can see that my timing function is not very precise, but it's enough to clearly spot the strange behavior.

Second, for N < approx. 200 (N is in words, so that is 800 bytes), we have the expected flat pattern: all writes take the same amount of time (very approximately 70 ns, that is, 10 times more than the other instructions). This is not bad considering the data cache is off.

Third: from there the writes get slower and slower, until N = approx. 512, where they are approximately 3 times slower than "normal".

Fourth: this is a maximum: if N keeps growing, the writes don't get any slower.

This suggests that there is a RAM boundary at about 512 words (2 KB).

I then played with the address mask to prove that.

With a mask of 0x7FF (2 KB), everything is fast whatever the N.
Surprisingly, the same goes for masks of 0xFFF (4 KB) and 0x1FFF (8 KB)!
Only with 0x3FFF (16 KB) do we get back the expected result of the writes being slow for N >= 512.

This suggests that this is not merely a boundary-crossing problem; it looks much more as if there were 4 banks of 2 KB accessible simultaneously, and addressing another bank required additional time.
Like an L2 cache, or something similar?
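
One way to see where the "4 banks" guess comes from is to count how many distinct 2 KB blocks each mask lets the writes land in (a quick, purely illustrative calculation):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Masks up to 0x1FFF (4 blocks of 2 KB) stayed fast, while 0x3FFF (8 blocks)
     * was slow, hence the guess of roughly 4 banks of 2 KB open at once. */
    uint32_t masks[] = { 0x7FF, 0xFFF, 0x1FFF, 0x3FFF };

    for (unsigned i = 0; i < sizeof masks / sizeof masks[0]; i++)
        printf("mask 0x%04X -> %u block(s) of 2 KB\n",
               (unsigned)masks[i], (unsigned)((masks[i] + 1) / 2048));
    return 0;
}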

Conclusion for today: don't write randomly over more than 8 kilobytes, or your writes will become three times slower.
 
With the cpu_speed RAM settings given in http://www.gp32x.de/board/index.php?showtopic=32319
the figures change drastically:

- for N=1, no change
- for N=1000, total time <= 80000 usec instead of about 250000.
- for N=10000, same.

So, for 'random' writes, RAM access is more than 3 times faster. No noticeable change for close RAM accesses.

Very nice, but still no clue on what's causing this behavior.
 