notaz
Certified Guru
I've just run some tests on this, and it looks like this might indeed be one of the major bottlenecks. When I write words and post-increment the address by 1kB, my test runs more than 8 (!) times slower than with a post-increment of 32 bytes or less. It seems the write buffer doesn't even help then; there is no difference whether it's enabled or not. I thought this could have something to do with TLB misses, so I set up a single 1MB section in the MMU, ran the test there, and got mostly the same results. I can stuff some nops between those accesses without affecting the measurements, so it must be a memory latency issue.

Exophase said:
Maybe I should see what the costs are when you cross memory rows. On GP2X the initial configuration was very poor, hence why the improved RAM timings helped things. Normally you will be bottlenecked by the column addressing and burst read/write, not the row changing time, so it would have to be pretty significant to affect things this much.
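The test boils down to roughly this kind of loop (a C sketch, not the actual code; the buffer size, write count and timing method here are arbitrary placeholders):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES   (16 * 1024 * 1024)   /* well past any cache size */
#define NUM_WRITES  (4 * 1024 * 1024)    /* same number of stores per run */

/* Store one word, then advance the pointer by 'stride' bytes, wrapping
 * around the buffer, so only the stride differs between runs. */
static double run(volatile unsigned int *buf, unsigned int stride)
{
    unsigned int words = BUF_BYTES / sizeof(unsigned int);
    unsigned int step  = stride / sizeof(unsigned int);
    unsigned int pos   = 0;
    struct timespec t0, t1;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NUM_WRITES; i++) {
        buf[pos] = 0xdeadbeef;
        pos += step;
        if (pos >= words)
            pos -= words;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    unsigned int *buf = malloc(BUF_BYTES);
    if (buf == NULL)
        return 1;

    printf("  32B stride: %.3f s\n", run(buf, 32));
    printf("1024B stride: %.3f s\n", run(buf, 1024));
    free(buf);
    return 0;
}

With a 1kB stride nearly every store lands in a different DRAM row, while the 32B case mostly stays within one.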
Nah, there is nothing like that in the documentation.

Exophase said:
One nice thing about later ARM CPUs is that they have performance monitoring counters that you can use to help better profile where cycles are going. I don't know, maybe this one has them and I just missed it, but I don't think so.
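For what it's worth, on the later cores mentioned above (ARMv7 parts like the Cortex-A8) the PMU cycle counter is just a CP15 read along these lines -- only a sketch, and it assumes the counters were already enabled from privileged code:

Code:
/* ARMv7 only: read the PMU cycle counter (PMCCNTR).  The PMU has to be
 * enabled first (PMCR/PMCNTENSET), and reading it from user mode also
 * needs PMUSERENR set up by the kernel. */
static inline unsigned int read_pmccntr(void)
{
    unsigned int cycles;
    __asm__ volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (cycles));
    return cycles;
}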
EDIT: Well, after increasing the buffer size I can see TLB misses hit pretty hard, as expected. Although hitting a different page on every access is extremely unlikely, that scenario slows things down at least 5 times compared to the exact same code with no TLB misses (which can be achieved thanks to the 1MB section setting in the MMU).
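The 1MB section setup is nothing special, by the way -- just a flat first-level table filled with section descriptors, roughly like this (ARM926-style format; the AP/cache choices and the skipped domain/TTBR setup are simplifications):

Code:
/* Sketch: identity-map the address space with 1MB sections so the whole
 * test buffer is covered by very few TLB entries.  Descriptor bits follow
 * the ARMv5/ARM926 first-level section format; in real code you would not
 * map peripheral space as cacheable like this. */
#define SECT_TYPE   0x2u          /* bits [1:0] = 10 -> section descriptor */
#define SECT_BIT4   (1u << 4)     /* bit 4, should be set on pre-ARMv6 cores */
#define SECT_CB     (3u << 2)     /* C=1, B=1: cacheable, bufferable */
#define SECT_AP_RW  (3u << 10)    /* AP=11: read/write access */

void make_flat_section_table(unsigned int *ttb)  /* 16kB-aligned, 4096 entries */
{
    unsigned int i;

    for (i = 0; i < 4096; i++)    /* one entry per 1MB of address space */
        ttb[i] = (i << 20) | SECT_AP_RW | SECT_CB | SECT_BIT4 | SECT_TYPE;

    /* Still needed afterwards: write ttb to TTBR (CP15 c2), set up the
       domain access control register, and invalidate the TLB. */
}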