MMU Hack: What Exactly Does It Do?


Laurent said:
Squidge said:
Sure it does, just recompile it :)
I am so lazy :)

Did you try to play with sections? You can map 1 MB pages, which could avoid TLB thrashing...
IIRC the kernel already maps 940 memory as sections, but you don't use the same virtual addresses.

It's been so long since I switched on my gp2x :rolleyes:

EDIT: I already wrote about this here


Interesting. With the current setup and a 320x240x16bpp framebuffer alone you'll easily exhaust TLB mapping (never mind with multi-buffering). This would cause a mandatory page table walk at least every 6.4 scanlines (in addition to all the other "first times" in that frame). It'll also take a little more cache, assuming the page table entries end up in there. I wonder how much time a page walk takes?
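(Back-of-envelope, in case anyone wants the numbers behind that 6.4; this is just the buffer geometry against the 4 KB page size:

320 pixels x 2 bytes = 640 bytes per scanline
4096 bytes per page / 640 bytes per scanline = 6.4 scanlines per page
320 x 240 x 2 bytes = 150 KB = 37.5 pages for a single buffer)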
 
Exophase said:
Interesting. With the current setup and a 320x240x16bpp framebuffer alone you'll easily exhaust TLB mapping (never mind with multi-buffering). This would cause a mandatory page table walk at least every 6.4 scanlines (in addition to all the other "first times" in that frame). It'll also take a little more cache, assuming the page table entries end up in there. I wonder how much time a page walk takes?
A page table walk for 4 KB pages requires reading two 32-bit words of data from RAM (not from cache), plus a few cycles. So that's indeed expensive.

I can only hope TI and/or Pandora kernel developers took care of that...
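(Roughly, in ARMv4/v5 MMU terms (my summary, not a quote from the TRM), the two reads for a 4 KB small page are:

1. the level-1 descriptor at TTB + 4 * VA[31:20], which points to a coarse page table
2. the level-2 descriptor at that table + 4 * VA[19:12], which gives the physical page address and permissions

The second read depends on the first, so the two memory latencies add up.)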
 
Laurent said:
Exophase said:
Interesting. With the current setup and a 320x240x16bpp framebuffer alone you'll easily exhaust TLB mapping (never mind with multi-buffering). This would cause a mandatory page table walk at least every 6.4 scanlines (in addition to all the other "first times" in that frame). It'll also take a little more cache, assuming the page table entries end up in there. I wonder how much time a page walk takes?
A page table walk for 4 KB pages requires reading two 32-bit words of data from RAM (not from cache), plus a few cycles. So that's indeed expensive.

I can only hope TI and/or Pandora kernel developers took care of that...


Expensive enough, but considering that every framebuffer write-out goes to RAM for every application written, it's not so much. Granted, that's mitigated by the write buffer, but unless the code is very computationally bound that can only do so much.

Keeping it in cache doesn't seem like it'd be all that great anyway unless you have a ton of cache (it'd definitely be even worse this way on GP2X, since that cache is not going to live any longer than the TLB will). What would be cool is if an ARM with out of order completion implements a non-blocking, hit-under-miss TLB (to complement a cache with those features).

(too bad Cortex-A8 implements a blocking cache, which is one area in which it's a bit of a step down from ARM11.. hopefully the L2 cache will alleviate a lot of that)
 
Exophase said:
Laurent said:
A page table walk for 4 KB pages requires reading two 32-bit words of data from RAM (not from cache), plus a few cycles.

I can only hope TI and/or Pandora kernel developers took care of that...
Expensive enough, but considering that every framebuffer write-out goes to RAM for every application written, it's not so much. Granted, that's mitigated by the write buffer, but unless the code is very computationally bound that can only do so much.
Stores have basically no latency (with the write buffer), while reads do, so the penalty for a PTW will be 2 x the memory read latency (the 2 words read are not in the same place at all).

I can't remember how many bytes the WB of the arm920 has, and I don't know what the memory latency is on the gp2x. If we had these two pieces of information, we could estimate the cost of the TLB misses. Another way would be to hack the mmu hack to use sections and compare to 4 KB pages... if only I had some free time :p
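(For comparison, a 1 MB section is described by a single level-1 descriptor, the one word at TTB + 4 * VA[31:20] with its low bits marking it as a section, so a section TLB miss costs one memory read instead of two, and each TLB entry then covers 1 MB instead of 4 KB.)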

Exophase said:
Keeping it in cache doesn't seem like it'd be all that great anyway unless you have a ton of cache (it'd definitely be even worse this way on GP2X, since that cache is not going to live any longer than the TLB will).

I was talking about having TTB entries (TTB entries are the 32 bit data words used by the MMU during a PTW) inside the cache. If you are doing some work with the framebuffer that doesn't need to access too many other memory areas, then it's a win especially if you do non sequential accesses to your framebuffer.

Exophase said:
What would be cool is if an ARM with out of order completion implements a non-blocking, hit-under-miss TLB (to complement a cache with those features).

(too bad Cortex-A8 implements a blocking cache, which is one area in which it's at a bit of a step down from ARM11.. hopefully the L2 cache will alleviate a lot of that)


I will refrain from commenting on that :ph34r:
 
Laurent said:
Stores have basically no latency (with the write buffer), while reads do, so the penalty for a PTW will be 2 x the memory read latency (the 2 words read are not in the same place at all).
I disagree. Write outs to the framebuffer are done at the very least a scanline at a time, and it'd take an enormous writebuffer to handle all of that. After the first few pixels the write buffer will be full and will be constantly writing back to SDRAM. Only if the distance between writes is greater than the number of cycles it takes to write to SDRAM will the user experience no latency, but I imagine this is a fair number of cycles.
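(For scale: one 320-pixel scanline at 16bpp is 640 bytes, i.e. 160 words, so any realistic write buffer is far too small to absorb even a single scanline's worth of stores.)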

Laurent said:
I can't remember how many bytes the WB of the arm920 has, and I don't know what the memory latency is on the gp2x. If we had these two pieces of information, we could estimate the cost of the TLB misses. Another way would be to hack the mmu hack to use sections and compare to 4 KB pages... if only I had some free time :p
16 words, 4 addresses. It doesn't seem to do any merging, but I've seen conflicting results that I just can't really explain (like having 32bit writes instead of 2x 16bit writes not improving performance). I'd love to know exactly how fast the SDRAM is; I think one person here said about 10 cycles. We know what the bus cycle parameters (i.e. RAS/CAS timings, etc.) can be set to (I'm sure it'll be much lower overall for craigix's timings), and we know that an SDRAM bus cycle is 2 CPU clock cycles. With some looking into the SDRAM spec we should at least be able to determine a minimum time.
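(As a very rough sketch, assuming typical SDR SDRAM parameters rather than the GP2X's actual register settings: a non-sequential read pays roughly tRCD + CAS latency, i.e. somewhere around 4 to 6 bus cycles, and at 2 CPU clocks per bus cycle that's about 8 to 12 CPU cycles before the data comes back, which at least isn't far from the "about 10 cycles" figure.)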

Laurent said:
I was talking about having TTB entries (TTB entries are the 32 bit data words used by the MMU during a PTW) inside the cache. If you are doing some work with the framebuffer that doesn't need to access too many other memory areas, then it's a win especially if you do non sequential accesses to your framebuffer.
Do you mean the level 1 page entries? I agree, that could be a win. But better than caching them would be to just give them another table altogether (... maybe).

Laurent said:
I will refrain from commenting on that :ph34r:
Maybe you know something I don't B)
 
Exophase said:
Laurent said:
Stores have basically no latency (with the write buffer), while reads do, so the penalty for a PTW will be 2 x the memory read latency (the 2 words read are not in the same place at all).
I disagree. Write outs to the framebuffer are done at the very least a scanline at a time, and it'd take an enormous writebuffer to handle all of that. After the first few pixels the write buffer will be full and will be constantly writing back to SDRAM. Only if the distance between writes is greater than the number of cycles it takes to write to SDRAM will the user experience no latency, but I imagine this is a fair number of cycles.
That's partly true. If the write buffer were doing merging and/or if you had to do some computations before writing, then writes would come for free.

Anyway, what I meant when I said stores are free is that your CPU pipeline won't be blocked (if the write buffer is not saturated), whereas for reads the reader (be it the CPU or the MMU) *has* to stall.

Exophase said:
16 words, 4 addresses. It doesn't seem to do any merging, but I've seen conflicting results that I just can't really explain (like having 32bit writes instead of 2x 16bit writes not improving performance). I'd love to know exactly how fast the SDRAM is; I think one person here said about 10 cycles. We know what the bus cycle parameters (i.e. RAS/CAS timings, etc.) can be set to (I'm sure it'll be much lower overall for craigix's timings), and we know that an SDRAM bus cycle is 2 CPU clock cycles. With some looking into the SDRAM spec we should at least be able to determine a minimum time.

Never trust specs, do your own measurements ;)

Exophase said:
Laurent said:
I was talking about having TTB entries (TTB entries are the 32 bit data words used by the MMU during a PTW) inside the cache. If you are doing some work with the framebuffer that doesn't need to access too many other memory areas, then it's a win especially if you do non sequential accesses to your framebuffer.
Do you mean the level 1 page entries? I agree, that could be a win. But better than caching them would be to just give them another table altogether (... maybe).
Both level 1 and level 2 could be cached (and should).

What do you mean by "another table"? Samsung did something like that for some of their SC chips: they have some dedicated memory on chip to hold the first level TTB. Is that what you meant?

Exophase said:
Laurent said:
I will refrain from commenting on that :ph34r:
Maybe you know something I don't B)
Due to my day job, yes :D
 
Laurent said:
That's partly true. If the write buffer were doing merging and/or if you had to do some computations before writing, then writes would come for free.

Anyway, what I meant when I said stores are free is that your CPU pipeline won't be blocked (if the write buffer is not saturated), whereas for reads the reader (be it the CPU or the MMU) *has* to stall.
All things would indicate that it's not doing any kind of merging/coalescing, although I wish it were >_> I think this is an ARM10 feature. Write buffering pretty much always helps; it could help entirely, but you'd need that much computation between stores. Anyway, we both understand how it works so.. yeah.

Too bad it's not ARM11/XScale; there even the loads are non-blocking (so long as nothing uses the read value, nothing reads that cache line again, and the read queue doesn't fill up with cache misses too quickly).


Laurent said:
Never trust specs, do your own measurements ;)
Now if only my own measurements made any kind of sense half the time. D: But yeah, absolutely. I'll see if I get around to doing that sometime.

Laurent said:
Both level 1 and level 2 could be cached (and should).
Since the dcache only spans 16 KB while the DTLB normally spans 256 KB, I would expect (except for very random accesses) the latter to outlive the former, making it likely that those level 2 entries in cache won't be there when you need them again. It would still be a win on spatial locality though.
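(That 256 KB assumes the 920T's 64-entry data TLB: 64 entries x 4 KB = 256 KB of mapped data, versus 512 lines x 32 bytes = 16 KB of dcache.)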

Laurent said:
What do you mean by "another table"? Samsung did something like that for some of their SC chips: they have some dedicated memory on chip to hold the first level TTB. Is that what you meant?
Yes, absolutely.

Laurent said:
Due to my day job, yes :D
I wish I had a job like that. If you guys could use someone like me let me know.. ah pipe dreams.
 
Exophase said:
All things would indicate that it's not doing any kind of merging/coalescing, although I wish it were >_> I think this is an ARM10 feature. Write buffering pretty much always helps; it could help entirely, but you'd need that much computation between stores. Anyway, we both understand how it works so.. yeah.

I suddenly realize you mentioned this
QUOTE
The write buffer can hold up to 16 words of data and four separate addresses.

This means merging has to occur or it wouldn't make sense ;)
My understanding is that the write buffer holds 4 lines of 128 bits and since 128 bit entities don't exist, it means the WB merges.

Exophase said:
I wish I had a job like that. If you guys could use someone like me let me know.. ah pipe dreams.
Look on the ARM website; there are often open positions. I hope you know some Verilog ;)
 
Laurent said:
I suddenly realize you mentioned this
QUOTE
The write buffer can hold up to 16 words of data and four separate addresses.

This means merging has to occur or it wouldn't make sense ;)
My understanding is that the write buffer holds 4 lines of 128 bits and since 128 bit entities don't exist, it means the WB merges.

You're forgetting about block stores (stm); those can essentially be seen as multi-word stores. It's for these that the write buffer is more than 4 words wide.

It's possible that it's merging and it just doesn't say so on the spec sheet, although ARM10 specifically mentions introducing coalescing. I could see merging as being simpler than coalescing (only applying to adjacent stores that happen to be sequential). At first I thought it might be performing this because my 32bit writes were not faster than my 2x 16bit writes, and they should have been, since the bus is supposed to be 32 bits wide (never mind the benefit of sequential stores over non-sequential ones). This was of course to non-cached regions.

Then Trenki told me that his simple loop of 16bit writes to clear the framebuffer was twice as slow as his loop of 32bit writes. If the write buffer was merging then I'd expect them to be the same speed, or at least not exactly 2x different. The overhead of looping twice the amount should have been at least partially (but more likely fully) shadowed by the write buffer drains.
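Something like the following is what I have in mind (not Trenki's actual code, just the two obvious inner loops, assuming r0 = framebuffer pointer, r1 = pixel count (even) and r2 = a 16-bit colour):

clear_16bit:
strh r2, [r0], #2 ; one pixel per halfword store
subs r1, r1, #1
bne clear_16bit

clear_32bit:
orr r2, r2, r2, lsl #16 ; duplicate the colour into both halves of the word
mov r1, r1, lsr #1 ; half as many stores
clear_32bit_loop:
str r2, [r0], #4 ; two pixels per word store
subs r1, r1, #1
bne clear_32bit_loop

With a non-merging write buffer every strh takes its own address slot, so the first loop fills the buffer twice as fast for the same number of pixels.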

Laurent said:
Look on the ARM website; there are often open positions. I hope you know some Verilog ;)
Nope, no Verilog. I'm a low level programmer, not an engineer (I haven't even taken a class where students design CPUs, although I wish I had). Do you think I could learn? Or is that way too out of my league?

I'd probably be best for writing firmware/driver/BIOS code and emulators/simulators and what have you.
 
Exophase said:
my 32bit writes were not faster than my 2x 16bit writes
Maybe you were accidentally doing unaligned writes there? As far as I remember, I was surely getting better results with words than halfwords in my projects. I've confirmed that you always get an exception on unaligned accesses (regardless of kernel settings; I think the ARM920 doesn't even allow disabling alignment exceptions), and the kernel will at least increment a counter (a simple variable, not some hardware counter), see here, the do_alignment() function.

EDIT: The ARM920 pdf says alignment checking can be disabled, but it doesn't seem to be done by GP2X Linux.
 
Exophase said:
Laurent said:
I suddenly realize you mentioned this
QUOTE
The write buffer can hold up to 16 words of data and four separate addresses.

This means merging has to occur or it wouldn't make sense ;)
My understanding is that the write buffer holds 4 lines of 128 bits and since 128 bit entities don't exist, it means the WB merges.

You're forgetting about block stores (stm); those can essentially be seen as multi-word stores. It's for these that the write buffer is more than 4 words wide.
You are playing a guessing game here. Do you know what the interface width between the CPU and the data module is? If I were to believe you it would have to be 128 bit wide, which I doubt :)

Exophase said:
Then Trenki told me that his simple loop of 16bit writes to clear the framebuffer was twice as slow as his loop of 32bit writes. If the write buffer was merging then I'd expect them to be the same speed, or at least not exactly 2x different. The overhead of looping twice the amount should have been at least partially (but more likely fully) shadowed by the write buffer drains.

It's hard to conclude anything without seeing the (generated?) asm code involved in both cases.

Exophase said:
Nope, no Verilog. I'm a low level programmer, not an engineer (I haven't even taken a class where students design CPUs, although I wish I had). Do you think I could learn? Or is that way too out of my league?

I'd probably be best for writing firmware/driver/BIOS code and emulators/simulators and what have you.
Look for job postings regularly; there are software positions posted sometimes. The problem might be your lack of a diploma and of demonstrated professional experience. But check the ARM site :) This one, for instance: http://www.arm.com/employment/19746.html
 
Laurent said:
You are playing a guessing game here. Do you know what the interface width between the CPU and the data module is? If I were to believe you it would have to be 128 bit wide, which I doubt :)
Come again? The interface width between the CPU and the RAM is 32 bits, is it not? I don't see why what I'm saying implies 128-bit-wide memory. Storing block moves like this in the write buffer allows it to hold more before it needs to be drained (and therefore will block additional requests to add to it). Because the ARM AMBA bus used differentiates sequential and non-sequential accesses, a 128-bit block transfer will still be potentially more efficient than four 32-bit ones. Otherwise why would it benefit the write buffer to merge in the first place?

Anyway, I decided to look at the 940T manual instead and found some interesting things:
- 8 words/4 non-sequential addresses instead of 16/4 (but I'm certain the 920T is the latter)
- Explicitly states non-merging:

"The write buffer is non-merging, so even if two separate buffered external memory
writes are performed that are sequentially related, they still take two address locations
within the buffer, and are treated as nonsequential accesses. This is also true for
non-word writes to the same word address. In this instance two address and two data
locations are used in the write buffer."

It then goes on to give an example using STMs:

"The write buffer splits any accesses caused by an STM instruction on 4-word boundaries.
Each set of words uses one address location in the write buffer. (some bits about protection removed)

Figure 4-3 shows the write buffer behavior for the following code sequence:
MOV r11, #0x10c ; set pointer
MOV r12, #0x20c ; set pointer
STMIA r11, {r0-r5} ; store 6 registers
STMIA r12, {r6-r10} ; store 5 registers

In this code, a pointer has been set to address 0x10C. A store multiple of six registers is
then executed. This instruction uses six data registers, and three address registers within
the write buffer. "

This demonstrates what I mean about stm's using more than one word per address slot. Another important application I failed to mention is cache line writebacks, which also use four words per address slot because of their sequential nature.

Laurent said:
It's hard to conclude anything without seeing the (generated?) asm code involved in both cases.
We know that the 940T is non-merging, so it's very likely that the 920T isn't either, although I do wonder why it wasn't mentioned in the manual. The code was hand-written ASM; what I can tell you is that it used no more instructions, nor should it have used any more computational cycles, than the original: instead of having two strh instructions it had an orr and an str. Every 8 pixels there was a gap for reading the tile information though, so this would give the write buffer additional time to drain. Still, it should have only been capable of storing 4 pixels in the write buffer with the first variant, and 8 with the second, and a single 32bit non-sequential write is supposed to be faster than two 16bit non-sequential writes (which they'd have to be due to the lack of merging, unless the 920T really is doing so). Something might have been off in my tests somehow; it's probably worth looking into more deeply.

Laurent said:
Look for job postings regularly; there are software positions posted sometimes. The problem might be your lack of a diploma and of demonstrated professional experience. But check the ARM site :) This one, for instance: http://www.arm.com/employment/19746.html



Please don't get the wrong idea, I have an MS in Computer Science and some professional experience. I just don't have much engineering education/experience (so a software position would be more suitable as you mentioned). Unfortunately I can't access the listing (403 permission error), maybe I have to register/log in somewhere.
 
Exophase said:
Laurent said:
You are playing a guessing game here. Do you know what the interface width between the CPU and the data module is? If I were to believe you it would have to be 128 bit wide, which I doubt :)
Come again? The interface width between the CPU and the RAM is 32 bits, is it not? I don't see why what I'm saying implies 128-bit-wide memory. Storing block moves like this in the write buffer allows it to hold more before it needs to be drained (and therefore will block additional requests to add to it). Because the ARM AMBA bus used differentiates sequential and non-sequential accesses, a 128-bit block transfer will still be potentially more efficient than four 32-bit ones. Otherwise why would it benefit the write buffer to merge in the first place?
Oh I see where the misunderstanding comes from: I am not talking about a 128-bit interface to memory, but about the one from the CPU to its internal memory subsystem, where the write buffer is. My point is that if you have 16 words for 4 addresses then some merging *has* to occur, because I doubt the arm9tdmi uses 128-bit busses to talk to its memory subsystem. So perhaps the merging only occurs for stm, but it's merging nonetheless.

(skip the 940T behaviour which seems to demonstrate what I call (probably exaggerating) merging)

Exophase said:
This demonstrates what I mean about stm's using more than one word per address slot. Another important application I failed to mention is cache line writebacks, which also use four words per address slot because of their sequential nature.

Right. I call it merging, but it's indeed limited merging :)

Exophase said:
We know that the 940T is non-merging, so it's very likely that the 920T isn't either, although I do wonder why it wasn't mentioned in the manual. The code was hand-written ASM; what I can tell you is that it used no more instructions, nor should it have used any more computational cycles, than the original: instead of having two strh instructions it had an orr and an str. Every 8 pixels there was a gap for reading the tile information though, so this would give the write buffer additional time to drain. Still, it should have only been capable of storing 4 pixels in the write buffer with the first variant, and 8 with the second, and a single 32bit non-sequential write is supposed to be faster than two 16bit non-sequential writes (which they'd have to be due to the lack of merging, unless the 920T really is doing so). Something might have been off in my tests somehow; it's probably worth looking into more deeply.

Well according to what you say we can perhaps safely conclude that "merging" is limited to stm.

QUOTE
http://www.arm.com/employment/19746.html

Please don't get the wrong idea, I have an MS in Computer Science and some professional experience. I just don't have much engineering education/experience (so a software position would be more suitable as you mentioned). Unfortunately I can't access the listing (403 permission error), maybe I have to register/log in somewhere.


Sorry if I seemed to underestimate you. You should know that for us French people "engineer" is not limited to "electronic engineer"; it's a more generic term that describes a technical level of education. So when you wrote "I'm not an engineer" I got it wrong :D

That's strange; the link works for me from home. Basically it's a graduate position; you are perhaps overqualified already :)
Here is an extract:
QUOTE
* Porting of Linux kernel to the latest ARM processors and the development of device drivers for new peripherals.
* Specification of new applications in collaboration with senior engineers.
* Development of these applications using the ARM RealView Tools.
* Debugging and verification of designs in simulation and on boards using the latest ARM RealView debug and trace tools.
* Creation of applications notes and working with Technical Publications to create User Guides.

The job is in Cambridge.
Anyway my point was to show that ARM has positions in computer science.
 
Laurent said:
Oh I see where the misunderstanding comes from: I am not talking about a 128-bit interface to memory, but about the one from the CPU to its internal memory subsystem, where the write buffer is. My point is that if you have 16 words for 4 addresses then some merging *has* to occur, because I doubt the arm9tdmi uses 128-bit busses to talk to its memory subsystem. So perhaps the merging only occurs for stm, but it's merging nonetheless.

(skip the 940T behaviour which seems to demonstrate what I call (probably exaggerating) merging)

Right. I call it merging, but it's indeed limited merging :)

Well according to what you say we can perhaps safely conclude that "merging" is limited to stm.
Ah, I see what you mean. The cycle timings indicate 1N (for the first) + 1S for each remaining STM register stored; since they'd have to be 32 bits like you said, you're right, the write buffer would have to be accumulating (or merging) them. I guess I was looking at it from more of a software POV than from the bus itself.

I wonder what the width between the cache and the write buffer is, since they're more tightly coupled. I figure it'd want to get an evicted cache line into it as soon as possible.

Off topic, but ARM11 appears to have a 64-bit interface and can do two LDM/STM stages at once, although that might just apply to the cache. Of course Cortex-A8 can do this as well; I assume it has dual-ported cache.

Laurent said:
Sorry if I seemed to underestimate you. You should know that for us French people "engineer" is not limited to "electronic engineer"; it's a more generic term that describes a technical level of education. So when you wrote "I'm not an engineer" I got it wrong :D
Heh, so I see. I think it's like that here too, only probably more specific. I could get a position as a "software engineer" (which is a valid title but a lot of people don't consider that real engineering), but I'd be harder pressed to get one as a "computer engineer." Thanks for the extract, jobs like that look pretty nice. I wonder what caused the permissions problem o_o
 