Pandora A "new" Good Way To Program For The Pandora?


Laurent said:
It's because your code uses an undefined behaviour since it doesn't return a value for main.
Yeah, but I was referring to how it saved and restored r3 or r4 on the stack, which I assume was for alignment purposes. After changing the code to return zero, gcc 4.3.3 also aligns the stack, albeit in a less-efficient way:
Code:
main:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, uses_anonymous_args = 0
        str     lr, [sp, #-4]!
        ldr     r0, .L3
        sub     sp, sp, #4
        bl      puts
        mov     r0, #0
        add     sp, sp, #4
        ldmfd   sp!, {pc}
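
For comparison, a tighter sequence that keeps the same 8-byte alignment by pushing a scratch register alongside lr (a sketch of what I'd expect, not actual compiler output):
Code:
main:
        stmfd   sp!, {r3, lr}           @ 8 bytes in one store, sp stays aligned
        ldr     r0, .L3
        bl      puts
        mov     r0, #0
        ldmfd   sp!, {r3, pc}           @ restore and return in one instruction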

Laurent said:
So if you want movw/movt, you should use CSL compiler or perhaps gcc 4.4.n.
It seems that lacking movt isn't the only problem with gcc 4.3.3, which is the compiler used in the current version of Angstrom.
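
For reference, loading a 32-bit constant looks like this with and without movw/movt (just a sketch; the constant is made up, and actual compiler output will differ):
Code:
        @ ARMv7: build the constant in two instructions, no memory access
        movw    r0, #0x5678             @ r0 = 0x00005678
        movt    r0, #0x1234             @ r0 = 0x12345678
        @ gcc 4.3.3 instead emits a PC-relative literal pool load:
        ldr     r0, .L3                 @ .L3 holds the constant in memory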
 
When looking at the quality of compiled functions you should look at more than just how it compiles main, because (in my experience) it's not always representative of how other functions will compile. I also don't know why you consider allocating 4 bytes on the stack to be aligning it.
 
Exophase said:
When looking at the quality of compiled functions you should look at more than just how it compiles main, because (in my experience) it's not always representative of how other functions will compile. I also don't know why you consider allocating 4 bytes on the stack to be aligning it.
It allocates a total of 8 bytes on the stack, first pushing lr, and then four more bytes. It does this in every function, not just main. gcc uses the ldrd/strd instructions on stack data, so it wants (sp&7)==0.
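
To illustrate (a sketch; the registers and offsets are arbitrary): with (sp&7)==0, any stack slot at a multiple-of-8 offset can be accessed as an aligned doubleword:
Code:
        sub     sp, sp, #16             @ frame size a multiple of 8
        strd    r4, r5, [sp]            @ sp+0..7, 64-bit aligned
        strd    r6, r7, [sp, #8]        @ sp+8..15, also aligned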

The stack alignment is good, but the instruction sequences that it generates to do this could be better.
 
Silly me, I had explained why it was doing so, but I probably accidentally deleted that part in my answer.

The ARM EABI requires the stack to be aligned to a multiple of 8. I agree gcc 4.3.3 does it in a stupid way, given that most advanced ARM processors have a 64-bit internal interface to the cache.
 
On Cortex-A8, using 64-bit ldrd/strd isn't going to improve much except to lower the instruction count. Even though the path to dcache is 64 bits wide, the first 32-bit element of a 2+ word transfer will be done in isolation. This is because Cortex-A8 statically schedules loads and can't determine misalignment ahead of time. So you only benefit from the 64-bit bus in > 2 word block transfers. It's IMO a pretty disappointing feature of the architecture, which Cortex-A9 corrects, and it's why I consider A9 to combine a lot of the strengths of Cortex-A8 with the strengths of ARM11, along with new strengths and some new weaknesses.

In this case add/sub for the full allocation is probably the best choice, rather than folding the increment into the memory operation. The CPU can probably schedule the load and store in parallel with the allocation. Not sure about the versions with post-increment/decrement.
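
For concreteness, the two shapes I mean (a sketch; the frame size is arbitrary):
Code:
        @ folding the allocation into the store (pre-indexed writeback):
        str     lr, [sp, #-8]!          @ push lr and allocate 8 bytes at once
        @ versus a separate full allocation:
        sub     sp, sp, #8              @ allocate the whole frame up front
        str     lr, [sp]                @ plain store, same final layout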
 
Exophase said:
On Cortex-A8, using 64-bit ldrd/strd isn't going to improve much except to lower the instruction count. Even though the path to dcache is 64 bits wide, the first 32-bit element of a 2+ word transfer will be done in isolation. This is because Cortex-A8 statically schedules loads and can't determine misalignment ahead of time. So you only benefit from the 64-bit bus in > 2 word block transfers.
Where did you get that information from? It doesn't make sense to me, though it could be true.
 
16.2.9 in the TRM.

"The number of registers in the register list usually determines the number of cycles
required to execute a load or store multiple instruction. The processor can load or store
two 32-bit registers in each cycle. However, to access 64 bits, the address must be 64-bit
aligned. Processor scheduling is static, and it is not possible to know the address
alignment at schedule time. Therefore, scheduling for the first transfer of loads and the
last transfer of stores must be done assuming the address might be unaligned."

The individual cycle timing counts for ldrd/strd and ldm/stm confirm this.
 
Exophase said:
16.2.9 in the TRM.

"The number of registers in the register list usually determines the number of cycles
required to execute a load or store multiple instruction. The processor can load or store
two 32-bit registers in each cycle. However, to access 64 bits, the address must be 64-bit
aligned. Processor scheduling is static, and it is not possible to know the address
alignment at schedule time. Therefore, scheduling for the first transfer of loads and the
last transfer of stores must be done assuming the address might be unaligned."
That section of the manual refers only to the ldm/stm instructions; however, experimental testing confirms that it applies to the ldrd/strd instructions too. I've tested ldrd and strd, and they take the same amount of time whether the address is aligned or unaligned.
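
A minimal loop along these lines is enough to show it (a sketch, not the exact code I used; the iteration count and registers are arbitrary). Run it once with r0 doubleword-aligned and once with r0 offset by 4, and compare the times:
Code:
        mov     r2, #0x100000           @ iteration count
1:
        ldrd    r4, r5, [r0]            @ 64-bit load
        ldrd    r6, r7, [r0, #8]
        strd    r4, r5, [r0, #16]       @ 64-bit store
        subs    r2, r2, #1
        bne     1b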

So basically the stack alignment is useless on A8.
 
Even if you don't take that section to refer to ldrd/strd, it's pretty obvious that it would, and just above it ldrd and strd are listed as 2 cycles.

If you want 64-bit throughput w/o the one-off penalty on edges, I guess NEON is the best option, if you can get things done in it. Mainly, if all your data is streaming and you don't need to compute pointers. If I'm not mistaken, NEON can transfer 128 bits per cycle if your data is aligned, which is pretty nice, especially with the pipeline layout that lets you stream from L2 cache with zero load-use cost.
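
For instance, a streaming copy along these lines (a sketch; assumes both pointers are 128-bit aligned and the byte count is a multiple of 32):
Code:
1:
        vld1.64 {d0-d3}, [r0,:128]!     @ 256-bit aligned load, src in r0
        vst1.64 {d0-d3}, [r1,:128]!     @ 256-bit aligned store, dst in r1
        subs    r2, r2, #1              @ r2 = byte count / 32
        bne     1b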
 
Exophase said:
Even if you don't take that section to refer to ldrd/strd, it's pretty obvious that it would, and just above it ldrd and strd are listed as 2 cycles.
Yeah, but it's unclear since it's worded poorly. It says "immediate 64-bit offset" where it should say "64-bit load, immediate offset".
 
Ari64 said:
Yeah, but it's unclear since it's worded poorly. It says "immediate 64-bit offset" where it should say "64-bit load, immediate offset".

Haha yeah, I found that pretty awful too. I'd expect this of Samsung or some other company whose first language isn't English, but I'd hope for better from ARM. At least that's documented, though. A few of the less common/more recent integer instructions aren't mentioned at all, or not in sufficient detail, and it doesn't go into great detail about branch mispredict penalties either. It also seems to list timing for some NEON instructions that I'm not sure even exist.
 
Intel and AMD don't describe such things as branch predictors either. These are typically patented or completely secret.
 
Laurent said:
Intel and AMD don't describe such things as branch predictors either. These are typically patented or completely secret.

Actually, Cortex-A8's branch prediction is described in depth in a computer architecture book whose name I don't remember. It's not a free book, but you can see half of it in samples, including most of the information about the BTB, global history, etc. Anyway, what I was referring to was NOT the branch prediction hardware, just the exact penalty for a mispredict. ARM11 documents are pretty clear, for instance.
 
Exophase said:
A few of the less common/more recent integer instructions aren't mentioned at all, or not in sufficient detail, and it doesn't go into great detail about branch mispredict penalties either.
Yeah, sxtb, sxth, uxth, etc. are missing. The branch prediction oddities took me a while to figure out, especially the issue where there is apparently only one BTB entry for every pair of instructions. If a conditional branch is immediately followed by another branch, it will be mispredicted.
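
That is, a sequence like this takes the mispredict (a sketch; the labels are arbitrary):
Code:
        cmp     r0, #0
        beq     path_a                  @ conditional branch...
        b       path_b                  @ ...immediately followed by another
                                        @ branch in the same pair: mispredicted
path_a: nop
path_b: nop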

Laurent said:
Intel and AMD don't describe such things as branch predictors either. These are typically patented or completely secret.
K10 has a similar issue where a conditional branch is followed by a ret instruction. This is documented in the Software Optimization Guide for AMD Family 10h Processors, section 6.2. ARM should have documented this issue, but they didn't.
 
Ari64 said:
Laurent said:
Intel and AMD don't describe such things as branch predictors either. These are typically patented or completely secret.
K10 has a similar issue where a conditional branch is followed by a ret instruction. This is documented in the Software Optimization Guide for AMD Family 10h Processors, section 6.2. ARM should have documented this issue, but they didn't.
They describe part of it. But you don't get tricky details that can kill your performance very badly. For instance, do they describe aliasing issues in branch buffers due to the hashing function in use or due to index limitations?

But I agree there's a lack of optimization guides for ARM... The problem is that there are too many cores with vastly different micro-architectures (not even talking about different instruction sets).
 
Exophase said:
Actually, Cortex-A8's branch prediction is described in depth in a computer architecture book whose name I don't remember. It's not a free book, but you can see half of it in samples, including most of the information about the BTB, global history, etc.

I hadn't run across that; where did you find it?

After experimenting with the branch predictor, this is my best guess as to how it works:

Each cycle, the instruction fetch retrieves 64 bits (2 instructions) from the L1 i-cache. The two instructions are decoded in parallel, but the L1 reads are always aligned, so if you have a jump to an odd-numbered location then it will only execute one instruction from that fetch.

The branch predictor checks bits 25-27 to determine if either instruction is potentially a branch. Load instructions and register-register data processing instructions are considered possible branches; data processing instructions with immediate values (bit 25 set) are assumed not to be branches.

If only one instruction is a possible branch, then it uses the branch history and BTB to make a prediction, and instruction fetch continues from there. If both instructions are potential branches, then there is a delay of at least one cycle while it checks to see if either instruction is actually a branch, by checking if the instruction writes to r15. If only one instruction is an actual branch, then a prediction is made, but a pipeline bubble may occur due to the delay. If both instructions are actual branches, then it seems to mispredict at least one of them.

A consequence of this design is that instructions which are not branches, but which occur next to branches, can stall the branch predictor. For example, a MOV reg,reg instruction following an unconditional branch can delay prediction of the branch. Replacing the MOV with ADD reg,reg,#0 seems to prevent this, since the branch predictor will ignore add instructions with immediate values.
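
Concretely, the workaround looks like this (a sketch; registers and labels are arbitrary):
Code:
        @ can stall the predictor: the register-form mov (bit 25 clear)
        @ looks like a potential branch to the pre-decode check
        b       somewhere
        mov     r1, r2

        @ avoids the stall: the immediate form (bit 25 set) is ruled out
        @ as a branch, so only one BTB check is needed for the pair
        b       somewhere
        add     r1, r2, #0
somewhere:
        nop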
 
Sigh, I knew you'd make me go find it :(

It was probably this:

http://books.google....page&q=&f=false

See page 547. Like I said, this is just a preview. It's missing a lot of pages.

According to this, instructions are determined to be branching by their BTB entry, not by decoding the instruction - which makes sense, because you have to do a decent amount of decoding to get that far. It's a fairly minor difference though.
 
Exophase said:
According to this, instructions are determined to be branching by their BTB entry, not by decoding the instruction - which makes sense, because you have to do a decent amount of decoding to get that far. It's a fairly minor difference though.
That's probably true, but I suspect it can only look up one BTB entry per cycle, so it needs to do some minimal decoding (at least the first few bits) to determine which instructions are potentially branches.

There is clearly a stall when there are two instructions which might both be branches. It probably fetches the second BTB entry during that cycle.
 
Ari64 said:
That's probably true, but I suspect it can only look up one BTB entry per cycle, so it needs to do some minimal decoding (at least the first few bits) to determine which instructions are potentially branches.

There is clearly a stall when there are two instructions which might both be branches. It probably fetches the second BTB entry during that cycle.

"Look up BTB entry" per here of course meaning "checks if it's a hit", since most instructions have no rightful business being in the BTB. I don't know if the issue is limit in access rate of the BTB, because I think that a one cycle stall on the second branch wouldn't be as serious as you're making this sound. I think it's more likely a collision in one of the prediction arrays. I'm voting against the BTB because it's not direct mapped. But the global historical buffer would hurt you if the two branches go opposite directions and share common low-enough order bits. I'm going to assume that they went opposite directions because it doesn't make much sense otherwise. Of course, Thumb-2 aggravates the issue by having more significant low end PC bits, and ARM does say that the performance with Thumb-2 is worse.

Can you test to see if the penalty is the same regardless of whether or not the instructions are in the same 8-byte alignment?
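
Something like this would set up the two cases (a sketch using assembler alignment directives; labels are arbitrary):
Code:
        @ case 1: both branches inside one aligned 8-byte fetch
        .align  3
        beq     t1
        b       t2
        @ case 2: a nop pushes the pair across the 8-byte boundary
        .align  3
        nop
        beq     t1
        b       t2
t1:     nop
t2:     nop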

What's really strange is there IS a section in the TRM addressing similar (but milder) issues regarding branch proximity.
 
I obviously can't confirm/deny what you say about branch predictors, but it should be noted that some Thumb-2 apps can be faster than ARM ones due to reduced Icache pressure, which for big apps can be the limiting factor.
 