Ari64 said:
I haven't seen any direct evidence that the fetch is stalled during d-cache misses. It's possible though. How did you determine this?
Actually, I guess it wasn't determined - I just assumed it based on the earlier L1 hit vs. miss tests and their cycle counts. So I suppose the best way to settle something like this would be to add a branch (to the next PC) alongside the missing load and see if the cycle count only increases by 1?
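Something like this minimal sketch (GCC inline asm, ARMv7) is what I have in mind - the PMCCNTR read assumes user access to the cycle counter has already been enabled via PMUSERENR, and the instruction arrangement is just one way to set up the test:

```c
#include <stdint.h>

/* Read the ARMv7 PMU cycle counter (PMCCNTR).  Assumes user-mode access
 * has been enabled beforehand (e.g. a small kernel module writing
 * PMUSERENR). */
static inline uint32_t read_cycles(void)
{
    uint32_t c;
    __asm__ __volatile__("mrc p15, 0, %0, c9, c13, 0" : "=r"(c));
    return c;
}

/* Time a load that should miss L1 with a branch-to-next-PC stuffed in
 * behind it.  Compare against the same routine with the branch removed:
 * if I-fetch keeps running during the d-cache miss, the branch is
 * already queued and predicted, and the delta should be ~1 cycle. */
uint32_t time_miss_plus_branch(volatile uint32_t *miss_ptr)
{
    uint32_t t0 = read_cycles();
    __asm__ __volatile__(
        "ldr r2, [%0]      \n\t"   /* load expected to miss L1 */
        "b   1f            \n\t"   /* branch to the next PC */
        "1:                \n\t"
        "add r2, r2, #1    \n\t"   /* consume the load result */
        :: "r"(miss_ptr) : "r2", "memory");
    return read_cycles() - t0;
}
```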
Ari64 said:
There's a catch to aligning branches. The GHB is indexed by the low bits of the instruction address. If you put all branches at the end of a 64-bit block, then you're only using half of the GHB, and branch misprediction will increase due to collisions. That's what happens in ARM mode anyway; I didn't test Thumb.
I guess it would be helpful to know which low-order bits are used and how they're combined with the 10-bit history to form the 12-bit GHB index - we do know that only the bottom four bits of the address participate and that an XOR is used. So you don't necessarily lose half the GHB entries; it depends on where those bits land, as the sketch below illustrates.
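For what it's worth, here's a purely made-up layout that's consistent with what we know (10-bit history, 12-bit index, bottom four address bits, an XOR) - the actual bit positions on the A8 are unknown, so ghb_index() is illustrative only:

```c
#include <stdint.h>

/* Hypothetical GHB index hash: place the 10-bit global history in bits
 * [11:2] of the index and XOR the bottom four bits of the branch
 * address into the low end.  NOT the real A8 hash - just one layout
 * that matches the known facts. */
static uint32_t ghb_index(uint32_t branch_addr, uint32_t history10)
{
    uint32_t addr_bits = branch_addr & 0xF;      /* bottom 4 address bits */
    uint32_t idx = (history10 << 2) & 0xFFF;     /* hypothetical placement */
    return idx ^ addr_bits;                      /* the XOR we know exists */
}
```

With this particular layout, two of the four hashed address bits overlap the history and merely permute entries, while the other two select which quarter of the table is reachable - so how much GHB capacity a branch-alignment scheme actually costs depends entirely on where the real hash puts the address bits.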
I should have clarified, though: I only meant this for branches that are very likely to be taken, like loop branches. If anything, branches in a tight inner loop that are unlikely to be taken could be purposely misaligned to avoid aliasing against the prediction of the loop branch itself.
Another note - none of this applies to unconditional branches, since those don't access the GHB; the BTB marks a branch as unconditional. We should actually check whether a correctly predicted unconditional branch has the two-cycle latency too, since that has a major impact on function calls. The same question applies to indirect branches, including return-stack-predicted function returns. Returns might specifically avoid the stall because they read the target from the much smaller return stack rather than out of the BTB, although by that point it's probably moot since the BTB hit has already been resolved. It'd also be moot if the stall comes from predecoding the instruction to detect whether it's a branch at all. Achievable throughput for indirect branches and returns has a large impact on where it's sensible to use switches inside tight loops, and function calls (over inlining) can spare icache pressure at a cost that's sometimes hidden by other delays, so this is worth measuring.
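A sketch of the kind of test I mean, reusing read_cycles() from the earlier sketch - a pure chain of taken unconditional branches builds no fetch backlog, so any per-branch bubble should show up directly in the cycle count:

```c
/* Time a straight run of correctly-predicted unconditional branches;
 * divide the result by 64 for the per-branch cost.  ~2 cycles per
 * branch would confirm the bubble, ~1 would rule it out.  An indirect
 * variant could load each target from a table instead. */
uint32_t time_branch_chain(void)
{
    uint32_t t0 = read_cycles();
    __asm__ __volatile__(
        ".rept 64          \n\t"
        "b   1f            \n\t"   /* unconditional branch to next insn */
        "1:                \n\t"
        ".endr             \n\t");
    return read_cycles() - t0;
}
```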
By the way, I'm thinking of buying "Unique Chips and Systems", which, as you probably recall, is currently the best-known source for these implementation details. The Google Books copy (http://books.google.com/books?id=RBmvxQ9BV6wC&printsec=frontcover#v=onepage&q&f=false) is only missing a few pages from the Cortex-A8 implementation section, but it could still be good information. What do you think - should I buy it? It can be had for ~$70.
Oh, and I guess I should stop looking for official confirmation from ARM on the correctly-predicted branch latency, because this about sums up what you've said:
"The decoupling afforded by the queue allows the I-fetch unit to prefetch ahead of the rest of the integer unit and build up a backlog of instructions that are ready to be decoded. This backlog often hides the latency involved in predicting a change in the instruction stream and starting to fetch instructions from a new location."
Wish I'd paid more attention to that the first time around. Too bad you lose that backlog on a branch mispredict - I wish mispredicts were resolved earlier in the pipeline. It's like they pushed the whole branch resolution stage later just so branches could pair with condition-code updates, but that costs a good 2 cycles versus what they'd get if they could guarantee PC results in E2 (say multiplies, saturations, and whatever else E3 and E4 are used for simply aren't allowed to write the PC - I'm pretty sure the ISA says that isn't supported).
I guess another benefit is that ldr pc is only two cycles, instead of the load-use stall you'd eat if the PC were resolved earlier - assuming it's properly predicted, which is a huge assumption. Still better for threaded switches.
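To be concrete about what I mean by threaded switches, here's the idiom - register roles (r4 = dispatch list pointer, r0/r1 = operands) are my own for illustration, not anything we've measured:

```c
/* One handler of a threaded interpreter: do the work, then load the
 * next handler's address straight into pc from a pre-decoded dispatch
 * list, post-incrementing the list pointer.  With a correct indirect
 * prediction this is where the 2-cycle ldr pc pays off, versus a
 * separate load + load-use stall + bx if the PC were resolved earlier. */
__asm__(
    ".arm                    \n"
    "op_add:                 \n"
    "   add r0, r0, r1       \n"   /* the opcode's actual work */
    "   ldr pc, [r4], #4     \n"   /* jump to the next handler */
);
```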