Cortex-A15 Details


Exophase

I stumbled upon a link to a Cortex-A15 presentation PDF on ARM's site today. Unfortunately the PDF has since been pulled, but you can see Google's cached HTML conversion of it here:

http://www.google.com/search?q=Cortex-A9+fetch+dispatch+retire&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a#sclient=psy&hl=en&client=firefox-a&hs=iq0&rls=org.mozilla:en-US%3Aofficial&source=hp&q=Cortex-A15_Technical_Overview.pdf&aq=f&aqi=&aql=&oq=&gs_rfai=&pbx=1&fp=53894057c0dc71cc

Formatting is not pretty, but there are some interesting details in there. I was hoping someone else might be interested too.. at least Ari64 should be ;)

It actually contains some information about Cortex-A9 that I haven't seen much elsewhere.

Some things that stand out to me:

- Pipeline is much longer, but improvements to branch prediction have been made, including "out of order resolution" which supposedly reduces cost of mispredictions
- 128-bit fetch, including on unaligned addresses, so no need to try aligning your branch targets now. Also, an entire cacheline isn't dragged in, only the fetched bytes.
- 1 load + 1 store per cycle.. I was hoping for 2 loads as well, but this is still a good improvement
- Cortex-A9 can actually dispatch 3 integer instructions per cycle, although only one is out of order.. what this means is you might be able to more easily recover lost issue cycles from things like fused shifts
- NEON is dual issue and out of order/register renamed, plus more tightly coupled with the core than in A9.. I wonder if it'll still be optional? Says 4 quad-MAC per cycle - I'm going to guess this means they made the FPU 128-bit wide plus can issue other stuff simultaneously, not that they have two 64-bit wide SIMD FPUs. (rough sketch of what that throughput looks like after this list)
- Load/store is "partial out of order" now, so it can do some kind of memory disambiguation. I always thought A9 had to have some load reordering for "hit under miss" caching to be relevant - guess not? It's "partial" because loads can't execute ahead of older stores, ie there's no address disambiguation between them (small example after the list).
- Micro-BTB which helps reduce stall for fetch on taken branch - so here we have the second admission that there actually is a stall on Cortex-A8 and Cortex-A9. And it appears to be because of the size of the BTB (this mini one is 64 entries and fully associative, so it should be pretty effective). No wonder Intel used a 128-entry BTB for Atom...
- Indirect address prediction, ie now the BTB is indexed by the branch history too. Should be good for interpreters and stuff (see the dispatch loop example after the list).
- L1 data cache is only 2-way set associative :(
- 32-entry loop buffer, bypasses fetch/decode stages.. I wonder if this means that a loop engine is used alongside it to prevent loops from taking BTB entries and mispredicting
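
Quick illustration of the NEON point above - this is just my own made-up C sketch using the standard arm_neon.h intrinsics (the function name and loop are hypothetical, nothing from the slides). If "4 quad-MAC per cycle" means what I think, keeping four independent 128-bit accumulators in flight like this is the kind of code that could approach 16 single-precision MACs per cycle, memory permitting:

Code:
/* Hypothetical SAXPY-ish inner loop with four independent quad accumulators.
 * vmlaq_f32(acc, a, b) computes acc + a*b per 32-bit lane.
 * Assumes n is a multiple of 16. */
#include <arm_neon.h>

void madd16(const float *a, const float *b, float *acc, int n)
{
    float32x4_t s0 = vld1q_f32(acc + 0);
    float32x4_t s1 = vld1q_f32(acc + 4);
    float32x4_t s2 = vld1q_f32(acc + 8);
    float32x4_t s3 = vld1q_f32(acc + 12);

    for (int i = 0; i < n; i += 16) {
        s0 = vmlaq_f32(s0, vld1q_f32(a + i + 0),  vld1q_f32(b + i + 0));
        s1 = vmlaq_f32(s1, vld1q_f32(a + i + 4),  vld1q_f32(b + i + 4));
        s2 = vmlaq_f32(s2, vld1q_f32(a + i + 8),  vld1q_f32(b + i + 8));
        s3 = vmlaq_f32(s3, vld1q_f32(a + i + 12), vld1q_f32(b + i + 12));
    }

    vst1q_f32(acc + 0,  s0);
    vst1q_f32(acc + 4,  s1);
    vst1q_f32(acc + 8,  s2);
    vst1q_f32(acc + 12, s3);
}

(Of course the 1 load + 1 store per cycle limit means a loop like this would really be load bound; the point is just the MAC issue width.)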
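
On the "partial out of order" load/store bit, the aliasing problem I mean is the textbook one - tiny made-up example:

Code:
/* Why loads can't simply run ahead of older stores without address
 * disambiguation: until the hardware knows whether p and q alias,
 * the load through q has to wait behind the store through p. */
void update(int *p, int *q, int x, int *out)
{
    *p = x;         /* store, address only known at runtime */
    *out = *q + 1;  /* load that may or may not hit the same word */
}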
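
And for the indirect prediction point, the branch I care about is the dispatch branch in an interpreter loop. Toy made-up bytecode VM just to show where it sits - the switch typically compiles to a table jump, and with a single-target BTB that jump mispredicts pretty much every time the opcode changes, which is why history-indexed prediction matters there:

Code:
/* Toy stack-machine interpreter; the switch is the indirect branch. */
#include <stdint.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

int32_t run(const uint8_t *code)
{
    int32_t stack[64];
    int sp = 0;

    for (;;) {
        switch (*code++) {  /* the hard-to-predict indirect branch */
        case OP_PUSH: stack[sp++] = *code++;            break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        default:      return 0;
        }
    }
}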

Lots of information here, probably even more than we ever got for Cortex-A9. I can see why they pulled it. Of course, any of this is probably subject to change, but what's there looks pretty exciting.
 
Sounds good.

Well OK, I don't know what the hell most of this means except that my emus will run smoother (on what, I wouldn't know yet), so that's something to look forward to :)
 