CPU for Pandora 2?


darkblu said:
I have a 32KB L1D/L1I-equipped A8, if something needs to be benchmarked.

It's up to Ari being willing to do this just to satisfy my curiosity.. hopefully his too. What kind of L2 and memory does it have?
 
Exophase said:
It's up to Ari being willing to do this just to satisfy my curiosity.. hopefully his too. What kind of L2 and memory does it have?
it's an iMX515 with 256KB unified L2. RAM is 512MB LPDDR on a 64-bit AMBA AXI bus, maxed at 200MHz, but in this case I suspect it might be run at 166/333 (I have not looked up the actual specs of the RAM chips).

edit: nope, my bad - just checked some clock regs and DDR is actually run at 200MHz.

edit: FWIW, a quick test using memcpy from Android bionic libc shows the cache subsystem doing 765MB/s sustained from L2 and 1,524MB/s from L1.
 
darkblu said:
Exophase said:
I don't have one to send, unfortunately. It's "only" $180 at Digi-Key; do you think it'd be useful for you to have one? We could probably pretty easily raise that kind of money if you do. But I doubt we can do that just to satisfy my curiosity, unless I give you the whole amount ;)
I have a 32KB L1D/L1I-equipped A8, if something needs to be benchmarked.
I can send you the patches that I use to benchmark the dynamic recompiler, if you want. It's basically just some timers that add up how much time is spent in certain parts of the code.
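
(A hypothetical sketch of that idea, not the actual patches; the names are made up:)

Code:
/* Hypothetical sketch: accumulate time spent in an instrumented region
   across many calls, then report the total. */
#include <stdio.h>
#include <time.h>

static long long region_ns; /* total time spent in the region */

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long) ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void instrumented_region(void) /* hypothetical name */
{
    long long t0 = now_ns();
    /* ... the code being measured ... */
    region_ns += now_ns() - t0;
}

int main(void)
{
    for (int i = 0; i < 1000; ++i)
        instrumented_region();
    printf("time in region: %lld ns\n", region_ns);
    return 0;
}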

Exophase said:
It's up to Ari being willing to do this just to satisfy my curiosity.. hopefully his too.
I'm curious too, but not enough to spend $180.
 
darkblu said:
edit: FWIW, a quick test using memcpy from Android bionic libc shows the cache subsystem doing 765MB/s sustained from L2 and 1,524MB/s from L1.

Sounds like you're using a memcpy that's not optimized to use NEON. Particularly from L2 cache, this will make a huge difference. It'd be interesting to see your code and if possible a disassembly of the memcpy would be nice.

I'm assuming you're actually running at 800MHz. Correct me if this is wrong.

It should basically work out like this: the bus to L2 is 128-bits for NEON, or 64-bits for integer code. But 32-bit or even 64-bit loads will only perform 32-bit transactions. The bigger kicker is that integer code will always incur the 8-cycle latency to L2 cache, for a cache line of 16 words. So for 16 32-bit words the latency for a naive 32-bit load version should be 16 + 8 = 24 cycles = 2.67 bytes/cycle. Or 2133 MB/s; you're only getting about 71% of that, so I don't know what the deal is (it made more sense when I still thought cache lines were 8 words >_<)

For main memory misses you incur at least 20 cycles, plus however many wait states are necessary due to accessing main RAM. For 64-bits @ 2x @ 200MHz you're looking at a word a cycle, so fastest speed is 16 + 20 = 36 cycles = 1.7778 bytes/cycle or 1,422MB/s. Of course, the latency over the SoC's interconnects and out to RAM could be much worse.. maybe you could tell me what the CL of the RAM is, but at CL = 3 @ 200MHz that's something like 12 clock cycles = ~1GB/s.. so the numbers you get seem reasonable. They're a lot better than you get on OMAP3 with a generic memcpy.

Now, if the code uses NEON it can eliminate all of the latency to L2 cache and it can sustain 128-bit bandwidth, for a whopping 12.5GB/s. From L1 cache should be the same. From main RAM you can use prefetch and sustain 32-bit/cycle, for 3.2GB/s.
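
To restate that arithmetic in code form (just the estimates above, assuming the 800MHz clock):

Code:
/* Restates the estimates above: bytes moved per access divided by cycles,
   scaled by an assumed 800MHz core clock. */
#include <stdio.h>

static double mb_per_s(double bytes, double cycles, double clock_hz)
{
    return bytes / cycles * clock_hz / 1e6;
}

int main(void)
{
    const double clk = 800e6;                                                     /* assumed CPU clock */
    printf("L2, 32-bit integer loads:  %.0f MB/s\n", mb_per_s(64.0, 16 + 8, clk));  /* ~2133 */
    printf("RAM, no extra wait states: %.0f MB/s\n", mb_per_s(64.0, 16 + 20, clk)); /* ~1422 */
    printf("L1/L2, 128-bit NEON:       %.0f MB/s\n", mb_per_s(16.0, 1, clk));       /* ~12800 */
    printf("RAM, NEON with prefetch:   %.0f MB/s\n", mb_per_s(4.0, 1, clk));        /* ~3200 */
    return 0;
}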

I can't say for sure without looking at the code that they didn't optimize it for Cortex-A8, but that certainly seems to be the case. All current Android phones are Cortex-A8, so unless there's a configuration problem on your end then I think Google should do better here... you can write a NEON optimized memcpy in a very short amount of time and the performance gains would be big...
 
Exophase said:
Sounds like you're using a memcpy that's not optimized to use NEON. Particularly from L2 cache, this will make a huge difference. It'd be interesting to see your code and if possible a disassembly of the memcpy would be nice.

My code is as straight-forward as it gets:

Code:
unsigned r = kRepetitions; // we measure time through the clock_gettime() API, but we still want to aggregate individual tests for more robust measurements

do
{
    const unsigned chunk_size = 8 * (1 << 10); // chunks of 8KB, for the L1 case
    static char chunkA[chunk_size] __attribute__ ((aligned (16)));
    static char chunkB[chunk_size] __attribute__ ((aligned (16)));

    memcpy(chunkB, chunkA, sizeof(chunkB));
}
while (--r);

And here is the memcpy:

Code:
(gdb) stepi
0x400265d0 in memcpy () from ../../root/really_tiny_libc/libtinyc.so
(gdb) disas
Dump of assembler code for function memcpy:
0x400265d0 <memcpy+0>:  push    {r0, lr}
0x400265d4 <memcpy+4>:  pld     [r1]
0x400265d8 <memcpy+8>:  pld     [r1, #64]       ; 0x40
0x400265dc <memcpy+12>: cmp     r2, #16
0x400265e0 <memcpy+16>: bcc     0x40026694 <memcpy+196>
0x400265e4 <memcpy+20>: rsb     r3, r0, #0
0x400265e8 <memcpy+24>: ands    r3, r3, #15
0x400265ec <memcpy+28>: beq     0x4002662c <memcpy+92>
0x400265f0 <memcpy+32>: sub     r2, r2, r3
0x400265f4 <memcpy+36>: lsls    r12, r3, #31
0x400265f8 <memcpy+40>: ldrbmi  lr, [r1], #1
0x400265fc <memcpy+44>: strbmi  lr, [r0], #1
0x40026600 <memcpy+48>: ldrbcs  r12, [r1], #1
0x40026604 <memcpy+52>: ldrbcs  lr, [r1], #1
0x40026608 <memcpy+56>: strbcs  r12, [r0], #1
0x4002660c <memcpy+60>: strbcs  lr, [r0], #1
0x40026610 <memcpy+64>: lsls    r12, r3, #29
0x40026614 <memcpy+68>: bge     0x40026620 <memcpy+80>
0x40026618 <memcpy+72>: vld4.8  {d0[0],d1[0],d2[0],d3[0]}, [r1]!
0x4002661c <memcpy+76>: vst4.8  {d0[0],d1[0],d2[0],d3[0]}, [r0, :32]!
0x40026620 <memcpy+80>: bcc     0x4002662c <memcpy+92>
0x40026624 <memcpy+84>: vld1.8  {d0}, [r1]!
0x40026628 <memcpy+88>: vst1.8  {d0}, [r0, :64]!
0x4002662c <memcpy+92>: pld     [r1]
0x40026630 <memcpy+96>: pld     [r1, #64]       ; 0x40
0x40026634 <memcpy+100>:        subs    r2, r2, #64     ; 0x40
0x40026638 <memcpy+104>:        bcc     0x40026664 <memcpy+148>
0x4002663c <memcpy+108>:        pld     [r1, #128]      ; 0x80
0x40026640 <memcpy+112>:        pld     [r1, #192]      ; 0xc0
0x40026644 <memcpy+116>:        pld     [r1, #256]      ; 0x100
0x40026648 <memcpy+120>:        vld1.8  {d0-d3}, [r1]!
0x4002664c <memcpy+124>:        vld1.8  {d4-d7}, [r1]!
0x40026650 <memcpy+128>:        pld     [r1, #256]      ; 0x100
0x40026654 <memcpy+132>:        subs    r2, r2, #64     ; 0x40
0x40026658 <memcpy+136>:        vst1.8  {d0-d3}, [r0, :128]!
0x4002665c <memcpy+140>:        vst1.8  {d4-d7}, [r0, :128]!
0x40026660 <memcpy+144>:        bcs     0x40026648 <memcpy+120>
0x40026664 <memcpy+148>:        add     r2, r2, #64     ; 0x40
0x40026668 <memcpy+152>:        subs    r2, r2, #32
0x4002666c <memcpy+156>:        bcc     0x40026680 <memcpy+176>
0x40026670 <memcpy+160>:        vld1.8  {d0-d3}, [r1]!
0x40026674 <memcpy+164>:        subs    r2, r2, #32
0x40026678 <memcpy+168>:        vst1.8  {d0-d3}, [r0, :128]!
0x4002667c <memcpy+172>:        bcs     0x40026670 <memcpy+160>
0x40026680 <memcpy+176>:        add     r2, r2, #32
0x40026684 <memcpy+180>:        tst     r2, #16
0x40026688 <memcpy+184>:        beq     0x40026694 <memcpy+196>
0x4002668c <memcpy+188>:        vld1.8  {d0-d1}, [r1]!
0x40026690 <memcpy+192>:        vst1.8  {d0-d1}, [r0, :128]!
0x40026694 <memcpy+196>:        lsls    r12, r2, #29
0x40026698 <memcpy+200>:        bcc     0x400266a4 <memcpy+212>
0x4002669c <memcpy+204>:        vld1.8  {d0}, [r1]!
0x400266a0 <memcpy+208>:        vst1.8  {d0}, [r0]!
0x400266a4 <memcpy+212>:        bge     0x400266b0 <memcpy+224>
0x400266a8 <memcpy+216>:        vld4.8  {d0[0],d1[0],d2[0],d3[0]}, [r1]!
0x400266ac <memcpy+220>:        vst4.8  {d0[0],d1[0],d2[0],d3[0]}, [r0]!
0x400266b0 <memcpy+224>:        lsls    r12, r2, #31
0x400266b4 <memcpy+228>:        ldrbmi  r3, [r1], #1
0x400266b8 <memcpy+232>:        ldrbcs  r12, [r1], #1
0x400266bc <memcpy+236>:        ldrbcs  lr, [r1], #1
0x400266c0 <memcpy+240>:        strbmi  r3, [r0], #1
0x400266c4 <memcpy+244>:        strbcs  r12, [r0], #1
0x400266c8 <memcpy+248>:        strbcs  lr, [r0], #1
0x400266cc <memcpy+252>:        pop     {r0, lr}
0x400266d0 <memcpy+256>:        bx      lr
0x400266d4 <memcpy+260>:        nop     {0}
0x400266d8 <memcpy+264>:        nop     {0}
0x400266dc <memcpy+268>:        nop     {0}
End of assembler dump.
Looks quite sane to me, with prefetches and all *shrug*. And yes - I'm running it all at 800MHz.

It should basically work out like this: the bus to L2 is 128-bits for NEON, or 64-bits for integer code. But 32-bit or even 64-bit loads will only perform 32-bit transactions. The bigger kicker is that integer code will always incur the 8-cycle latency to L2 cache, for a cache line of 16 words. So for 16 32-bit words the latency for a naive 32-bit load version should be 16 + 8 = 24 cycles = 2.67 bytes/cycle. Or 2133 MB/s; you're only getting about 71% of that, so I don't know what the deal is (it made more sense when I still thought cache lines were 8 words >_<)
Huh, A8's L2 cacheline isn't 32 bytes? *goes to read docs* Indeed!

For main memory misses you incur at least 20 cycles, plus however many wait states are necessary due to accessing main RAM. For 64-bits @ 2x @ 200MHz you're looking at a word a cycle, so fastest speed is 16 + 20 = 36 cycles = 1.7778 bytes/cycle or 1,422MB/s. Of course, the latency over the SoC's interconnects and out to RAM could be much worse.. maybe you could tell me what the CL of the RAM is, but at CL = 3 @ 200MHz that's something like 12 clock cycles = ~1GB/s.. so the numbers you get seem reasonable. They're a lot better than you get on OMAP3 with a generic memcpy.
No idea about CL. I quickly browsed through the related SoC regs yesterday, but it'll take more than that to trace the CL settings.

Now, if the code uses NEON it can eliminate all of the latency to L2 cache and it can sustain 128-bit bandwidth, for a whopping 12.5GB/s. From L1 cache should be the same. From main RAM you can use prefetch and sustain 32-bit/cycle, for 3.2GB/s.
I'd love to get those numbers! Even though they look a bit out of reach to me at this stage : )

I can't say for sure without looking at the code that they didn't optimize it for Cortex-A8, but that certainly seems to be the case. All current Android phones are Cortex-A8, so unless there's a configuration problem on your end then I think Google should do better here... you can write a NEON optimized memcpy in a very short amount of time and the performance gains would be big...
Well, the one from Android performs many times better than the bog-standard one from glibc - that one can make a grown man cry.
 
Ah, okay, it's memcpy so we're looking at read + write, so if you were measuring bytes of that it should be double what you gave for bandwidth. In which case, the memory bandwidth one is looking pretty good and about what you'd expect. The code looks pretty good, but the vld1s appear to be unaligned? Unless I'm missing something that makes them implicitly aligned, that should add another cycle of issue..

A memset would be good for write-only. For read-only.. I don't really know how to get that into a C call, since really you want to read straight into a register (so by itself not very useful). Probably the sum of the two in isolation will be better than the memcpy.
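
For the write-only case, something along these lines would do (a rough sketch; the 8KB size and the clock_gettime() timing are just placeholders, following the copy test above):

Code:
/* Minimal write-only bandwidth sketch: repeatedly memset a buffer that fits
   in L1 and report the resulting store bandwidth. Sizes are illustrative. */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    enum { kChunk = 8 * 1024, kReps = 1000000 };
    static char buf[kChunk] __attribute__ ((aligned (16)));
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned r = 0; r < kReps; ++r)
        memset(buf, r & 0xff, sizeof(buf));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("write bandwidth: %.2f MB/s\n", (double) kChunk * kReps / s / 1e6);
    return 0;
}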

What numbers did you get for the one that fits into L1? What size did you use for L2?
 
darkblu said:
No idea about CL. I quickly browsed through the related SoC regs yesterday, but it'll take more than that to trace the CL settings.
The i.MX51 only supports a CAS latency of 3 cycles, so there's no corresponding reg.
 
Okay, I did some tests on Pandora. I think I had some misconceptions...

For NEON streaming from L1 cache:

I can't find a way to exceed loading 64 bytes/7 cycles (instead of the expected 64 bytes/4 cycles)

At first it looked like 64 bytes/8 cycles. Then I found that I seemed to be limiting myself to 2 cycles between loads due to AGI stalls from incrementing the same base pointer while only loading 16 bytes. By either using separate pointers, loading 32 bytes, or not incrementing at all, I get 7 cycles instead. Breaking things up into separate 128-bit loads and padding nops between them shows that, indeed, every 4th load seems to be pairing up and completing in one cycle while all the other ones take two. I can't really make sense of this. But it also shows that at least the load cost comes after the load, making it effectively non-blocking.

So the max here would be something like 6.8 GB/s.

Writes seem to be at 64 bytes/9 cycles. Unfortunately I had to perform a block of loads before a block of stores then subtract the load cost.. otherwise the speed would go way down, probably because the thing wouldn't stay in cache (task switches are likely clearing cache, which would also account for the residual cycles I see)

Reading + writing gives me 32 bytes/5 cycles (or 64 bytes/10 cycles) no matter how I do it (that's 16 bytes of read, 16 bytes of write), so interleaving them doesn't seem to save anything - that's 128 bytes/20 cycles vs 128 bytes/(7 + 9) cycles, so yeah.

For NEON streaming from L2 cache:

Note that L1 and L2 cache are exclusive, so when you test for a buffer between L1 and L2 size you're really testing the average performance between both. Also, tests beyond 128KB are a bad idea because that starts thrashing the TLB. I think there's a little more at work here, since I'm getting values kind of in between what I would expect for inclusive and exclusive. I think what it's looking like is something like 32 bytes/15 cycles, or about 2.133 bytes/cycle. To compare with darkblu's I need to know what sizes he used.

Accesses to NEON are supposed to hide L2 latency, so you'd think you could get the same performance as from L1 if the bandwidth is the same. I don't think they really hide L2 latency, in that I don't think the L2 accesses are pipelined; it just hides it from blocking non-load/store operations.

What I'm initially seeing is something like 64 bytes/31-32 cycles. Here's the odd thing: when I add a single dead cycle between two 32-byte loads I see the performance go down. But when I add two cycles worth of activity, the performance actually goes back up to what it was originally. And I saw something similar with pairs of 16-byte loads, only it looked like I was getting closer to 32 bytes/15 cycles, then 32 bytes/13 cycles with 2-3 nops between them, then higher with more. So smaller loads might actually improve overall throughput and reduce blocking... I can't really explain all of it right now.

But the effective bandwidth was something like 2-2.5 bytes/cycle, which would be 1.5GB/s to 1.86GB/s, just for loads. I don't know how well it does with stores interleaved with that; I didn't try (and I don't know if the Android code is pulling better performance somehow)

Probably best if someone else tries to reproduce this and examine it more.
 
Exophase said:
Ah, okay, it's memcpy so we're looking at read + write, so if you were measuring bytes of that it should be double what you gave for bandwidth. In which case, the memory bandwidth one is looking pretty good and about what you'd expect. The code looks pretty good, but the vld1s appear to be unaligned? Unless I'm missing something that makes them implicitly aligned, that should add another cycle of issue..
The irony is, the night I first ran the test I thought to myself, 'that BW should be doubled as it's a r/w test', and then I fell asleep. The thought of that did not return to me until you brought it up now. So yeah, all original numbers should be doubled.

A memset would be good for write-only. For read-only.. I don't really know how to get that into a C call, since really you want to read straight into a register (so by itself not very useful). Probably the sum of the two in isolation will be better than the memcpy.

What numbers did you get for the one that fits into L1? What size did you use for L2?
So, today I sat down to compose a proper BW table as a function of mem chunk size. I guess the universe had other plans, because havoc ensued.

For some mysterious reason LD_PRELOAD does not work anymore for my test. Luckily, I had the sources to the Android libs, so a simple rename and rebuild sufficed. What is even weirder, I can no longer get a definitive difference in BW between what should be L1 and L2, and that really boggles me - it's as if L1 cache vanished. So that 1.5GB/s r/w measure from the other day is irreproducible now :/ The only thing I can think of happening between now and then is a system crash, but fsck did not discover anything broken on any of the fs'es that were live at the moment.

Anyhow, below is the test code, followed by the latest test results.

Code:
#include <stdio.h>

extern "C"
void* memcpy_neon(void*, const void*, size_t);

#include "rendPlatform.hpp"

static const unsigned kRepetitions = 1000000;
static const unsigned kPoolSize = 1 << 19; // 512 KB

const unsigned kAlignBoundary = 16;

static inline void*
malloc_align(
    const unsigned size,
    const unsigned align,
    void*& raw_ptr)
{
#if 1

    static char poolA[kPoolSize * 2] __attribute__ ((aligned (4096)));
    static bool odd = true;

    odd = !odd;

    char* ret;

    if (odd)
        ret = poolA + size;
    else
        ret = poolA;

    return ret;

#else

    const unsigned align_pad = align - 1;

    raw_ptr = malloc(size + align_pad);

    return reinterpret_cast< void* > (((unsigned) raw_ptr + align_pad) & ~align_pad);

#endif
}

int main(int argc, char * const argv[])
{
    double freq = rend::timer_freq();

    for (unsigned kb = 1; kb <= (kPoolSize >> 10); kb *= 2)
    {
        const unsigned chunk_size = kb << 10;

        void *rawA, *chunkA = malloc_align(chunk_size, kAlignBoundary, rawA);
        void *rawB, *chunkB = malloc_align(chunk_size, kAlignBoundary, rawB);

        unsigned r = kRepetitions;
        const unsigned long long t0 = rend::timer();

        do
        {
            memcpy_neon(chunkB, chunkA, chunk_size);
        }
        while (--r);

        const unsigned long long t1 = rend::timer();
        const unsigned long long dt = t1 - t0;

        const double s = double(dt) / freq;

#if 0
        free(rawA);
        free(rawB);
#endif

        printf("time: %f s, repetitions: %u, chunk: %u KB, bandwidth: %.2f KB/s\n", s, kRepetitions, kb, (double) kb * kRepetitions / s);
    }

    return 0;
}

Code:
$ LD_LIBRARY_PATH=../../root/really_tiny_libc ./a.out 
time: 1.120653 s, repetitions: 1000000, chunk: 1 KB, bandwidth: 892336.68 KB/s
time: 2.390252 s, repetitions: 1000000, chunk: 2 KB, bandwidth: 836732.03 KB/s
time: 4.906403 s, repetitions: 1000000, chunk: 4 KB, bandwidth: 815261.26 KB/s
time: 10.011309 s, repetitions: 1000000, chunk: 8 KB, bandwidth: 799096.31 KB/s
time: 20.263768 s, repetitions: 1000000, chunk: 16 KB, bandwidth: 789586.61 KB/s
time: 40.572274 s, repetitions: 1000000, chunk: 32 KB, bandwidth: 788715.97 KB/s
time: 82.148327 s, repetitions: 1000000, chunk: 64 KB, bandwidth: 779078.56 KB/s
time: 164.651225 s, repetitions: 1000000, chunk: 128 KB, bandwidth: 777400.84 KB/s
time: 393.429336 s, repetitions: 1000000, chunk: 256 KB, bandwidth: 650688.64 KB/s
time: 1081.692833 s, repetitions: 1000000, chunk: 512 KB, bandwidth: 473332.16 KB/s

mem access is over *two* of those chunks (source and destination), consecutive in memory, and the resulting BW is r+w, so for most comparison purposes (e.g. Exophase's) you'd need to 2x it.


pocak said:
The i.MX51 only supports a CAS latency of 3 cycles, so there's no corresponding reg.
I remember not seeing CAS timing control anywhere among the DDR timing regs, but I did not explicitly search for it. Today a '^f CAS' in the RM revealed the simple truth. Duh.
 
There is one sort of issue you might be running into re: L1 cache.. on Cortex-A8 L1 d-cache doesn't support write-allocation, while L2 can be configured as write-allocate, depending on the MMU settings. So when you memcpy from one array to another, if the other array is not in L1 cache you won't get the benefit of storing to L1, and will instead be saddled by whatever the cost of storing to L2 is.

It's possible in your case that the second array never ends up in L1, but even if it does it might not stay there for long. The L1 cache is physically tagged so it should survive an address space change, but it doesn't really take that much work (or a deliberate cache flush instruction somewhere) to eat through it. So it could be dependent on what's going on in the background at the time, even if CPU utilization is very low.

To keep things in L1 I swapped the read and write pointers per iteration, so if things got bumped out of cache the next iteration would reallocate it.
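
In sketch form (illustrative only; buffer names and sizes are made up):

Code:
/* Sketch of the pointer-swap idea: the buffer just written becomes the
   source of the next iteration, so anything evicted from L1 gets pulled
   back in by the subsequent loads. */
#include <string.h>

enum { kChunk = 16 * 1024, kReps = 100000 };
static char bufA[kChunk], bufB[kChunk];

int main(void)
{
    char *src = bufA, *dst = bufB;
    for (unsigned r = 0; r < kReps; ++r)
    {
        memcpy(dst, src, kChunk);
        /* swap roles for the next pass */
        char *tmp = src;
        src = dst;
        dst = tmp;
    }
    return 0;
}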
 
Exophase said:
There is one sort of issue you might be running into re: L1 cache.. on Cortex-A8 L1 d-cache doesn't support write-allocation, while L2 can be configured as write-allocate, depending on the MMU settings. So when you memcpy from one array to another, if the other array is not in L1 cache you won't get the benefit of storing to L1, and will instead be saddled by whatever the cost of storing to L2 is.
You hit the nail on the head. A simple 'pld [dst]' inserted at the start of the main loop (chunks of 64 bytes) of Android's memcpy resulted in 4,582 MB/s (2x duplex BW) at 32KB arrays. That proves that the lack of write-allocate in L1D was indeed dictating the overall r/w performance in what I was seeing earlier.
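
For reference, the same idea expressed at the C level (a sketch only; the actual fix was the one-line pld in the assembly memcpy, and this assumes 64-byte lines and a size that's a multiple of 64):

Code:
/* Touch the destination line before storing to it, so the stores hit L1
   instead of missing to L2 (A8's L1 d-cache doesn't write-allocate). */
#include <stdio.h>
#include <string.h>

static void copy_with_dst_prefetch(char *dst, const char *src, unsigned n)
{
    for (unsigned off = 0; off < n; off += 64)
    {
        __builtin_prefetch(dst + off, 1);   /* 1 = prefetch with write intent */
        memcpy(dst + off, src + off, 64);
    }
}

int main(void)
{
    static char a[32 * 1024], b[32 * 1024];
    copy_with_dst_prefetch(b, a, sizeof(b));
    puts("done");
    return 0;
}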
 
After reading this thread, I benchmarked a memcmp that I have been using. This is part of the dynarec and checks if a previously compiled block is unmodified. The relevant portion is:
Code:
        /* r1 = source */
        /* r2 = target */
        /* r3 = length */
        tst     r3, #4
        mov     r4, #0
        add     r3, r1, r3
        mov     r5, #0
        ldrne   r4, [r1], #4
        mov     r12, #0
        ldrne   r5, [r2], #4
        teq     r1, r3
        beq     .D3
.D2:
        ldr     r7, [r1], #4
        eor     r9, r4, r5
        ldr     r8, [r2], #4
        orrs    r9, r9, r12
        bne     .D4
        ldr     r4, [r1], #4
        eor     r12, r7, r8
        ldr     r5, [r2], #4
        cmp     r1, r3
        bcc     .D2
        teq     r7, r8
.D3:
        teqeq   r4, r5
.D4:

Doing this repeatedly with a 1K block size, at 500MHz, the speed is 1242MB/sec if the loop is aligned, and 1058MB/sec if it is misaligned (starts on an address not divisible by eight). This suggests that the performance is limited by the speed of the instruction fetch. Unrolling the aligned loop, so that it is twenty instructions long instead of ten, results in 1342MB/sec. There appears to be a one-cycle stall for a taken branch that is correctly predicted.
 
Ari64 said:
There appears to be a one-cycle stall for a taken branch that is correctly predicted.

I was seeing something like that too. wtf ARM. Contrast with ARM11 and Cortex-A9 where predicted direct branches are folded right off the instruction stream.
 
Exophase said:
I was seeing something like that too. wtf ARM. Contrast with ARM11 and Cortex-A9 where predicted direct branches are folded right off the instruction stream.
The branch predictor appears to operate directly on the data coming out of the i-cache. If the branch is predicted taken, then the target is looked up in the BTB. Presumably the BTB lookup takes one cycle, so there's a delay before the next pair of instructions is fetched.

It wouldn't surprise me if A9 has the same problem. The branches are removed from the instruction stream, but that happens later in the pipeline and won't avoid the stall fetching instructions from the L1 cache.

Anyway, the 10-instruction loop appears to execute in 6 cycles, and the 20-instruction loop in 11 cycles. If it's misaligned, there's one extra cycle for the additional instruction fetch.

Using 64K blocks to force L2 accesses, the best case is about 740MB/sec. That works out to an average L1 miss penalty of around 19-20 cycles (wtf?)
 
Do you still get the one cycle stall if you update the dependent flags on a previous cycle? I know the pipeline is supposed to handle this and I'm pretty sure I saw it regardless, but I'd like to confirm this.

I wouldn't assume that BTB lookup would stall the icache fetches in Cortex-A9, even if it does in A8. The pipelines are quite different, the frontend in particular is a lot shorter.
 
Exophase said:
Do you still get the one cycle stall if you update the dependent flags on a previous cycle? I know the pipeline is supposed to handle this and I'm pretty sure I saw it regardless, but I'd like to confirm this.
I tried setting the flags earlier and saw no difference, at least not for correctly-predicted branches.

Exophase said:
I wouldn't assume that BTB lookup would stall the icache fetches in Cortex-A9, even if it does in A8. The pipelines are quite different, the frontend in particular is a lot shorter.
http://www.arm.com/files/downloads/Cortex-A9_Devcon_2007_Microarchitecture.pdf

The prefetch stages look pretty similar to the A8, except that the GHB/BTAC is now two pipeline stages instead of one, which would imply an even greater penalty. Since they're doing hardware unrolling of short loops, maybe they can avoid the penalty for things like memcpy.

The problem on the A8 isn't the latency of the branch history/target buffers, it's that they partially decode the instructions and use that info for branch prediction. This takes a cycle, so there is a one-cycle delay before the prediction can be made. So there's 8 bytes of data from the original instruction stream after the branch that get fetched from the cache and thrown away before the new address gets sent to the cache.

It would be theoretically possible to make a no-latency branch predictor that relied solely on the branch history. I can only speculate as to why they didn't do this, but my guess is that the false positive rate was too high. That is, due to hash collisions, instructions which were not branches were predicted as branches. So, to avoid the false positives, the Cortex-A8 does some minimal instruction decoding to identify instructions which are clearly not branches. These instructions will never be predicted as branches even if the GHB indicates otherwise.

The decoding is pretty minimal, and seems to look only at bits 20, 25, 26 and 27. This is enough to reject vfp/neon, stores to memory, flag-setting instructions, and ALU operations with immediate values.

Of course, the other way to solve this would be to store more address bits in the GHB so you don't get hash collisions, but that would take more silicon area.

I wonder if the A9 still has the problem where every pair of instructions shares an entry in the BTB. I had to put nops between branches to avoid this.
 
I actually never saw those slides. Thanks, that's a much more detailed writeup than other presentations, including the design-reuse one. Kinda surprised they used a physically indexed data cache, the ITLB lookups must be pretty early in the pipeline.

It's pretty strange that they did the branch prediction this way in Cortex-A8, because the false positives themselves can be quickly detected (assuming the GHB entries are tagged with the PC of course) and then the pre-decoding can commence, instead of what they're doing. Are you sure this isn't something they changed for Cortex-A9? Otherwise I don't see how they can claim any < 1 cycle branches from folding. Or maybe it was just ARM11 claiming that.

(btw, for anyone who was wondering: http://infocenter.arm.com/help/topic/com.arm.doc.ddi0246e/Chdcjfia.html < L2 access time of 8 cycles.. I was worried it'd be higher latency since it was moved off the core and shared, guess not)
 
Exophase said:
I actually never saw those slides. Thanks, that's a much more detailed writeup than other presentations, including the design-reuse one. Kinda surprised they used a physically indexed data cache, the ITLB lookups must be pretty early in the pipeline.
It's a bit unusual, but not necessarily a bad design. They are using a 'Micro TLB' cache to quickly handle frequently-used mappings. It might add one cycle to the load pipeline. The overhead from something like this typically isn't going to be more than a few percent in the average case. I wonder what they save by doing this though, VIPT isn't that bad. Task switching overhead, maybe?

Exophase said:
It's pretty strange that they did the branch prediction this way in Cortex-A8, because the false positives themselves can be quickly detected (assuming the GHB entries are tagged with the PC of course) and then the pre-decoding can commence, instead of what they're doing.
Tagging the entries would mean having 5-10 bits of the address in the GHB. That's a lot more silicon area than a 2-bit saturating counter. According to this, "The GHB is indexed by 10-bit history of the direction of the last ten branches encountered and 4 bits of the PC."

Exophase said:
Are you sure this isn't something they changed for Cortex-A9? Otherwise I don't see how they can claim any < 1 cycle branches from folding. Or maybe it was just ARM11 claiming that.
I don't know much about the A9's branch predictor, but they don't claim to have made any significant improvements beyond what they claimed for the A8. The branch folding just means that the branch instruction does not get issued to an execution unit. The instructions still get fetched and decoded, and the A9 is still limited to fetching two instructions (or four thumb instructions) per clock.

Exophase said:
(btw, for anyone who was wondering: http://infocenter.arm.com/help/topic/com.arm.doc.ddi0246e/Chdcjfia.html < L2 access time of 8 cycles.. I was worried it'd be higher latency since it was moved off the core and shared, guess not)
I'll believe it when I see it. They claimed 8 cycles for the A8 also, but the average L1 miss penalty is more than twice that.
 
Ari64 said:
It's a bit unusual, but not necessarily a bad design. They are using a 'Micro TLB' cache to quickly handle frequently-used mappings. It might add one cycle to the load pipeline. The overhead from something like this typically isn't going to be more than a few percent in the average case. I wonder what they save by doing this though, VIPT isn't that bad. Task switching overhead, maybe?

I don't know, the only gain I can think of over VIPT is if different processes share physical memory which probably isn't all that common.. you shouldn't need a TLB flush on task switches since the differing physical tags would still catch varying virtual mappings.

Ari64 said:
Tagging the entries would mean having 5-10 bits of the address in the GHB. That's a lot more silicon area than a 2-bit saturating counter. According to this, "The GHB is indexed by 10-bit history of the direction of the last ten branches encountered and 4 bits of the PC."

Ah okay, sorry, I meant going by the BTB tag (which of course it would be), a non-branch shouldn't be in the BTB so on a BTB hit there's no need to fetch the instruction.

I can't help but wonder if they should have used more PC bits and fewer history bits on that; I guess they must have come to a sweet spot of 10 somehow.

Ari64 said:
I don't know much about the A9's branch predictor, but they don't claim to have made any significant improvements beyond what they claimed for the A8. The branch folding just means that the branch instruction does not get issued to an execution unit. The instructions still get fetched and decoded, and the A9 is still limited to fetching two instructions (or four thumb instructions) per clock.

As said above, if you circumvent fetching the instruction via the BTB it should be possible to avoid that. But I guess if the BTB latency means that you don't get the BTB entry until after you've gotten the cache entry anyway...

It's just that the BTB should be quicker than the cache, on account of being a lot smaller and more narrow, IIRC the associativity is similar. Maybe they'd benefit from an L0 micro-BTB too... their BTB is a lot larger than Atom's and they could be paying for it here.

Ari64 said:
I'll believe it when I see it. They claimed 8 cycles for the A8 also, but the average L1 miss penalty is more than twice that.

Yeah, I don't know what the deal is here, but I think there has to be some explanation for at least some of it. It'd be great if we had trace capability, or at least the kind that logs cycles and bus activity.. I have that on a CPU at work and it makes performance analysis and optimization much easier.

I believe Laurent once told me that earlier Cortex-A8s, i.e. revisions including the ones on the OMAP35x, don't support critical word first loading from L2. That shouldn't be a problem with sequential access like you're doing since the critical word should be the first one, but if somehow it's something even worse and you're eating the latency of the full cache miss and not just the first element, that could explain things better. If the bus width to L2 is 64-bits that'd mean +8 cycles.

You should also try ldm and see if you can leverage anything better by doing more explicit bursts to the L2. We already know NEON has better sustainable bandwidth, by a pretty wide margin.

Here are some other things we should look at..

http://infocenter.ar...ch04s02s03.html

You can select between 1 and 2 cycles for L2 data reads and the default is 2 cycles - this could refer to bandwidth. You can also read back the tag RAM and data RAM latency, so it can be verified if it's really 8 cycles - it's possible that TI went with slower RAM macros on this and it won't work at 8 cycles. It looks like all of these values can be changed.

Unfortunately you need to be nonsecure privileged to read and secure privileged to write. So we may need to whip out a kernel module. I also don't know if the settings will be the same on BeagleBoard and Pandora, although it's pretty likely.

I can't look into this stuff right now so if you have (or anyone else reading has) any inclination to do so please be my guest, otherwise I might be able to grab it later today, although I'm kinda busy with other coding things :<
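
Something like this minimal kernel module ought to do for the read side (untested sketch; the MRC encoding is the L2 Cache Auxiliary Control Register access documented in the Cortex-A8 TRM):

Code:
/* Reads the Cortex-A8 L2 Cache Auxiliary Control Register, which needs
   privileged mode - hence the kernel module. Untested sketch. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init l2aux_init(void)
{
    unsigned int val;
    /* MRC p15, 1, <Rd>, c9, c0, 2 : L2 Cache Auxiliary Control Register */
    asm volatile("mrc p15, 1, %0, c9, c0, 2" : "=r" (val));
    printk(KERN_INFO "L2 Cache Auxiliary Control Register: %08x\n", val);
    return 0;
}

static void __exit l2aux_exit(void) { }

module_init(l2aux_init);
module_exit(l2aux_exit);
MODULE_LICENSE("GPL");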
 
Exophase said:
You can select between 1 and 2 cycles for L2 data reads and the default is 2 cycles - this could refer to bandwidth. You can also read back the tag RAM and data RAM latency, so it can be verified if it's really 8 cycles - it's possible that TI went with slower RAM macros on this and it won't work at 8 cycles. It looks like all of these values can be changed.
The Cache Auxiliary Control Register reads 0x08000042.
 