
Pandora Cortex-A8 NEON Timings

Discussion in 'NEON / DSP' started by zmatt, Mar 2, 2017.

  1. zmatt

    zmatt Member

    Joined:
    Oct 31, 2015
    Messages:
    65
    Location:
    Netherlands
    In case they're useful, below are some notes I once took about the timings of NEON instructions on the Cortex-A8.

    One thing that caused quite a headache before I realized it: there is no forwarding between the integer and floating-point pipelines in the NEON unit. This may seem harmless until you realize that "vmov" executes in the integer pipeline, which means that if you move a value around with vmov between floating-point operations, you had better keep plenty of distance between those instructions, or the dispatcher will do it for you! Alternatively, you can try replacing vmov with an instruction that executes in the load/store/permute pipeline.
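
    For example, here's a sketch of that workaround (vext with #0 acts as a plain 64-bit move but executes in the load/store/permute pipeline, which the notes below suggest forwards fine to the float pipeline):
    Code:
    @ bad: vmov executes in the integer pipeline, so its result is not
    @ forwarded to the float pipeline and the vadd.f32 stalls for a long time
    vmov      d1, d0
    vadd.f32  d2, d1, d1

    @ workaround: do the move in the LSP pipeline instead
    vext.8    d1, d0, d0, #0
    vadd.f32  d2, d1, d1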

    Anyway, I hope my notation makes sense, otherwise just ask.

    Code:
    ================ NEON ==========================================================
    
    ---------- Permute instructions (LSP pipeline) ---------------------------------
    
    VFP seems able to start a cycle sooner if waiting on LSP than on arithmetic?
    
    vmovn  Dd,Qm
    vdup  Qd,Dm[x]
    vdup  Dd,Dm[x]
    vrev  Dd,Dm
    vrev  Qd,Qm
            m       -       >d
    
    vtrn  Dd,Dm
    vswp  Dd,Dm
    vzip  Dd,Dm
    vuzp  Dd,Dm
            m,d     -       >m,>d
    
    vtrn  Qd,Qm
    vswp  Qd,Qm
            ml,dl   -       >ml,>dl
                    mh,dh   -       >mh,>dh
    
    vzip  Qd,Qm
            ml,dl   -       -
                    mh,dh   -       >d
                            -       -       >m
    
    vuzp  Qd,Qm
            d       -       -
                    m       -       >ml,>dl
                            -       -       >mh,>dh
    
    vext Dd,Dn,Dm,#
            n,m     -       >d
    
    vext Qd,Qn,Qm,#
            n       -       -
                    m       -       >d
    
    vtbl|vtbx Dd,{Dn},Dm
    vtbl|vtbx Dd,{Dn,Dn+1},Dm
            -       m,d     -               (d only source for vtbx)
                    n,n+1   -       >d
    
    vtbl|vtbx Dd,{Dn,Dn+1,Dn+2},Dm
    vtbl|vtbx Dd,{Dn,Dn+1,Dn+2,Dn+3},Dm
            -       m,d     -               (d only source for vtbx)
                    n,n+1   -       -
                            n+2,n+3 -       >d
    
    
    --------------------------------------------------------------------------------
    
    vadd|vand|vorr|veor|vbic|vorn  Qd,Qn,Qm
            -       n,m     -       >d
    
    vneg  Qd,Qm
    vshr|vshl  Qd,Qm,#
    vshr  Dd,Qm,#
    vshl  Qd,Dm,#
    vmvh  Qd,Dm
            m       -       -       >d
    
    vsub  Qd,Qn,Qm
    vadd|vsub  Qd,Qn,Dm  (wide)
            m       n       -       >d
    
    vadd|vsub  Qd,Dn,Dm  (long)
            n,m     -       -       >d
    
    vsli|vsri  Dd,Dm,#
            m,d     -       -       >d
    
    vsli|vsri  Qd,Qm,#
            ml,dl   -       -       >dl
                    mh,dh   -       -       >dh
    
    vqshl|vrshr  Qd,Qm,#
    vqshr|vrshr|vqrshr|vqmov  Dd,Qm,#
            m       -       -       -       >d
    
    v(r|rh|q)add|vtst  Qd,Qn,Qm
    v{r}adh  Dd,Qn,Qm
            -       n,m     -       -       >d
    
    v{r}sbh  Dd,Qn,Qm
            mh      n,ml    -       -       >d
    
    v(h|q)sub  Qd,Qn,Qm
    vabd|vc(eq|ge|gt)|vmax|vmin  Qd,Qn,Qm
    vfmx|vfmn  Dd,Dn,Dm
            m       n       -       -       >d
    
    
    
    vector integer multiply / multiply-accumulate
    d only source operand for accumulate
    accumulator has special forwarding if type and size matches
    
    Q,D,D ("long") versions are the most powerful; normal D,D,D aren't faster.
    accumulation has no extra cost.
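    (e.g. a long multiply-accumulate "vmlal.s16 Qd,Dn,Dm" costs the same as a
    plain "vmul.i16 Dd,Dn,Dm")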
    
    (8×8→8)[8], (8×8→16)[8], (16×16→16)[4], (16×16→32)[4]
            -       n,m     d       -       -       -       >d
    
    scalar needed one cycle earlier to dup it
    
    16[4]×16→16[4], 16[4]×16→32[4]
            m       n       d       -       -       -       >d
    
    32-bit is implemented as two sets of 16×32 multiplies; second operand is needed
    one cycle earlier to dup it appropriately (for both scalar and vector).
    
    (32×32→32)[2], (32×32→64)[2], 32[2]×32→32[2], 32[2]×32→64[2]
            m       n       d       -       -       -       -
                    -       -       -       -       -       -       >d
    
    The Q,Q,(Q/scalar) versions behave like back-to-back D,D,(D/scalar) ones.
    
    vsra|vrsra  Qd,Qm,#
            m       -       d       -       -       -       >d
    
    
    
    --------------------------------------------------------------------------------
    
    no forwarding between integer and floating-point?  beware of "vmov" !
    
    
    vector float, most ops, D,D,D
            -       n,m     -       -       -       >d
    
    scalar multiply again needs second operand early
            m       n       -       -       -       >d
    
    vp(add|min|max).f need to transpose their data
            n,m     -       -       -       -       >d
    
    multiply-accumulate
            -       n,m     -       -       d       -       -       -       -       -       >d
                                            d       -       -       -       -       -       >d
    
    Again, the Q,Q,Q/scalar versions behave like back-to-back D,D,D/scalar ones.
    
    
    --------------------------------------------------------------------------------
    
    assumes alignment specifier equal to size of load or @128
    (@256 has no added value)
    lesser alignment spec prepends 1 cycle
    3-reg behaves like misaligned 4-reg
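    e.g. "vld1.32 {d0,d1}, [r0@128]" gets the timings below, while plain
    "vld1.32 {d0,d1}, [r0]" takes one cycle extra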
    
    vld1  1- or 2-reg
            -       >d
    
    vld1  4-reg
            -       >d0,d1
                    -       >d2,d3
    
    vld2  2-reg
    vld1|vld2  all lanes
            -       -       >d
    
    vld2  4-reg
            -       -       >d0,d2
                    -       -       >d1,d3
    
    vld4  all lanes
            -       -       >d0,d1
                    -       -       >d2,d3
    
    vld4  4-reg
            -       -       -
                    -       -       >d0,d1
                            -       -       >d2,d3
    
    
    "to one lane" prepends separate read-cycle(s)
    
    vld1|vld2  to one lane, aligned
            d       -       -
                    -       -       >d
    
    vld1|vld2  to one lane, misaligned
            d       -       -
                    -       -       -
                            -       -       >d
    
    vld4  to one lane, aligned
            d0,d1   -       -
                    d2,d3   -       -
                            -       -       >d0,d1
                                    -       -       >d2,d3
    
    vld4  to one lane, misaligned
            d0,d1   -       -
                    d2,d3   -       -
                            -       -       -
                                    -       -       >d0,d1
                                            -       -       >d2,d3
    
    --- Double Post Merged, Mar 2, 2017, Original Post Date: Mar 2, 2017 ---
    It's maybe worth mentioning that I discovered the vmov issue thanks to this remark:

    One of the most important features in Cortex A8 for efficient execution of code is the extensive support of key forwarding paths.
    ARM Cortex A8: A High Performance Processor for Low Power Applications, David Williamson

    Although he's talking about the integer core there, it immediately made me wonder exactly which "non-key" forwarding paths are not supported by the processor. When I ran into weird inconsistencies in my attempts to establish the timings of NEON floating-point instructions, I remembered this remark and was able to determine empirically that missing forwarding paths were the root cause of the strange results I was getting.

    Unfortunately I don't seem to have recorded exactly which combinations I tested. Presumably my thinking was that, instead of error-prone manual testing, this would be better established by automated, exhaustive testing of exactly which forwarding paths exist, but of course I never got around to implementing that.

    The paper above is highly recommended reading, btw, for anyone who wants to understand what's going on inside the Cortex-A8.

    All testing was done on an AM335x (Cortex-A8 r3p2). The DM37xx has the same Cortex-A8 revision afaik, hence the same timings should apply. Hopefully the older core revisions on the OMAP35xx aren't too different.
     
  2. notaz

    notaz Certified Guru

    Joined:
    Aug 23, 2005
    Messages:
    4,845
    Location:
    Lithuania
  3. zmatt

    zmatt Member

    Joined:
    Oct 31, 2015
    Messages:
    65
    Location:
    Netherlands
    It's useful, but it's still just a simulator based on someone's understanding of the A8. In particular, it doesn't seem to be aware of the lack of forwarding between the integer and floating-point NEON pipelines. For example, if you run these four instructions in a loop:
    Code:
    vmov     d1, d0
    vadd.f32 d2, d1, d1
    vmov     d3, d2
    vadd.f32 d0, d3, d3
    Then pulsar thinks it should take 12 cycles per iteration. It lets the vadd.f32 issue one cycle after the vmov, and then four stall cycles are inserted before the next vmov to let the vadd.f32 complete.

    In reality, however, this loop takes 28 cycles per iteration, i.e. 7 cycles per instruction, which means each instruction has to retire from the pipeline before the next one can be issued. This is the result of the data dependencies combined with the missing forwarding paths.

    Using vmul.f32 instead of vadd.f32 yields the same results.

    I'm not sure what you're saying. vmov (NEON-to-NEON) is just a special case of vorr and hence executes in the integer ALU pipeline. If it's moving the result of an integer instruction, the result will simply be forwarded as expected. If you replace vadd.f32 with vadd.i32 in the example above, pulsar correctly predicts 8 cycles per iteration, and it likewise correctly predicts 18 cycles per iteration when using vmul.i32.
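
    For reference, that modified loop (same register pattern as above, just integer adds):
    Code:
    vmov     d1, d0
    vadd.i32 d2, d1, d1
    vmov     d3, d2
    vadd.i32 d0, d3, d3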
    --- Double Post Merged, Mar 4, 2017, Original Post Date: Mar 4, 2017 ---
    Note BTW that you don't need a simulator to see exactly what the A8 is doing. You can just ask the A8 itself, i.e. configure the ETM for cycle-accurate trace and record a trace of the code snippet of interest into the ETB (you can start/stop the trace with the "dbg" instruction).

    I still haven't gotten around to playing with this myself, but I'm available for questions if anyone wants to try it. (I also noticed there are Linux drivers for some CoreSight stuff, but I haven't investigated them yet.)
     
  4. Exophase

    Exophase Nothing good will ever come of Exophase.

    Joined:
    Sep 21, 2006
    Messages:
    10,284
    Location:
    Cleveland OH
    I think most of this can be inferred from the pipeline stage descriptions in the TRM (DDI0344C section 16.6), although there may be some errors. In particular, some "preprocessing" operations (like vmovn) take inputs at N1 and produce outputs at N2, while normal ALU operations take inputs at N2 and produce outputs at N3 or later, so the staggering allows you to put these back to back like you discovered. Some permute instructions can also dual issue, although there are odd limitations with dual issuing.
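
    For example (a sketch based on that stagger plus zmatt's tables: vmovn produces its result at N2, and a normal ALU op doesn't read its inputs until N2, so the pair can run back to back):
    Code:
    vmovn.i32  d0, q1        @ "preprocessing" op: reads at N1, writes at N2
    vadd.i16   d2, d0, d3    @ reads d0 at N2, so it can issue the very next cycle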

    Going from NEON int to FP can be a problem, but it's even worse if you're going from VFP to NEON, probably because VFP is not pipelined. If you do a long-latency VFP instruction like fdiv, you can't issue any other VFP or NEON instructions at all in its shadow. If you can get a lot of divisions queued up, you're a lot better off using NEON Newton-Raphson iterations than fdivs, unless you can perform meaningful scalar work in parallel or you absolutely need double precision (over 60 cycles for that one...)

    These are some rambling notes I kept years ago; I can't say for sure that it's all 100% accurate though:

    Code:
    - General
      - vmul/vmla family to the same destination are the only instructions that
        can be issued back to back (most integer instructions have at least 2
        cycles of latency). A vmla can perform a multiplication, addition, and a
        widening, so sometimes it's useful to use them where other instructions
        would normally be used to perform a subset of these tasks. For instance,
        it's faster to chain vmla than vsli instructions, with a multiply
        used instead of a shift (see the sketch below).
      - vext can be used to perform a byte rotation by using the same source and
        destination.
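        (e.g. "vext.8 d0, d0, d0, #3" rotates d0 by 3 bytes)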
    
    - Dual issuing
      - You can dual issue a load and use its value the same cycle due to the way
        the pipeline is staggered; this can save register pressure
      - You can dual issue a dup to save register pressure too, although you can't
        use its result the same cycle
      - vext can be used to dual issue a 64-bit move (use a parameter of #0)  
      - If you dual issue too much, throughput seems to go down, regardless of
        which instructions are involved. Surprisingly, you can end up with lower
        performance than if you used non-pairing instructions instead - you can
        sometimes improve performance by inserting nops! I haven't found precise
        information on this or on why it happens, but it's a useful thing to look
        out for in code (try counting cycles until you get mysterious delays,
        then start spreading out the pairs more)
      - Dual issuing anything with a multiply-accumulate instruction seems to mess 
        up its special accumulator-forwarding path, sticking you with the 5 cycle
        latency.
     
    - Loads and stores
      - Loads to single lanes appear to be much slower than the TRM suggests (~7
        cycles instead of 2). Avoid them if possible: a load to an ARM register plus
        an ARM to NEON move is faster.
        UPDATE: I think this only applies to 8-bit and 16-bit lane loads.
      - Stores don't seem to allocate in L2 cache, regardless of page settings. If
        you're writing several times to a small data structure that isn't in L2, it
        can help to "warm" it using ARM or NEON loads beforehand. Don't use pld:
        these won't always actually commit unless you slow down the loop a lot.
      - Loading from a region that has just been stored to causes a stall penalty
        of at least 20 cycles. This can happen unknowingly if you perform a read-
        modify-write sequence using an unaligned pointer, because unaligned accesses
        are split into multiple aligned accesses that are larger than the requested
        size and will therefore overlap across consecutive requests. One technique
        to avoid this is to keep the load pointer one block ahead of the store
        pointer and use a register move from the next location to the current one
        (see the sketch after this block).
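
    Here's a minimal sketch of that load-ahead technique (hypothetical registers; note it reads one block past the end of the buffer, so the tail needs separate handling):
    Code:
    @ in-place read-modify-write over an unaligned buffer at r0, r2 = block count
        vld1.8   {q0}, [r0]        @ prime: load the first 16-byte block
        add      r1, r0, #16       @ load pointer runs one block ahead
    loop:
        vld1.8   {q1}, [r1]!       @ load the NEXT block; staying one ahead means
                                   @ it never reads a region just stored to
        vadd.i8  q0, q0, q2        @ (example modification)
        vst1.8   {q0}, [r0]!       @ store the current block
        vorr     q0, q1, q1        @ register move: next block becomes current
        subs     r2, r2, #1
        bne      loop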
    
    vmul/vmla chains can be super useful because of the low latency; they can be heavily preferable to the insert instructions, for example.
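
    A hypothetical sketch of that substitution (vmla with a power-of-two multiplier stands in for vsli, assuming the destination bits being shifted into are zero so the add behaves like an insert):
    Code:
    @ q2 preloaded with 0x0100 in every 16-bit lane, so the multiply is "<< 8"
    vmla.i16  q0, q1, q2     @ q0 += q1 << 8; back-to-back vmla to the same
                             @ destination can issue every cycle
    @ "vsli.16 q0, q1, #8" does roughly the same job, but chained vsli to the
    @ same destination cannot issue back to back the way vmla can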

    Going from NEON to integer regs has a well-known, large performance penalty, and you can't hide anything in its shadow that uses scalar registers. Not sure if you can hide NEON-register-only ops instead. I think you can hide nops and branches, but that's not very useful. Generally you're a lot better off transferring through memory instead, where it really will only interlock when that particular cache line is read back.
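
    A sketch of what that looks like (hypothetical registers; the stack slot is assumed to be reserved):
    Code:
    @ instead of "vmov r0, r1, d0", which stalls the ARM pipeline:
    vst1.32  {d0}, [sp]      @ spill the NEON result to memory
    @ ... independent work here hides the store latency ...
    ldr      r0, [sp]        @ interlocks only when the line is read back
    ldr      r1, [sp, #4]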
     
