In case they're useful, below are some notes I once took about the timings of NEON instructions on the Cortex-A8.
One thing that caused me quite a headache before I figured it out is that there's no forwarding between the integer and floating-point pipelines in the NEON unit. This may seem harmless until you realize that "vmov" executes in the integer pipeline, which means that if you move a value around using vmov between floating-point operations, you'd better keep a lot of distance between those instructions, or the dispatcher will enforce that distance for you with stalls! Alternatively, you can try replacing the vmov with an instruction that executes in the load/store/permute pipeline.
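For example (a minimal, untested sketch; the registers and the exact stall counts are just illustrative, the point is which pipeline each instruction runs in):

Code:
        .syntax unified
        .arch   armv7-a
        .fpu    neon
        .text
copy_between_float_ops:
        @ Bad: vmov Dd,Dm is an alias for vorr and runs in the integer
        @ pipeline, so the value crosses the float->int and int->float
        @ boundaries and stalls on both, since there's no forwarding
        @ between those pipelines.
        vmul.f32  d0, d1, d2
        vmov      d3, d0
        vmul.f32  d4, d3, d5

        @ Better: do the copy in the load/store/permute pipeline instead,
        @ e.g. vext with a zero immediate is just a register copy there
        @ (or schedule plenty of independent work around the vmov).
        vmul.f32  d0, d1, d2
        vext.8    d3, d0, d0, #0
        vmul.f32  d4, d3, d5
        bx        lr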
Anyway, I hope my notation makes sense; otherwise just ask.
Code:
================ NEON ==========================================================
---------- Permute instructions (LSP pipeline) ---------------------------------
VFP seems able to start a cycle sooner if waiting on LSP than on arithmetic?
vmovn Dd,Qm
vdup Qd,Dm[x]
vdup Dd,Dm[x]
vrev Dd,Dm
vrev Qd,Qm
m - >d
vtrn Dd,Dm
vswp Dd,Dm
vzip Dd,Dm
vuzp Dd,Dm
m,d - >m,>d
vtrn Qd,Qm
vswp Qd,Qm
ml,dl - >ml,>dl
mh,dh - >mh,>dh
vzip Qd,Qm
ml,dl - -
mh,dh - >d
- - >m
vuzp Qd,Qm
d - -
m - >ml,>dl
- - >mh,>dh
vext Dd,Dn,Dm,#
n,m - >d
vext Qd,Qn,Qm,#
n - -
m - >d
vtbl|vtbx Dd,{Dn},Dm
vtbl|vtbx Dd,{Dn,Dn+1},Dm
- m,d - (d only source for vtbx)
n,n+1 - >d
vtbl|vtbx Dd,{Dn,Dn+1,Dn+2},Dm
vtbl|vtbx Dd,{Dn,Dn+1,Dn+2,Dn+3},Dm
- m,d - (d only source for vtbx)
n,n+1 - -
n+2,n+3 - >d
---------- Integer instructions ------------------------------------------------
vadd|vand|vorr|veor|vbic|vorn Qd,Qn,Qm
- n,m - >d
vneg Qd,Qm
vshr|vshl Qd,Qm,#
vshrn Dd,Qm,#
vshll Qd,Dm,#
vmovl Qd,Dm
m - - >d
vsub Qd,Qn,Qm
vadd|vsub Qd,Qn,Dm (wide)
m n - >d
vadd|vsub Qd,Dn,Dm (long)
n,m - - >d
vsli|vsri Dd,Dm,#
m,d - - >d
vsli|vsri Qd,Qm,#
ml,dl - - >dl
mh,dh - - >dh
vqshl|vrshr Qd,Qm,#
vqshrn|vrshrn|vqrshrn|vqmovn Dd,Qm{,#}
m - - - >d
v(h|rh|q)add|vtst Qd,Qn,Qm
v{r}addhn Dd,Qn,Qm
- n,m - - >d
v{r}subhn Dd,Qn,Qm
mh n,ml - - >d
v(h|q)sub Qd,Qn,Qm
vabd|vc(eq|ge|gt)|vmax|vmin Qd,Qn,Qm
vfmx|vfmn Dd,Dn,Dm
m n - - >d
vector integer multiply / multiply-accumulate
d only source operand for accumulate
accumulator has special forwarding if type and size matches
Q,D,D ("long") versions are the most powerful; normal D,D,D aren't faster.
accumulation has no extra cost. (see the vmlal example below the table)
(8×8→8)[8], (8×8→16)[8], (16×16→16)[4], (16×16→32)[4]
- n,m d - - - >d
scalar needed one cycle earlier to dup it
16[4]×16→16[4], 16[4]×16→32[4]
m n d - - - >d
32-bit is implemented as two sets of 16×32 multiplies; second operand is needed
one cycle earlier to dup it appropriately (for both scalar and vector).
(32×32→32)[2], (32×32→64)[2], 32[2]×32→32[2], 32[2]×32→64[2]
m n d - - - -
- - - - - - >d
The Q,Q,(Q/scalar) versions behave like back-to-back D,D,(D/scalar) ones.
vsra|vrsra Qd,Qm,#
m - d - - - >d
---------- Floating-point instructions -------------------------------------------
no forwarding between integer and floating-point? beware of "vmov" !
vector float, most ops, D,D,D
- n,m - - - >d
scalar multiply again needs second operand early
m n - - - >d
vp(add|min|max).f need to transpose their data
n,m - - - - >d
multiply-accumulate
- n,m - - d - - - - - >d
d - - - - - >d
Again, the Q,Q,Q/scalar versions behave like back-to-back D,D,D/scalar ones.
---------- Load instructions ------------------------------------------------------
assumes alignment specifier equal to size of load or @128
(@256 has no added value)
lesser alignment spec prepends 1 cycle (see the vld1 example below the table)
3-reg behaves like misaligned 4-reg
vld1 1- or 2-reg
- >d
vld1 4-reg
- >d0,d1
- >d2,d3
vld2 2-reg
vld1|vld2 all lanes
- - >d
vld2 4-reg
- - >d0,d2
- - >d1,d3
vld4 all lanes
- - >d0,d1
- - >d2,d3
vld4 4-reg
- - -
- - >d0,d1
- - >d2,d3
"to one lane" prepends separate read-cycle(s)
vld1|vld2 to one lane, aligned
d - -
- - >d
vld1|vld2 to one lane, misaligned
d - -
- - -
- - >d
vld4 to one lane, aligned
d0,d1 - -
d2,d3 - -
- - >d0,d1
- - >d2,d3
vld4 to one lane, misaligned
d0,d1 - -
d2,d3 - -
- - -
- - >d0,d1
- - >d2,d3
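To make the "long multiply" note in the table concrete (the Q,D,D forms cost the same as plain D,D,D ones and accumulation is free), here's an untested sketch of an int16 multiply-accumulate loop; the function name and the r0/r1/r2 argument convention are made up for illustration:

Code:
        .syntax unified
        .arch   armv7-a
        .fpu    neon
        .text
@ r0, r1: pointers to int16 arrays (8-byte aligned), r2: count (multiple of 4).
@ The partial sums are left as four 32-bit lanes in q0.
macc16:
        vmov.i32  q0, #0             @ clear the 32-bit accumulators
1:      vld1.16   {d2}, [r0:64]!     @ 4 x int16 from the first array
        vld1.16   {d3}, [r1:64]!     @ 4 x int16 from the second array
        vmlal.s16 q0, d2, d3         @ q0 += d2*d3: 16x16->32, widening and
                                     @ accumulating at no extra cost
        subs      r2, r2, #4
        bgt       1b
        bx        lr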
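And to show what the alignment specifiers in the load timings refer to, here's the GNU as syntax; per the notes above, a hint of @128 (or the full transfer size) gives the listed timings, @256 buys nothing extra, and a lesser hint (presumably including no hint at all) costs an extra cycle up front:

Code:
        .syntax unified
        .arch   armv7-a
        .fpu    neon
        .text
vld1_alignment_demo:
        vld1.8  {d0-d3}, [r0:256]!   @ :256 hint, no faster than :128
        vld1.8  {d0-d3}, [r0:128]!   @ :128 hint, the timings listed above
        vld1.8  {d0-d3}, [r0]!       @ no hint: presumably counts as "lesser", so +1 cycle
        bx      lr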
It's maybe worth mentioning that I discovered the vmov issue thanks to this remark:
"One of the most important features in Cortex A8 for efficient execution of code is the extensive support of key forwarding paths."
— ARM Cortex A8: A High Performance Processor for Low Power Applications, David Williamson
Although he's talking about the integer core there, it immediately made me wonder which "non-key" forwarding paths are not supported by the processor. When I kept getting weird inconsistencies while trying to establish timings of NEON floating-point instructions, I remembered this remark and was able to determine empirically that missing forwarding paths were the root cause of the strange results.
Unfortunately I don't seem to have recorded exactly which combinations I tested. Presumably my thinking was that, rather than error-prone manual testing, this would be better established by some automated, exhaustive test of exactly which forwarding paths exist, but of course I never got around to implementing that.
The paper above is highly recommended reading, btw, for anyone who wants to try to understand what's going on inside the Cortex-A8.
All testing was done on an AM335x (Cortex-A8 r3p2). The DM37xx has the same Cortex-A8 revision afaik, hence the same timings should apply. Hopefully the older core revisions on the OMAP35xx aren't too different.