Rethinking Floating Point On Pandora


Exophase said:
The documents do not say so outright, but if you look at the timing charts for NEON instructions you will see that the results are written when the relevant execution stage is ready, which varies per instruction. So the latency should only be the difference between the result stage for the register you want and the source stage for the instruction where you want to use it. For floating point operations that's usually N5 (as opposed to N6), and for integer it's much sooner.

I imagine that the reason why VFP on NEON takes the full pipeline latency (probably always writes back at N6 and probably always reads at N1 if not sooner) is because of the conversion boilerplate going between the two and not because of general limitations in the NEON pipeline. The NEON pipe might not be capable of writing partial results (scalar instead of vector) or to non-aligned regions, which would require VFP operations to go to a temporary register so that the operation doesn't damage the register file.

you're right - after revisiting the specs it seems the full NFP pipeline ride is a special service to VFP clients. i stand corrected.

Exophase said:
Personally, I think Laurent has been greatly exaggerating the cost here. A latency of 7 cycles instead of 6 cycles is not exactly earth shattering, or at least it's nothing compared to going to that from a throughput of 8-9+ cycles. The VFP11 that you have much praise for also only saved one latency cycle by forwarding. In fact, forwarding makes the latency for the main arithmetic instructions on VFP11 exactly the same as it is in runfast mode on NEON. 7-8 cycles is hardly anything praiseworthy.

unless i'm reading something wrong, the difference between scalar VFP11 and VFPv3_lite + NFP is quite seriously not in favor of the latter. forget latency differences between the two (yes, they're comparable) - it's the throughput that kills the scalar fp on the A8: the documents clearly state 'execution time' for all runfast VFPv3 delegations. no pipelining whatsoever - just an execution speed-up. in comparison, VFP11 murders them with its fully-pipelined design across its three largely-independent pipelines.

now, the situation entirely reverses when it comes to vector performance - neon clearly has the upper hand there, if nothing else for its modeless vector execution - VFP11 suffers badly there from serialization at mode changes, as was already noted in this thread, and then some (vector divisions stall both the DS and FMAC pipes, etc).
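
just to make the vector-side contrast concrete, a minimal sketch of the NEON path using gcc's arm_neon.h intrinsics - my own illustration, not from any doc; the function name and the multiple-of-4 handling are arbitrary:

Code:
/* adds two float arrays four lanes at a time; build with
   -mfpu=neon -mfloat-abi=softfp (or hardfp where available) */
#include <arm_neon.h>

void add_arrays_neon(float *dst, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      /* load 4 singles */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));  /* 4 adds in one instruction */
    }
    for (; i < n; i++)                          /* scalar tail falls back to VFP */
        dst[i] = a[i] + b[i];
}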

Exophase said:
I think that a proper benchmark for runfast vs IEEE VFP needs to be done. Bytemark results are not valid because they use double precision. All we've seen is the improvement in Quake 2, which may not be as floating point dependent as we've been led to believe (or else I don't think Wiz would be running it at even the speed it does).
i, too, don't think q2 is a good fpu benchmark, not in its software-rasterized form anyway - there the integer rasterization should be the single tightest spot in the rendering loop. unless i'm thinking of Q1 - it's been years since i looked at them closely.
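
fwiw, if anyone wants to settle the latency-vs-throughput question directly, something along these lines would do - a rough sketch of my own, iteration counts and the timing method are arbitrary:

Code:
/* times a dependent multiply-add chain (bounded by latency) against four
   independent chains (bounded by throughput, if the unit really pipelines).
   build once with -mfpu=vfp and once with -mfpu=neon -ffast-math and compare.
   link with -lrt on older glibc for clock_gettime. */
#include <stdio.h>
#include <time.h>

#define ITERS 10000000

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    float a = 1.0f, b = 1.0f, c = 1.0f, d = 1.0f;
    volatile float sink;
    double t0;
    int i;

    t0 = seconds();
    for (i = 0; i < ITERS; i++)
        a = a * 1.000001f + 0.5f;        /* each op waits for the previous one */
    printf("dependent:   %f s\n", seconds() - t0);
    sink = a;

    t0 = seconds();
    for (i = 0; i < ITERS; i++) {
        a = a * 1.000001f + 0.5f;        /* four independent accumulators give  */
        b = b * 1.000001f + 0.5f;        /* the pipeline something to overlap,  */
        c = c * 1.000001f + 0.5f;        /* if it can                           */
        d = d * 1.000001f + 0.5f;
    }
    printf("independent: %f s (4x the work per iteration)\n", seconds() - t0);
    sink = a + b + c + d;
    (void)sink;
    return 0;
}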
 
darkblu said:
unless i'm reading something wrong, the difference between scalar VFP11 and VFPv3_lite + NFP is quite seriously not in favor of the latter. forget latency differences between the two (yes, they're comparable) - it's the throughput that kills the scalar fp on the A8: the documents clearly state 'execution time' for all runfast VFPv3 delegations. no pipelining whatsoever - just an execution speed-up. in comparison, VFP11 murders them with its fully-pipelined design across its three largely-independent pipelines.
The documentation doesn't say that VFPU instructions executing in the NEON unit are not pipelined, it says that they take 7 cycles to execute because of lack of register forwarding. Why would a lack of register forwarding to subsequent instructions block any further instructions from executing, regardless of whether or not they're accessing the registers being written to? I think that execution time in this context refers to latency, not throughput - in fact unless there's good reason to believe otherwise that's usually what it will refer to.
 
Exophase said:
darkblu said:
unless i'm reading something wrong, the difference between scalar VFP11 and VFPv3_lite + NFP is quite seriously not in favor of the latter. forget latency differences between the two (yes, they're comparable) - it's the throughput that kills the scalar fp on the A8: the documents clearly state 'execution time' for all runfast VFPv3 delegations. no pipelining whatsoever - just an execution speed-up. in comparison, VFP11 murders them with its fully-pipelined design across its three largely-independent pipelines.
I'm talking about VFP in runfast mode, or at least the common single precision operations that run on the NEON unit. It is however still VFP code that is being executed and it is definitely not non-pipelined (how can it run on the NEON unit but not be pipelined?) The throughput is the same as NEON although the latency is fixed at 7 cycles. Why even say that it has 7 cycle latency if the throughput is worse than that? It makes no sense.

correct me if i'm wrong, but nowhere does the spec mention throughput (i.e. issue rate) for runfast VFPv3 code - the specs talk strictly about 'execution time', i.e. latency = throughput. so runfast does not 'see' the NFP pipeline as such - it sees it as an execution unit/compact stage. see 'Example 16-8 Instruction sequence for VFP pipeline' on page 16-50 in the cortex-A8 TRM.

ed: bah, that sample code in the TRM is dependency-ridden. bad example.
 
They must have added page 16-50 at some point since it wasn't in my TRM copy :/ I wish ARM wouldn't play these word games because I don't think that they clearly demonstrated how things are at all. The cycle counts they listed in the example speak for themselves though (if it really is running in runfast mode)

The second FMACS shouldn't be dependent on the first one, unless there's a write after write stall.
 
Exophase said:
They must have added page 16-50 at some point since it wasn't in my TRM copy :/ I wish ARM wouldn't play these word games because I don't think that they clearly demonstrated how things are at all. The cycle counts they listed in the example speak for themselves though (if it really is running in runfast mode)

You mean you don't carefully read what I write? :blink: I posted the link in that very same thread ;) http://www.gp32x.de/board/index.php?s=&am...st&p=696740
Basically runfast mode is not very useful; the only high-speed option is NEON vector instructions.
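
For reference, switching the VFP into RunFast mode is just a matter of flipping FPSCR bits; a sketch along these lines should do it (flush-to-zero, default NaN, all exception trap enables cleared - bit positions per the ARM ARM, so double-check on real hardware before relying on it):

Code:
/* put the FPSCR into the RunFast configuration:
   FZ (bit 24) and DN (bit 25) set, trap enables (bits 8-12, 15) cleared.
   build with -mfpu=vfp or -mfpu=neon so fmrx/fmxr are accepted. */
static void enable_runfast(void)
{
    unsigned int fpscr;
    __asm__ volatile ("fmrx %0, fpscr" : "=r" (fpscr));
    fpscr |=  (1u << 24) | (1u << 25);
    fpscr &= ~((1u << 8) | (1u << 9) | (1u << 10) |
               (1u << 11) | (1u << 12) | (1u << 15));
    __asm__ volatile ("fmxr fpscr, %0" : : "r" (fpscr));
}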
 
Exophase said:
They must have added page 16-50 at some point since it wasn't in my TRM copy :/ I wish ARM wouldn't play these word games because I don't think that they clearly demonstrated how things are at all. The cycle counts they listed in the example speak for themselves though (if it really is running in runfast mode)

The second FMACS shouldn't be dependent on the first one, unless there's a write after write stall.
you're right - unless there's some quirky interlock, that second fmacs should not get serialized as it does in the example. it appears my original reading was right.

@laurent: the TRM pdf is slightly more elaborate (alas not entirely clear) than the online docs (yes, i know the pdf is linked on that page you referred to : ) IMO, runfast does help notably with some classes of VFP ops, but in general the non-pipelineability of the unit cripples it badly. VFP11 is much better for scalars.
 
darkblu said:
@laurent: the TRM pdf is slightly more elaborate (alas not entirely clear) than the online docs (yes, i know the pdf is linked on that page you referred to : ) IMO, runfast does help notably with some classes of VFP ops, but in general the non-pipelineability of the unit cripples it badly. VFP11 is much better for scalars.
Oh I see what you mean. They claim 7 cycles for all runfast mode instructions, which would include instructions that are much slower in standard VFP such as F*MAC. I should have drunk more coffee before my previous answer :p Apologies to Exophase :)
 
Laurent said:
Exophase said:
They must have added page 16-50 at some point since it wasn't in my TRM copy :/ I wish ARM wouldn't play these word games because I don't think that they clearly demonstrated how things are at all. The cycle counts they listed in the example speak for themselves though (if it really is running in runfast mode)

You mean you don't carefully read what I write? :blink: I posted the link in that very same thread ;) http://www.gp32x.de/board/index.php?s=&am...st&p=696740
Basically runfast mode is not very useful; the only high-speed option is NEON vector instructions.


I did read what you wrote and saw the link, I just didn't realize that they added timing examples in the pages following it. Please understand that I found their wording of "execution time" to be ambiguous at best. I think the whole thing is really obfuscated (intentionally?); saying that it runs on the NEON pipeline merely w/o register forwarding certainly shouldn't suggest that it's not actually pipelining execution.

The "half-vector" mode would sacrifice half the register space but it's a small price to pay in comparison, IMO. Should definitely be available as an option in GCC if at all possible.
 
Exophase said:
The "half-vector" mode would sacrifice half the register space but it's a small price to pay in comparison, IMO. Should definitely be available as an option in GCC if at all possible.
one potential problem with that half-vector mode that i see is that in a compiler context it can be rather tricky to re-inject the NFP output back into the ARM pipeline - from what i can tell it takes both pipelines' full latencies - i.e. the result getting to WB stage in NFP, followed by an entire ARM pipeline serialization - the compiler will likely fumble if left alone to reshuffle that; tackling the task efficiently would likely need high-level intervention (read: algorithmic accommodation).
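
to make it concrete, the kind of innocuous-looking code i have in mind (a sketch of my own, names arbitrary) - both the conversion and the compare drag data from the NFP side back to the ARM side:

Code:
/* the int conversion and the branch both need the FP result in an ARM
   register, so the value rides the NFP pipeline to WB and then re-enters
   the ARM pipeline - both latencies paid back to back */
int bucket_for(float value, float scale)
{
    int idx = (int)(value * scale);   /* fmuls + ftosis + transfer to ARM reg */
    if (idx > 255)                    /* branch on data that came from the FP side */
        idx = 255;
    return idx;
}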

Exophase said:
Please understand that I found their wording of "execution time" to be ambiguous at best. I think the whole thing is really obfuscated (intentionally?); saying that it runs on the NEON pipeline merely w/o register forwarding certainly shouldn't suggest that it's not actually pipelining execution.
i believe it's a matter of 'newness' - cortex's documentation needs more work till it reaches the expected level of comprehensiveness. in comparison, v6 docs are elaborate and clear.
 
darkblu said:
one potential problem with that half-vector mode that i see is that in a compiler context it can be rather tricky to re-inject the NFP output back into the ARM pipeline - from what i can tell it takes both pipelines' full latencies - i.e. the result getting to WB stage in NFP, followed by an entire ARM pipeline serialization - the compiler will likely fumble if left alone to reshuffle that; tackling the task efficiently would likely need high-level intervention (read: algorithmic accommodation).
For straightforward operations there are not necessarily reasons why floats should interact with integer instructions in C/C++ to begin with (please tell me otherwise, save for cases where performance would probably be compromised anyway), but I don't know what GCC's general strategy is for this. Although there could be stalls due to accessing the same regions of the stack for local variables between integer and NEON registers. This would apply to vectorized and intrinsic generated code as well.

darkblu said:
i believe it's a matter of 'newness' - cortex's documentation needs more work till it reaches the expected level of comprehensiveness. in comparison, v6 docs are elaborate and clear.
It isn't very hard to document things in terms of latency and throughput like most things are, like VFP11 is, and Cortex-A8 isn't really very new anymore. I suspect that they're trying to downplay the poor performance of VFP in runfast mode, and that things are also becoming obfuscated due to ARM's move towards more non-disclosure policies.
 
Exophase said:
For straightforward operations there are not necessarily reasons why floats should interact with integer instructions in C/C++ to begin with (please tell me otherwise, save for cases where performance would probably be compromised anyway), but I don't know what GCC's general strategy is for this. Although there could be stalls due to accessing the same regions of the stack for local variables between integer and NEON registers. This would apply to vectorized and intrinsic generated code as well.

no, not many reasons to intermix fp with int work on the same data. but still hypothetically, at some level of optimisation a compiler (or some deliberate code, for that matter) could go 'down and dirty', ie. integer, on some fp data for purposes like equality/inequality comparisons, abs values, exp/mantissa extraction, nan masking - stuff like that.
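
a sketch of the kind of 'integer on floats' treatment i mean (my own code, names arbitrary) - perfectly sane on many platforms, but on the A8 every one of these drags the value across the NEON/ARM register-file boundary:

Code:
#include <string.h>
#include <stdint.h>

static inline uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* well-defined type pun */
    return u;
}

static inline float bits_float(uint32_t u)
{
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

float fast_fabs(float x)        /* clear the sign bit */
{
    return bits_float(float_bits(x) & 0x7fffffffu);
}

int is_nan(float x)             /* exponent all ones, non-zero mantissa */
{
    return (float_bits(x) & 0x7fffffffu) > 0x7f800000u;
}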

Exophase said:
It isn't very hard to document things in terms of latency and throughput like most things are, like VFP11 is, and Cortex-A8 isn't really very new anymore. I suspect that they're trying to downplay the poor performance of VFP in runfast mode, and that things are also becoming obfuscated due to ARM's move towards more non-disclosure policies.
a valid suspicion. and i, too, don't like what arm are currently doing with the whole non-disclosure shift. never quite understood who's hiding what and from whom, anyway.
 
darkblu said:
no, not many reasons to intermix fp with int work on the same data. but still hypothetically, at some level of optimisation a compiler (or some deliberate code, for that matter) could go 'down and dirty', ie. integer, on some fp data for purposes like equality/inequality comparisons, abs values, exp/mantissa extraction, nan masking - stuff like that.
I'm not very well informed about GCC's gritty internals but hopefully these kinds of things are expressed as builtins that are handled in the arch specific backend rather than frontend optimizations. Cortex-A8 isn't the only arch that has a tangible penalty for moving between FP and integer register files, nor is it the only arch that has decent instructions for a lot of those things (as compared to the overhead of moving the data around, even on a platform where it's fast). Still, we can look at what it's doing now with VFP output.
 
ok, i went a bit overboard, i admit - a cc's backend will surely refrain from doing anything as stupid as stalling the pipeline deliberately. but somebody's "optimised" code might, in which case the compiler will likely fail to fully mitigate the damage.

*goes to search for any integer optimisations in own old fp code*
 
darkblu said:
ok, i went a bit overboard, i admit - a cc's backend will surely refrain from doing anything as stupid as stalling the pipeline deliberately. but somebody's "optimised" code might, in which case the compiler will likely fail to fully mitigate the damage.

*goes to search for any integer optimisations in own old fp code*
Funny you mention that ... I actually wrote code like that on iPhone, basically doing float ops and then simply casting results to shorts and storing them in my custom vertex buffer memory which was defined to take GL_SHORT for position and uv coordinates.

Gcc was generating code that was basically using FSITO and FMRS , followed by strh ...

This was done to optimize bandwidth usage and on that platform it actually pays off big compared to simply pushing float-based buffers (mostly due to a crappy OpenGL driver implementation which insists on copying data around on every drawElements call - VBOs or not)

My point is that this sort of code sometimes actually does make sense ..
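
Roughly what the C side looked like - reconstructed from memory, the struct layout and names here are made up, not my actual code:

Code:
/* do the math in float, store GL_SHORT into a packed vertex;
   gcc ends up emitting roughly fmuls + ftosi + fmrs + strh per field */
#include <stdint.h>

typedef struct {
    int16_t x, y, z;      /* GL_SHORT positions */
    int16_t u, v;         /* GL_SHORT texcoords */
} PackedVertex;

void pack_vertex(PackedVertex *out, const float *pos, const float *uv, float scale)
{
    out->x = (int16_t)(pos[0] * scale);
    out->y = (int16_t)(pos[1] * scale);
    out->z = (int16_t)(pos[2] * scale);
    out->u = (int16_t)(uv[0] * scale);
    out->v = (int16_t)(uv[1] * scale);
}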
 
warmi said:
Funny you mention that ... I actually wrote code like that on iPhone, basically doing float ops and then simply casting results to shorts and storing them in my custom vertex buffer memory which was defined to take GL_SHORT for position and uv coordinates.

Gcc was generating code that was basically using FSITO and FMRS , followed by strh ...

This was done to optimize bandwidth usage and on that platform it actually pays off big compared to simply pushing float-based buffers (mostly due to a crappy OpenGL driver implementation which insists on copying data around on every drawElements call - VBOs or not)

My point is that this sort of code sometimes actually does make sense ..
of course. don't get me wrong - when i said "optimised" i also meant code that was just not optimised with cortex in mind, rather than plain misguided cortex code. i, too, have written integer code for some platforms that actually does nastier things to floats than just type-cast them : )

actually what you've done on the phone (i assume you meant FTOSI rather than FSITO) may not have been that costly, as VFP11, being properly pipelined, should not have had much of an issue with that particular sequence, with a bit of compiler help. as the FTOSI is carried by the FMAC pipe, and the FMRS by the LS pipe (two pipes with no natural hazards between them), a sequence of those two ops would only need a minimal reorder in the form of a fixed offset to the FMRSs and it'd execute fairly well (FTOSI is t:1, l:8, and FMRS - t:1, l:4). the compiler would just need to be allowed to do loop strength reduction/unrolling (as surely there'd be a loop in there somewhere), and it's all good to go.
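
for illustration, the shape of loop i'd want the compiler to arrive at (a sketch of mine, count assumed to be a multiple of 4):

Code:
/* four independent convert+store chains per iteration, so the FTOSIs of
   later elements can issue on the FMAC pipe while the FMRS/strh of earlier
   ones drain through the LS pipe */
void floats_to_shorts(short *dst, const float *src, int count)
{
    int i;
    for (i = 0; i < count; i += 4) {
        dst[i + 0] = (short)src[i + 0];
        dst[i + 1] = (short)src[i + 1];
        dst[i + 2] = (short)src[i + 2];
        dst[i + 3] = (short)src[i + 3];
    }
}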

btw, speaking of iphone's vertex buffer shortcomings - i've heard it from others too, so i've been planning to investigate that particular issue. that insistence on continual buffer copies/touches sounds like too dumb a way for the driver to kick itself where it hurts most. there should be something behind it. we'll see..
 
darkblu said:
warmi said:
Funny you mention that ... I actually wrote code like that on iPhone, basically doing float ops and then simply casting results to shorts and storing them in my custom vertex buffer memory which was defined to take GL_SHORT for position and uv coordinates.

Gcc was generating code that was basically using FSITO and FMRS , followed by strh ...

This was done to optimize bandwidth usage and on that platform it actually pays off big compared to simply pushing float-based buffers (mostly due to a crappy OpenGL driver implementation which insists on copying data around on every drawElements call - VBOs or not)

My point is that this sort of code sometimes actually does make sense ..
of course. don't get me wrong - when i said "optimised" i also meant code that was just not optimised with cortex in mind, rather than plain misguided cortex code. i, too, have written integer code for some platforms that actually does nastier things to floats than just type-cast them : )

actually what you've done on the phone (i assume you meant FTOSI rather than FSITO) may not have been that costly, as VFP11, being properly pipelined, should not have had much of an issue with that particular sequence, with a bit of compiler help. as the FTOSI is carried by the FMAC pipe, and the FMRS by the LS pipe (two pipes with no natural hazards between them), a sequence of those two ops would only need a minimal reorder in the form of a fixed offset to the FMRSs and it'd execute fairly well (FTOSI is t:1, l:8, and FMRS - t:1, l:4). the compiler would just need to be allowed to do loop strength reduction/unrolling (as surely there'd be a loop in there somewhere), and it's all good to go.

btw, speaking of iphone's vertex buffer shortcomings - i've heard it from others too, so i've been planning to investigate that particular issue. that insistence on continual buffer copies/touches sounds like too dumb a way for the driver to kick itself where it hurts most. there should be something behind it. we'll see..


Yeah, it ran pretty well (although any code that uses strh in a loop is less than optimal) ..

On the other hand, the same code running on Pandora could potentially end up being dog slow - given what you guys are suggesting.

This is of interest to me because I plan to port my stuff to Pandora, and thus without a properly optimizing compiler I may have to resort to rewriting some parts in pure asm.

Anyway, at this point without having access to the real thing, I can only speculate ..
 
warmi said:
Yeah, it ran pretty well (although any code that uses strh in a loop is less than optimal) ..
Off topic, but I'm curious as to why you say this. Particularly the stress on "any code", suggesting that there's always a better alternative.

If halfwords are the natural output of your algorithm then it'll take some extra instructions and potentially registers to package them into words or multiple words for stm. If you need to do unaligned writes of small runs then this is especially defeating. For instance, writing out 8 pixels at a time for 4bpp tiles that can be shifted per-pixel, like with several emulators.

In the case where you're writing into cache then writing larger units won't save anything - a word write is one cycle and a halfword write is two, but it'll take you an extra cycle to package the two into one. stm costs a cycle for each word; you'd save some icache but that's about it.

In the case where you're not writing into cache then the write buffer will buy you some time to mask the write - if you're doing enough other code around the write then it won't make a difference. If you aren't then whether or not the wider writes will win depends on the platform. On the ARM9s on GP2X and Wiz there's a 4 address/16 word write buffer w/o coalescing, so you potentially win there. But Pandora's Cortex-A8 implements coalescing, so you don't get anything. The hardware will be merging the writes for you.

Also bear in mind that on the Wiz the burst length is a measly one word, so I don't think lining up the write buffer is going to do an awful lot for you since it won't drain any faster this way. I don't know what the burst length on GP2X is, but I do know that using stms of 8 instead of 4 didn't seem to win me anything (whereas 4 over halfwords was tangible)
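
To illustrate the two styles being compared (a sketch of mine, assuming an even pixel count and a word-aligned destination, not code from any real project):

Code:
#include <stdint.h>

void store_halfwords(uint16_t *dst, const uint16_t *src, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = src[i];                      /* one strh per pixel */
}

void store_packed_words(uint16_t *dst, const uint16_t *src, int n)
{
    /* type-puns the destination; fine in practice for a framebuffer pointer */
    uint32_t *d = (uint32_t *)dst;
    int i;
    for (i = 0; i < n; i += 2)                /* orr+lsl to pack, one str per pair */
        d[i >> 1] = (uint32_t)src[i] | ((uint32_t)src[i + 1] << 16);
}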
 
Exophase said:
warmi said:
Yeah, it ran pretty well (although any code that uses strh in a loop is less than optimal) ..
Off topic, but I'm curious as to why you say this. Particularly the stress on "any code", suggesting that there's always a better alternative.

If halfwords are the natural output of your algorithm then it'll take some extra instructions and potentially registers to package them into words or multiple words for stm. If you need to do unaligned writes of small runs then this is especially defeating. For instance, writing out 8 pixels at a time for 4bpp tiles that can be shifted per-pixel, like with several emulators.

In the case where you're writing into cache then writing larger units won't save anything - a word write is one cycle and a halfword write is two, but it'll take you an extra cycle to package the two into one. stm costs a cycle for each word; you'd save some icache but that's about it.

In the case where you're not writing into cache then the write buffer will buy you some time to mask the write - if you're doing enough other code around the write then it won't make a difference. If you aren't then whether or not the wider writes will win depends on the platform. On the ARM9s on GP2X and Wiz there's a 4 address/16 word write buffer w/o coalescing, so you potentially win there. But Pandora's Cortex-A8 implements coalescing, so you don't get anything. The hardware will be merging the writes for you.

Also bear in mind that on the Wiz the burst length is a measly one word, so I don't think lining up the write buffer is going to do an awful lot for you since it won't drain any faster this way. I don't know what the burst length on GP2X is, but I do know that using stms of 8 instead of 4 didn't seem to win me anything (whereas 4 over halfwords was tangible)



Well, I shouldn't have said "any" ... but generally, if the buffer you are writing to is contiguous and you are doing it in a loop, I empirically found that shifting halfwords around and storing the data using multiple stores is a pretty big win in terms of performance (even if it means having special-cased code for handling misaligned data at both ends of the buffer)

PS.
I am not referring to Cortex as I don't have real-world experience with their latest CPU.

My experience comes mostly from ARM7/ARM9/XScale and, more recently, ARM11

PS2.

Actually, I think you are right ... when I think about it, most of the code I wrote a couple of years back was framebuffer-related, and even though the natural output in this case was halfwords (i.e. 65K-color framebuffers), my bias towards not using ldrh/strh was mostly about compacting data to achieve a crude version of SIMD-style operations, and the final stmias were mostly a beneficial side effect of that approach.
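
For example, the sort of thing I mean by crude SIMD (a sketch from memory, not my actual code) - a 50% darken of two RGB565 pixels per word:

Code:
/* 0x7BEF masks off the bits that would bleed across channel boundaries
   after the shift; the pair mask just repeats it for both pixels */
#include <stdint.h>

static inline uint32_t darken_two_rgb565(uint32_t two_pixels)
{
    return (two_pixels >> 1) & 0x7BEF7BEFu;
}

void darken_framebuffer(uint32_t *fb_words, int word_count)
{
    int i;
    for (i = 0; i < word_count; i++)
        fb_words[i] = darken_two_rgb565(fb_words[i]);   /* stm-friendly output */
}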

Recently, my ASM coding was almost exclusively focused on optimizing for VFP11, so frankly my knowledge of the main ARM unit's behaviour is somewhat dated; especially with the Cortex-A8, I will most likely end up relearning everything from scratch :)
 