The IEEE 754 compliant FPU in Cortex-A8 is not pipelined; yes, it runs in parallel with the integer pipes, but the FPU operations themselves cannot be overlapped, and a simple add will take a minimum of around 8 cycles. If you can make this much integer work independent of the FPU work then the floating point computations were probably never that important to begin with. The limitations in pairing on the integer pipes are nothing compared to this.
true, i did bother to check the instruction timing specs of the a8, and its vfp v3 is nothing to write home about*, though it does have the option to delegate work to the neon (nfp) pipeline (fast mode), which improves things a little. unsurprisingly, neon simd does appear to be the way to go on the a8 (particularly for spatial (vector) arithmetic it'd be a no-brainer), but for that you can still use good old c floats.
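to make the 'good old c floats' point concrete, here's a minimal sketch (the gcc flag spellings are meant as an illustration of the 4.4-era toolchain, so check your own compiler's docs):

/* plain single-precision c, compiled e.g. with:
 *
 *   gcc -O3 -std=c99 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp \
 *       -ftree-vectorize -ffast-math scale.c
 *
 * note -ffast-math (or -funsafe-math-optimizations): neon is not fully
 * ieee 754 compliant, so gcc will only route float arithmetic through it
 * once strict compliance is relaxed. without it the same code still
 * builds and runs, just on the scalar vfp.
 */
void scale(float * restrict dst, const float * restrict src, float k, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * k;    /* stays single precision; doubles would
                                   fall back to the (slower) vfp */
}

which brings us to..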
QUOTE
The NEON SIMD in Cortex-A8 is obviously not useless, but assuming that the compiler will be able to produce good auto-vectorized code, especially early in the game, is naive. This is especially not going to happen with doubles, which I suppose is another good reason to back away from them. Right now I don't even think the compiler can auto-vectorize, at least not the version that isn't problematically buggy.
auto-vectorization is presently a domain in turmoil, but it is clearly the way to go in the mid to long term. that said, even today there are good vectorizing compilers. gcc 4.4 is improving in leaps and bounds (re auto-vectorization in general), and even though its neon support may still be in the oven, one can check the current rvct (arm's official compiler suite) for a taste of what's to come from neon -
http://www.jp.arm.com/event/pdf/forum2007/t1-5.pdf - Tatsuya Kobayashi's Cortex–A8 and NEON Field Application Engineering presentation, 16th Oct 2007 - check page 29.
also, staying with c floats, and being auto-vectorization-conscious, is the way to go if you're concerned in the slightest about your app's portability (i.e. it not suffering on other platforms where (simd) fpus are doing great in comparison), but that's somewhat beyond the topic of this thread.
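a sketch of what 'auto-vectorization-conscious' means in practice, staying in plain c (the names and layouts are made up for the example, and nothing in it is neon-specific, which is exactly the point - the same source vectorizes on sse, altivec, etc.):

/* array-of-structures: natural to write, but the x/y/z interleaving
   makes life hard for most auto-vectorizers */
typedef struct { float x, y, z; } vec3;

/* structure-of-arrays: each component is a contiguous, unit-stride float
   stream with no aliasing, i.e. the loop shape vectorizers handle well
   across platforms. computes out = a + b * t over n vectors. */
void vec3_madd_soa(float * restrict ox, float * restrict oy, float * restrict oz,
                   const float * restrict ax, const float * restrict ay, const float * restrict az,
                   const float * restrict bx, const float * restrict by, const float * restrict bz,
                   float t, int n)
{
    for (int i = 0; i < n; ++i) {
        ox[i] = ax[i] + bx[i] * t;
        oy[i] = ay[i] + by[i] * t;
        oz[i] = az[i] + bz[i] * t;
    }
}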
QUOTE
You use language such as "required" when referring to "full blown arithmetic" in a "specially intensive game engine"; the fact is, these kinds of games have been done on fixed point HARDWARE that does not allow for dynamic ranges at all. So it isn't required at all.
well, strictly speaking, nothing is 'required' besides a tape-equipped turing machine, right? almost everything i've said in this thread has been about development convenience, or programmer efficiency, if you wish. in this regard, floats in the context of 'full blown arithmetic' will be 'required' to the degree that the programmer will most likely wish they had floats under those scenarios. the simple fact is, floats are a better abstraction for those purposes, just as high-level languages are a better abstraction vis-a-vis assembly for the greater part of today's programming tasks, allowing programmers not to care about things that ultimately do not affect the end quality of their software.
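to put the 'better abstraction' point in concrete terms, here's a toy comparison of the same multiply in 16.16 fixed point and in float (illustrative only, not anyone's actual engine code):

#include <stdint.h>

typedef int32_t fx16;    /* 16.16 fixed point */

/* fixed point: the programmer owns the format choice, the widened
   intermediate, the re-scaling shift, and the overflow/range analysis
   for every expression in the engine */
static inline fx16 fx_mul(fx16 a, fx16 b)
{
    return (fx16)(((int64_t)a * (int64_t)b) >> 16);
}

/* float: range and scaling are the hardware's problem */
static inline float f_mul(float a, float b)
{
    return a * b;
}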
QUOTE
Yes, having floating point hardware is better, I'm not arguing that. And ARM has addressed this with NEON. But if you need IEEE compliant doubles that you're guaranteed to at least get from the compiler, then you'll get something that may not be good enough. The VFP unit is "medium performance", which might not cut it. It's something cheap and there mainly for compatibility purposes. Just because it's there doesn't mean that it's the best solution.
see, i'm not arguing about floats being best (performance-wise) on the a8 - i'm arguing about them being adequate, in which case they win from a convenience perspective ; )
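for what 'adequate plus convenient' looks like in source form, the main discipline is just keeping things genuinely single precision, so nothing gets silently promoted to double (slower still on the vfp, and out of neon's reach). a small sketch, with names invented for the example:

#include <math.h>

float wrap_angle(float a)
{
    /* 6.2831853 without the suffix would be a double literal and drag
       the whole expression into double precision; the 'f' keeps it single */
    const float two_pi = 6.2831853f;

    a = fmodf(a, two_pi);    /* use the f-variants: fmodf, sinf, sqrtf, ... */
    if (a < 0.0f)
        a += two_pi;
    return a;
}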
* not the case with arm v6's vfp11, which is a worthy performer, as found in the arm1176.