A_SN posted on Feb 1 2007 at 02:31 PM said:
Gary Miller posted on Feb 1 2007 at 02:21 PM said:
Depending on how accurate you need the result to be, you could drop some fractional bits to reduce the chance of overflow. That might meet your needs, but as always, fixed point math will have some issues with accuracy. If the integer part of the 24.8 times the integer part of the 8.24 will not exceed 32767 or be less than -32768, then you could right shift the 8.24 by 16 to make it an 8.8 and then multiply it by the 24.8 to produce a number that will fit in a 16.16. Based on the number ranges you could play with both or just one of the numbers to reduce the chance of overflow.
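A rough sketch in C of the rescaling trick described in that quote (the function and variable names are mine, purely for illustration):

#include <stdint.h>

/* a is 24.8, b is 8.24. Dropping 16 fractional bits from b turns it into
   8.8, so the product has 8 + 8 = 16 fractional bits, i.e. a 16.16 result,
   provided the integer part of the product stays within -32768..32767. */
static int32_t mul_24_8_by_8_24(int32_t a /* 24.8 */, int32_t b /* 8.24 */)
{
    int32_t b_8_8 = b >> 16;   /* 8.24 -> 8.8, losing the 16 LSBs (assumes arithmetic shift) */
    return a * b_8_8;          /* 24.8 * 8.8 -> 16.16; overflows if the range condition is violated */
}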
Actually my fixed point maths gives results much more precise than with floats, because floats only have about 7 significant digits. I think you're right, I need to determine how much precision I really need. That being said, my final algorithm will be completely different from this one anyways.
rixed: yeah, I guess I could try and make it like it's a 16.16 * 16.16 multiplication instead. I don't really understand the mix of ASM and C (usually I deal with functions in separate .s files), so I think I'll have to compile your function to figure it out.
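For what it's worth, a pure C sketch of a 16.16 * 16.16 multiply, assuming a 64-bit intermediate is acceptable (this is my illustration, not rixed's mixed ASM/C version):

#include <stdint.h>

static int32_t qmul_16_16(int32_t a, int32_t b)
{
    int64_t prod = (int64_t)a * (int64_t)b;   /* 16.16 * 16.16 -> 32.32 */
    return (int32_t)(prod >> 16);             /* back to 16.16, truncating the low bits */
}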
As for your qdiv function, yours is unsigned, mine is signed. Btw, what are your input and output fixed point formats? Did you try both of my qdiv functions? As for the strange results you got, did you make sure that both your inputs were in 24.8 and that your result was in 8.24?
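For reference, a plain C sketch of what a signed divide with 24.8 inputs and an 8.24 result could look like (just an illustration under those format assumptions, not the actual qdiv routine from either poster):

#include <stdint.h>

static int32_t qdiv_24_8_to_8_24(int32_t n /* 24.8 */, int32_t d /* 24.8 */)
{
    /* Pre-shift the numerator in a 64-bit temporary so the quotient comes
       out with 24 fractional bits. No divide-by-zero guard; overflows if
       the quotient does not fit the 8-bit integer part of an 8.24. */
    return (int32_t)(((int64_t)n << 24) / d);
}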
Single precision has 23 stored fraction bits plus an implied 1.0 bit, so effectively 24 bits of precision, with a guard/sticky bit or two used during rounding. Double and extended of course have more (the 80-bit extended format is the one that stores that leading bit explicitly instead of assuming it). You can certainly do a hybrid where you use, say, a full 32 bit word for the significant bits and another for the exponent. Perform the math the same way the soft FPU does and you end up with more precision and less overhead (none of that IEEE garbage that is tacked on). The benefit of an FPU is that it does the normalization fast; in software you have to do it manually (and FPUs consume lots of logic to do the multiplies and divides in fewer clocks). Although I assume you could make a neat table driven thing to save some steps in software, I have not looked at the Sun code that everyone uses in a long while. Also an FPU, soft or hard, keeps the significant bits left justified, where fixed point generally thinks in terms of right justified. Left justified is much easier if you are willing to sacrifice the least significant digits.

I have been thinking about this while watching this discussion and wondering what games could be played at the cost of precision. For example, to keep your 20-32 bits of precision you need to do 40-64 bit multiplies and divides (you only need 33 bits for an add). What if you tossed the precision first, shifted both operands right 16, and only did a 32 bit operation? Basically you have only 16 bits of mantissa. It would be like living in a 16 bit world, but you don't have to worry about things like multiplying 65536 times 7.

A nice thing about float or pseudo-float is that if your divides are by constants then you can turn them into multiplies. Dividing by 7 is the same as multiplying by 1/7, which you can do with a float system and take advantage of the hardware multiply (sure, you can synthesize this in a fixed system too).
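A minimal sketch of that reciprocal trick in a 16.16 fixed-point setting (the constant and function names are mine, for illustration only):

#include <stdint.h>

#define RECIP_7_16_16  ((int32_t)((1LL << 16) / 7))   /* ~1/7 in 16.16, truncated */

static int32_t div_by_7(int32_t x /* 16.16 */)
{
    /* Multiply by the precomputed reciprocal instead of dividing;
       the result is approximate because 1/7 is truncated to 16 bits. */
    return (int32_t)(((int64_t)x * RECIP_7_16_16) >> 16);
}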
It would be a very interesting project to keep the mantissa and exponent separate. Another would be to simply trim the IEEE garbage out of the stock soft FPU. Very few people actually understand the whole IEEE spec and its ramifications, much less use it. Alternate formats are faster; I don't know why someone (gcc) doesn't offer an alternative for soft float situations. Because of IEEE, few if any hard FPUs are bug free: to claim IEEE compliance they have to monitor everything through a wrapper, or trap and handle certain exceptions in software, thus defeating performance for the sake of compliance (or, like Intel, they just let the bugs flow right through to the application; few Intel FPUs pass testfloat).

It took our team about 3-4 years to get a fully IEEE compliant FPU in hardware (passed testfloat level 2). Using the TI floating point format, it took less than three DAYS to design the hardware and the test software and pass a test that exceeded testfloat level 2 (around half a billion test vectors). Granted, we only did multiply and add on that one; give us a month and we could have had div, sqrt, etc. You get an idea of what I am talking about. TI DSPs take the speed approach: most if not all applications would never know about an overflow, underflow, quiet or signaling NaN, infinity, etc. Get rid of all of it. Have overflows give the properly signed max value, and instead of the properly signed infinity on a divide by zero you get the properly signed max value. Quiet and signaling NaNs will drive you crazy, as will the status bits. Most developers don't check for anything: they run their programs, don't get crashes or exceptions, and are happy; when they do get a crash or exception, they add code to avoid that problem. It is no different with a non-IEEE format: if you hit a max value or a zero along the way you will know about it, and you go back and change the code to prevent it from happening. I think the only question is whether you round or truncate, and once you say round you have to ask whether you go to the extremes of IEEE and offer round up, round down and round to zero, or just pick one.
Anyway, the point was that even single precision has more than 20 bits of mantissa. Yes, there are around 8 bits of exponent, that is true, but there are many, many more than 7 bits of precision in any floating point number. Hopefully I did not go on this rant because I misunderstood the "7 significant digits" comment; if so, sorry. I have been pretty good about keeping my mouth shut on this one thus far.
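A quick way to see those 24 significant bits for yourself (my own test snippet, not from the post): a float represents every integer up to 2^24 exactly, but 2^24 + 1 rounds back down.

#include <stdio.h>

int main(void)
{
    float f = 16777216.0f;   /* 2^24, exactly representable in single precision */
    float g = f + 1.0f;      /* 2^24 + 1 does not fit in 24 bits, rounds back to 2^24 */
    printf("%.1f %.1f\n", f, g);   /* both print 16777216.0 */
    return 0;
}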
Note: if/when testing fixed vs float, it is a good idea to use primes and to avoid one and zero, as the FPU will take shortcuts. Also be very careful with your precision:
float a,b,c;
a=b+1.0; is NOT the same as a=b+1.0F;
The second one is faster. In the first, b has to be converted to double, the operation is performed as a double, and the result is then converted back to single to store in a. In the latter, both operands are single, the result is single, and it is immediately stored in a.
Even better:
a=b+1;
Depending on your compiler, integers are exact in C where floats are not, and things like this often happen with C compilers:
a=3.0;
b=2.0;
c=a-b-1.0;
if(c!=0.0) printf("something is wrong\n");
Probably not with those specific constants, but very often you will not get the number you expect. The comparison in the if can end up being a bit level check, not a value check. IEEE allows for plus and minus zero; if the result is minus zero and your hand coded constant above is turned into a plus zero by the compiler, you may not get the result you expect and may never figure out why (the TI format has a similar problem with zero). This doesn't only happen with zero.
So what you should do is develop a habit of:
int i;
i=(int)c; if(i!=0) printf("something is wrong\n");
or
if(((int)c)!=0) printf("something is wrong\n");
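Here is a self-contained version of those fragments, assembled so it compiles as-is (the layout and wording of the messages are my reconstruction of the example above):

#include <stdio.h>

int main(void)
{
    float a, b, c;
    int i;

    a = 3.0f;
    b = 2.0f;
    c = a - b - 1.0f;   /* mathematically zero; with these constants it is exactly +0.0,
                           but in general the result could be minus zero or slightly off */

    if (c != 0.0f) printf("direct compare: something is wrong\n");

    i = (int)c;         /* the cast discards the sign of zero */
    if (i != 0) printf("integer compare: something is wrong\n");

    return 0;
}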
Sorry for the tangent there; years of floating point work and frustration with the compilers. I realize I am surprised that anyone ever gets floating point to work in C; the odds of the compiler doing what you asked are low when it comes to floating point. This is relevant, I guess: the early posts all said look at the assembler, and the more you look at the assembler for float code, the more you realize what each compiler does and doesn't do. Also remember that according to the testfloat guy (Hauser, I think it was), most of the bugs we see in the hard FPUs today (Intel for example) are in the precision conversions. So make sure that F is on the end of all single precision constants and not there on any double constants: speed improvements in soft FPUs and bug reduction in hard FPUs.
Have fun...