Nothing good will ever come of Exophase.
I just make my own u32, s32, etc. It's the most concise and it's pretty common.
> I think floating-point arithmetic in general should be avoided. Sure, there are good uses of it, but too often people use floats just to represent rationals or fixed-point numbers. Floats are only useful if you need to be able to represent a large dynamic range, where it suffices to have good relative precision, where you don't need to do a lot of arithmetic manipulation on the numbers (because that all too easily leads to cumulative errors), and where you don't rely on the rounding being the same on all hardware.

Depending on the hardware and circumstances, floats can be faster than ints while offering enough precision to avoid errors. It really depends on your needs.
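To illustrate the cumulative-error point in the quote above, here is a minimal sketch (my own example, not from anyone's post): adding 0.1f a million times drifts noticeably, while counting tenths as integers stays exact.

```cpp
#include <cstdio>

int main() {
    // Accumulate "one tenth" a million times with binary floats.
    float f = 0.0f;
    for (int i = 0; i < 1000000; ++i)
        f += 0.1f;            // 0.1 is not exactly representable in binary

    // Same accumulation in fixed point: count tenths as integers.
    long long tenths = 0;
    for (int i = 0; i < 1000000; ++i)
        tenths += 1;          // exact

    std::printf("float sum: %f\n", f);   // noticeably off from 100000
    std::printf("fixed sum: %lld.%lld\n", tenths / 10, tenths % 10); // exactly 100000.0
    return 0;
}
```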
> I think it's a good thing that in C, "int" is something the size of a word. It makes the most sense to use the word size by default, and to use specific intN_t types in case you need to be sure about the range of the int.

What particular definition of 'word' are you referring to? With very few exceptions, int is 16 or 32 bits wide by default, even on most 64-bit and on several 8-bit (e.g. AVR) platforms. On x86, 'word' is even used as a fixed term for 16-bit integers.
> What particular definition of 'word' are you referring to? With very few exceptions, int is 16 or 32 bits wide by default, even on most 64-bit and on several 8-bit (e.g. AVR) platforms. On x86, 'word' is even used as a fixed term for 16-bit integers.

I don't think he means any particular definition at all, but the size of the general purpose registers on the target CPU.
> C99 introduced types like int32_t you can use.

Yes, this is what we use: stdint.h for everything non-MS, and some typedefs for MS. The only other type we use a fair bit is size_t, but that is really more to work with other libraries (the STL being the biggest and most common one).
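A minimal sketch of the kind of wrapper header being described, assuming an old MSVC that lacks <stdint.h>; the file name and guard are made up for illustration.

```cpp
// types.h - fixed-width integer typedefs, assuming an old MSVC without <stdint.h>.
#ifndef TYPES_H
#define TYPES_H

#if defined(_MSC_VER) && _MSC_VER < 1600   // VS2010 was the first MSVC to ship <stdint.h>
typedef signed   __int8   int8_t;
typedef signed   __int16  int16_t;
typedef signed   __int32  int32_t;
typedef signed   __int64  int64_t;
typedef unsigned __int8   uint8_t;
typedef unsigned __int16  uint16_t;
typedef unsigned __int32  uint32_t;
typedef unsigned __int64  uint64_t;
#else
#include <stdint.h>                        // C99 / modern compilers
#endif

#endif // TYPES_H
```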
> I think floating-point arithmetic in general should be avoided.

I am obviously taking this out of context from your post, and the examples in your post certainly show where int can (and probably should) be used over float. But the idea that floating-point arithmetic should be avoided isn't really true. Maybe it is more true in certain types of applications (a Visual Basic form-based application, maybe), but in lots of cases float is the best format to use. We are currently doing some work with Leap Motion (it tracks hand/finger positions with a cheap little box), which involves a lot of working with floating point numbers: using trig, giving objects positions/heights, etc. Sure, it is possible to write a fully fixed-point engine, but that isn't what happens in games on PC/consoles any more (and hasn't been for a long time); it's all floating-point vectors/matrices/quaternions/etc. So I believe the much better advice is: use the right data type for the right purpose! Use type-safe enums over ints[*], use ints for indexing or counting discrete quantities, use float for anything that requires decimals, avoid double unless you have a real use case for it (really needing high accuracy), use quaternions over matrices if you need to do blending, etc.
> Maybe it is more true in certain types of applications (a Visual Basic form-based application, maybe)

Why would a form-based VB app be any different? If you're talking about WinForms-based applications (regardless of language), then the use case is different to that of a game, and the performance of floating point work is nowhere near as relevant, since the processing is nowhere near as calculation-intensive.
> [*] the C (over C++) type of people may disagree and say use ints all the time over enums, but in my opinion this is just all-round bad

I generally use proper enum typing in C++, but it's a pain to bother with in C. Mind you, most of the coding I do on my own time is C/ASM. I see it as just one of those things you have to be careful with and know what you're doing with, like macros.
> This really makes me wonder why there isn't more standardization over whether an int should be 32 bits or 64 bits, if it doesn't make a performance difference in terms of processing time which one you use. I tend to avoid just the plain old int when I can, simply because I dislike allocating more bits than are really necessary for the variable, especially in cases where I know for a fact that I'm never going to have a number larger than that, save some sort of programming error.

What I'm saying is that you should work with the word size of the architecture you're working on, for reasons of efficiency. What are you referring to as a thing that was somewhat true previously but isn't anymore? Do you mean why you should use int instead of smaller datatypes for local variables? If that's the case, I already explained why.
I wonder if this is one of those things that was somewhat true previously, but has been passed on as common knowledge since. I've seen it cropping up from time to time, and now that I think of it, I can't recall there ever being any explanation given as to why it would be the case.
I can see the mentality behind "int": they wanted something that is guaranteed to be at least a certain width but that may still be implemented with a larger one. Consider when x86 switched from 16-bit to 32-bit. You could still use 16-bit operations, but they were more expensive because they needed an operand-size prefix. It'd be good to know that code didn't rely on the calculations being 16-bit (as most code wouldn't), which is what the int type should have meant. That wouldn't have stopped people from relying on it anyway, though.
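For what it's worth, C99's <stdint.h> spells out exactly that "at least this wide, wider allowed" idea. A minimal sketch (my own illustration, assuming a C99/C++11 toolchain):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // "At least 16 bits, but the compiler may pick something wider/faster":
    int_fast16_t  counter = 0;   // often 32 or 64 bits on modern targets
    int_least16_t small   = 0;   // smallest type with >= 16 bits

    // "Exactly 16 bits, wrap-around and all":
    uint16_t exact = 0xFFFF;
    exact += 1;                  // guaranteed to wrap to 0

    std::printf("fast16 is %zu bytes, least16 is %zu bytes, exact wrapped to %u\n",
                sizeof(counter), sizeof(small), (unsigned)exact);
    return 0;
}
```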
> What I'm saying is that you should work with the word size of the architecture you're working on, for reasons of efficiency.

It is important, though. If you have a really hot array of structs that takes up 64KB because you sized everything as int, but would have taken 16KB if you had used 8-bit variables, then that can make a huge difference for performance in the right circumstances. If you're going for high efficiency you should try to be at least somewhat conscious of your data types and how they impact locality of reference, which means trying to minimize size, trying to keep things grouped together, and trying to work on relatively small batches.
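A quick sketch of the size difference being described; the struct and its fields are hypothetical, and the numbers assume a typical 4-byte int.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical "hot" per-entity record, once with plain ints...
struct EntityWide {
    int health;     // values 0..100
    int armor;      // values 0..100
    int team;       // values 0..3
    int flags;      // 8 independent bits
};

// ...and once sized to what the data actually needs.
struct EntityNarrow {
    uint8_t health;
    uint8_t armor;
    uint8_t team;
    uint8_t flags;
};

int main() {
    // With 4-byte ints: 16 bytes per entity; with uint8_t: 4 bytes per entity.
    // Over a 4096-entity array that's 64KB vs 16KB, i.e. far fewer cache misses
    // when iterating over the whole array.
    std::printf("wide: %zu bytes, narrow: %zu bytes per entity\n",
                sizeof(EntityWide), sizeof(EntityNarrow));
    std::printf("4096 entities: %zu KB vs %zu KB\n",
                4096 * sizeof(EntityWide) / 1024, 4096 * sizeof(EntityNarrow) / 1024);
    return 0;
}
```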
And it may very well no longer be the case, as the move from 32 bits to 64 bits was much less problematic than the move from 8 bits to 16 bits or the move from 16 bits to 32 bits.
Not that this is terribly important at this stage if it's primarily a matter of memory usage.
> float speed = 10.0f;
> If you don't write the f, the 10.0 is seen as a double value and it will be converted (cast) into a float at runtime, which costs a few cycles (at least if the compiler is not optimizing it away).

I never write 10.0f. If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.
> > use float for anything that requires decimals
> I don't agree: if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats, because when the amount becomes large enough, you'll lose the precision to store the decimals.

I think you meant to say that it's not a good idea to use binary floating point numbers. Using decimal floating point numbers is perfectly acceptable in these cases.
> > I don't agree: if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats, because when the amount becomes large enough, you'll lose the precision to store the decimals.
> I think you meant to say that it's not a good idea to use binary floating point numbers. Using decimal floating point numbers is perfectly acceptable in these cases.

No, using fixed point numbers is what you need in those cases. The base does not matter - and decimal is very rare anyway, although binary-coded decimal (4 bits per digit) has historically been used.
I see your point, but...

> > float speed = 10.0f;
> > If you don't write the f, the 10.0 is seen as a double value and it will be converted (cast) into a float at runtime, which costs a few cycles (at least if the compiler is not optimizing it away).
> I never write 10.0f. If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.
What's the point of doing manual optimization so that your code runs faster on a non-optimizing compiler? First use an optimizing compiler!
As for float vs double, don't forget to use suitable optimization options for the Pandora, as featured in my sig! This can make a huge difference.
> The base does matter. Especially with money you don't want to use binary fractions, but decimal fractions.

The base does not matter if you're using a representation based on integers with an implicit divider (e.g. store 100 times the money amount instead of the amount itself).
> if you need a fixed number of decimals (e.g. 2 in the case of money) and you want to work with exact numbers, it's not a good idea to use floats because when the amount becomes large enough

Well, you might start with exact units of currency, but then at some point need half a unit (50% off a 0.99 unit price). Depending on your use case, you might want 50% off 0.99 to leave you with 0.49 (you can't charge your customer a fraction), while in other cases you might wish to keep the full accuracy (for example when the 50% off is part of a larger calculation, as in "I give a third party 30% of the revenue my product makes after 11% of advertising overheads are subtracted"). So I'd really just stick with my advice of using the right data type for the right job. On the whole, I certainly agree there can be cases where int is good for working with money, no argument there. My argument that "if you need decimals then use float" isn't right on these grounds, so I guess that rule needs tweaking a little!
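A minimal sketch of the integer-with-implicit-divider idea from the posts above, applied to the 0.99 example; the function name and the rounding policy are illustrative choices, not anything prescribed in the thread.

```cpp
#include <cstdint>
#include <cstdio>

// Money stored as an integer count of cents (implicit divider of 100).
using Cents = std::int64_t;

// Apply a percentage with an explicit round-to-nearest policy, instead of
// relying on whatever the binary float rounding happens to produce.
Cents applyPercentRounded(Cents amount, int percent) {
    return (amount * percent + 50) / 100;   // +50 rounds to the nearest cent
}

int main() {
    Cents price    = 99;                              // 0.99 in currency units
    Cents discount = applyPercentRounded(price, 50);  // 50 cents off, by explicit choice
    Cents charged  = price - discount;                // 49 cents

    std::printf("50%% off 0.99 -> 0.%02lld charged\n", (long long)charged);
    return 0;
}
```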
> I never write 10.0f. If the compiler isn't "optimizing it away", I should not be using such a piece of shit compiler.

Well, you need to write the .f suffix to specify whether you want a double or a float number. Consider num / 12.0: do you want to perform double-precision or single-precision division here? The compiler can't simply optimize it by assuming you always want single-precision floats (unless you tell it to, and it supports flags that allow this). It is like float result = num / 12: in this case, if num is integral, you'll get an integer division of num by 12, which is quite different from a floating-point division. On top of this, consider that function overloading means two different functions can be invoked depending on whether you pass a float or a double, e.g. MyFunction( 1.0f ) and MyFunction( 1.0 ) can call different pieces of code. Sure, you might never do this yourself, but what about the libraries you are calling?
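A small sketch of the pitfalls just described; MyFunction is the overloaded name used in the post, everything else is made up for illustration.

```cpp
#include <cstdio>

// Hypothetical overload pair: which one runs depends only on the literal's type.
void MyFunction(float x)  { std::printf("float overload:  %f\n", x); }
void MyFunction(double x) { std::printf("double overload: %f\n", x); }

int main() {
    int num = 7;

    double d = num / 12.0;    // double-precision division: 0.5833...
    float  f = num / 12.0f;   // single-precision division
    float  i = num / 12;      // integer division first, THEN conversion: 0.0

    std::printf("%f %f %f\n", d, f, i);

    MyFunction(1.0f);         // calls the float overload
    MyFunction(1.0);          // calls the double overload
    return 0;
}
```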
> Why would a form-based VB app be any different? If you're talking about WinForms-based applications (regardless of language), then the use case is different to that of a game, and the performance of floating point work is nowhere near as relevant, since the processing is nowhere near as calculation-intensive.

I selected form-based VB apps because my guess is that if I Google for the first form-based VB apps I can find and look at the data types being used, there is a sporting chance they will have fewer floating point numbers than the code I typically work on. But this is not substantiated at all, and your point is valid. Really though, all I was saying is that some types of coding may use fewer floats than others, which is fine. There was certainly no intent to 'diss' VB or form applications.
> I generally use proper enum typing in C++ but it's a pain to bother with in C. [...] I think the lack of actual type checking in C makes it borderline pointless anyway.

Sounds like you fight the good fight. The pointlessness in C is true, and some 'hardcore C' coders take the same practices to C++, where typedef enums can make coding easier (in terms of knowing what to pass to a function, and also being told when you have made an error). I can't tell you how many times I have seen (sometimes written myself, sometimes written by others) OpenGL C function calls where an 'enum' value passed is completely wrong, with no compile-time error thrown, and it can often be hard to spot when looking through the source (especially if the value being passed looks on the face of it to be sensible).
> Sounds like you fight the good fight. The pointlessness in C is true, and some 'hardcore C' coders take the same practices to C++, where typedef enums can make coding easier (in terms of knowing what to pass to a function, and also being told when you have made an error). I can't tell you how many times I have seen (sometimes written myself, sometimes written by others) OpenGL C function calls where an 'enum' value passed is completely wrong, with no compile-time error thrown, and it can often be hard to spot when looking through the source (especially if the value being passed looks on the face of it to be sensible).

If it helps any, I generally name my enum values very.. descriptively. To the point where it becomes really obvious if you use them out of context.
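On the C++ side of this, a minimal sketch of how a scoped enum catches that kind of mix-up at compile time; the GL-flavoured names are made up for illustration.

```cpp
#include <cstdio>

// Plain C-style "enums" are really just ints, so nothing stops you from
// passing a blend factor where a texture filter is expected.
enum { FILTER_NEAREST = 0, FILTER_LINEAR = 1 };
enum { BLEND_ZERO = 0, BLEND_ONE = 1 };

void setFilterOld(int filter) { std::printf("filter=%d\n", filter); }

// C++11 scoped enums are distinct types: the same mix-up no longer compiles.
enum class Filter { Nearest, Linear };
enum class Blend  { Zero, One };

void setFilterNew(Filter f) { std::printf("filter=%d\n", static_cast<int>(f)); }

int main() {
    setFilterOld(BLEND_ONE);        // compiles fine, silently wrong
    setFilterNew(Filter::Linear);   // OK
    // setFilterNew(Blend::One);    // error: no conversion from Blend to Filter
    return 0;
}
```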