GP2X Errors That Affect Us


"char" etc isn't an error, and isn't an unknown rare problem :) (ie: you coudl go listing 700 pages of C++ FAQ ifd you're going tom include stuff like that ;)

ie: 'char' is almost always 'signed', but not always; it is usually best to typedef to something and use that.

ex: I usually use Int32, Int16, Int8, UInt32, UInt16, UInt8 as they're not in conflict with various other pseudo-standard ones (such as uint_32 and _uint and such), and as such are easily defined.

I think it pretty much is an error for a compiler to be built with 'char' being unsigned though since simple types tend to be signed. ie: "int" is signed until you add 'unsigned' to it, so char should be as well. Still, the spec doesn't say it..

jeff

ie: "Int8" is "signed char" in a typedef, for the curious
 
I almost always use u32, s32, u16, s16, u8, s8 rather than the normal types. Too lazy to type the full thing. :p
 
TKF15H posted on Feb 6 2006 at 05:35 PM said:
I almost always use u32, s32, u16, s16, u8, s8 rather than the normal types. Too lazy to type the full thing. :p

ditto, I'm a lazy bast :)

Sometimes though, I just typedef things like x, y, z, as it's even easier, but it's a nightmare when you come back to it a month later and start wondering if they are variable names or types :D

eg.

at the start of the program as globals:
z *p,*q;

much further down in a well nested function:
p=(z*)q;

lovely :)
 
The Uint32 things are already defined by SDL if you want to use them.

And otherwise, the 'char' thing is very obscure. You should only be using it for string characters anyway, and then who cares if it is signed or unsigned?
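
For reference, a rough sketch of using the SDL typedefs; it assumes SDL 1.2 with SDL.h on the include path (the type names come from SDL's own headers), and the variable names are just made up for illustration:

#include <SDL.h>   /* pulls in SDL's Uint8/Sint8 ... Uint32/Sint32 typedefs */

Uint16 pixel;          /* one 16-bit framebuffer pixel, unsigned */
Sint32 score;          /* 32-bit signed counter */
Uint8  palette_index;  /* 8-bit, unsigned */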
 
RiX0R posted on Feb 6 2006 at 06:12 PM said:
The Uint32 things are already defined by SDL if you want to use them.

/usr/include has a few greps worth grok'ing too, yo ..

And otherwise, the 'char' thing is very obscure. You should only be using it for string characters anyway, and then who cares if it is signed or unsigned?

'only be using' chars for chars? ARM doesn't just do 32-bit, you know ..
 
torpor posted on Feb 6 2006 at 12:15 PM said:
'only be using' chars for chars? ARM doesn't just do 32-bit, you know ..

Doesn't it just sign-extend anything less than 32-bit?
 
At the end of the day, if you can do 32-bit maths for the same cost as 8-bit maths, you'd only want to explicitly use 8-bit if you were relying on its wrapping behaviour, or if you were storing a huge number of them in an array or packed struct, to help your cache coherency. (Memory usage itself is unlikely to be significant on this system, we're rolling in it.)

Note that most platforms seem to pad chars to 32 bits of memory in most contexts (e.g. function arguments, local variables, probably global variables too). So unless you really want some special property of the char, just use an int.

Character constants like 'X' are ints anyway. :)
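
If you want to check the 'character constants are ints' claim on your own toolchain, a tiny test along these lines will do (this holds for C; in C++ 'X' has type char):

#include <stdio.h>

int main(void)
{
    /* In C a character constant has type int, so the first two should match. */
    printf("sizeof 'X' = %u, sizeof(int) = %u, sizeof(char) = %u\n",
           (unsigned)sizeof 'X', (unsigned)sizeof(int), (unsigned)sizeof(char));
    return 0;
}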
 
ARM has instructions for handling bytes, words, and half-words, but you're best off using words whenever possible, as it's the native type. Also, don't forget the sign is bit 31, so if you want to subtract numbers/etc in byte or half-word format and get a proper sign, you're going to have to left shift them by 24 or 16 bits first, and then right shift the result by the same amount.

Saying that though, bytes still have their uses. For example, if I have a nice buffer full of bytes, it's far faster to clear than if I'd stored the same data as words, as each word write clears 4 of the byte elements rather than just 1.

Same goes for frame buffers, use half-words, but treat them as words for higher speed, just make sure you align the buffer.
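
A rough sketch of both tricks, the shift-pair sign extension and the word-at-a-time clear of a byte buffer. The function names are made up, and it assumes the buffer is word-aligned with a length that's a multiple of 4, and that signed right shifts are arithmetic (as they are with GCC on ARM):

#include <stddef.h>
#include <stdint.h>

/* Sign-extend a half-word held in a 32-bit register: shift left 16, then
   arithmetic shift right 16 (use 24 for a byte). */
static int32_t sign_extend_16(uint32_t x)
{
    return (int32_t)(x << 16) >> 16;
}

/* Clear a byte buffer four bytes at a time. Assumes buf is word-aligned
   and len is a multiple of 4; otherwise fall back to a byte loop. */
static void clear_words(unsigned char *buf, size_t len)
{
    uint32_t *p = (uint32_t *)buf;
    size_t n = len / 4;
    while (n--)
        *p++ = 0;
}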
 
Squidge posted on Feb 7 2006 at 12:17 PM said:
ARM has instructions for handling bytes, words, and half-words, but you're best off using words whenever possible, as it's the native type. Also, don't forget the sign is bit 31, so if you want to subtract numbers/etc in byte or half-word format and get a proper sign, you're going to have to left shift them by 24 or 16 bits first, and then right shift the result by the same amount.

What representation is used for integers?
One's complement, two's complement? Sign + magnitude?

Is it faster doing things this way? Is it because of the memory addressing?

What about parallel arithmetic operations?
 