File System(s) On The GP2X


1024 is the closest power of two to 1000, and it was often misused such that it became the norm to refer to a kilobyte as 1024 bytes.

Though I was incorrect in saying that 2^10 was easier for the computer to count (1 byte + 2 bits is far from efficient use). :D

Other way around for both...
 
It's 1024, because 1024 == 2^10. Everyone should know that. It makes sense because computers count in binary, not our silly decimal.

(Personally I think that we should use a Base-12 system, because it factors well, but it would be hard to make the switch.)
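
A couple of lines of C make the power-of-two point concrete (just an illustration, nothing GP2X-specific):

#include <stdio.h>

int main(void)
{
    unsigned int kb = 1u << 10;      /* shift 1 left by ten bits: 2^10 */
    printf("1 << 10 = %u\n", kb);    /* prints 1024 */
    printf("in hex  = 0x%X\n", kb);  /* prints 0x400 */
    return 0;
}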

"Those who don't know the past are condemned to repeat it." [George Santayana]

The Babylonians used a base-12 system. That is why we have 12 hours of daylight
/ 12 hours of night (close to the equator). The duration of an hour was variable.

And the English decided there should be 12 inches in a foot.

Actually, 60 is a better base due to its factorability: it divides evenly by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30.
As in 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle.
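
If anyone wants to check those divisor lists, a throwaway C snippet will do it (purely illustrative):

#include <stdio.h>

/* Print every number that divides n with no remainder. */
static void list_divisors(unsigned int n)
{
    printf("%u divides by:", n);
    for (unsigned int d = 1; d <= n; ++d)
        if (n % d == 0)
            printf(" %u", d);
    printf("\n");
}

int main(void)
{
    list_divisors(10);   /* 1 2 5 10 */
    list_divisors(12);   /* 1 2 3 4 6 12 */
    list_divisors(60);   /* 1 2 3 4 5 6 10 12 15 20 30 60 */
    return 0;
}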

Digital Equipment Corporation DID make a 12-bit minicomputer in the early days (the PDP-8).
And they counted in octal (0,1,2,3,4,5,6,7) to make use of familiar digits.
Using hexadecimal came later but I don't know why :)
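
For what it's worth, here is the same value written out in the different notations; a quick C sketch, illustrative only:

#include <stdio.h>

int main(void)
{
    unsigned int v = 2748;
    printf("decimal: %u\n", v);    /* 2748 */
    printf("octal:   %o\n", v);    /* 5274 - one octal digit per three bits */
    printf("hex:     %X\n", v);    /* ABC  - one hex digit per four bits */
    printf("binary:  ");
    for (int bit = 15; bit >= 0; --bit)      /* printf has no binary format, so build it by hand */
        putchar(((v >> bit) & 1u) ? '1' : '0');
    putchar('\n');                 /* 0000101010111100 */
    return 0;
}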

The term "word" was / is used to signal the width of the data bus in a computer.
The term "byte" is a play on "bite", as in part (half) a word.
The term "nybble" is a further play on "nibble", as in half a byte.

While "byte" has come to mean 8 bits, "word" is still open to interpretation, as
in "16 bit word", "32 bit word", or "64 bit word".

Much of the non-English-speaking world (and the networking standards) uses "octet" as a
substitute for "byte". "Octet" is derived from the Latin root for 'eight'.

IBM coded the upper-case English alphabet in 6 bits (Binary Coded Decimal
Interchange Code, with roots in earlier punched-card and telegraph codes), and
then extended it to 8 bits to include lower case (Extended Binary Coded Decimal
Interchange Code, EBCDIC). It was a confusing system with gaps and duplications.
The American Standard Code for Information Interchange (ASCII) was standardized
in the 1960s and adopted through the 1970s by the rest of the computing industry.
It used 7 bits for data, with the 8th bit commonly used for parity - hence the
pervasive need for 8-bit entities.
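
As a rough sketch of what that parity bit looks like in practice (even parity here; real serial links varied), in C:

#include <stdio.h>

/* Count the 1-bits in the 7 data bits and set the 8th bit so the total is even. */
static unsigned char add_even_parity(unsigned char c)
{
    unsigned char ones = 0;
    for (int bit = 0; bit < 7; ++bit)
        ones += (c >> bit) & 1u;
    return (ones & 1u) ? (unsigned char)(c | 0x80) : (unsigned char)(c & 0x7F);
}

int main(void)
{
    printf("'A' with even parity: 0x%02X\n", add_even_parity('A'));  /* 0x41: already even */
    printf("'C' with even parity: 0x%02X\n", add_even_parity('C'));  /* 0xC3: parity bit set */
    return 0;
}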

Nowadays the computer industry is less Anglo-American centric, so the code
system has been extended to Unicode. It started out as a 16-bit code but has
since grown to more than a million code points, stored as UTF-8, UTF-16 or
UTF-32, covering the world's current alphabets and punctuation.
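
How many bytes a character takes now depends on the encoding; here is a small C sketch of one code point in UTF-8 (the bytes are hard-coded just for illustration):

#include <stdio.h>

int main(void)
{
    /* 'e acute' is Unicode code point U+00E9; UTF-8 stores it as two bytes. */
    const unsigned char e_acute[] = { 0xC3, 0xA9 };
    printf("U+00E9 as UTF-8:");
    for (size_t i = 0; i < sizeof e_acute; ++i)
        printf(" 0x%02X", e_acute[i]);
    printf("\n");
    return 0;
}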

'kilo' is the prefix for units of 1000 in the metric / scientific world.
'kilo' has traditionally meant units of 1024 in the computer world (the binary
prefix 'kibi', as in KiB, was introduced later to keep the two apart).
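
That gap is exactly why a card sold in decimal "GB" looks smaller once a tool reports it in 1024-based units; a rough C illustration (the size is made up, not tied to any particular card):

#include <stdio.h>

int main(void)
{
    unsigned long long bytes = 1000ULL * 1000 * 1000;           /* "1 GB" as sold (decimal) */
    printf("decimal kilobytes: %llu\n", bytes / 1000);          /* 1000000 */
    printf("binary  kilobytes: %llu\n", bytes / 1024);          /* 976562  */
    printf("binary  megabytes: %llu\n", bytes / (1024 * 1024)); /* 953     */
    return 0;
}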

Hope that helps.
 