32 Bit Or 64 Bit?


BTW, 64-bit isn't necessarily 'better' than 32-bit. You only get that impression because of the well-known 4 GiB memory limitation of 32-bit Intel chips.

On ARM this is less of an issue, since you usually can't swap hardware, and since we won't have more than 4 GiB of RAM on the Pandora.
 
dflemstr said:
BTW, 64-bit isn't necessarily 'better' than 32-bit. You only get that impression because of the well-known 4 GiB memory limitation of 32-bit Intel chips.

On ARM this is less of an issue, since you usually can't swap hardware, and since we won't have more than 4 GiB of RAM on the Pandora.

Technically there are 36 address pins on Intel (and AMD, for that matter) 32-bit chips, which together with a PAE kernel can address up to 64 GiB (32 GiB on Windows, because someone thought a signed int was a good idea). It lifts the 4 GiB (2 GiB on Windows) limit for the entire system; single applications are still limited to 4 GiB (2 GiB on Windows) each. The drawback, of course, is the mapping table, which takes up almost 1 GiB of memory.
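For the curious, the arithmetic behind those two limits is simple enough to sketch in a few lines of C (the 36 is the physical address pin count mentioned above):

Code:
#include <stdio.h>

int main(void) {
    /* 36 physical address pins -> 2^36 bytes of addressable RAM */
    unsigned long long physical = 1ULL << 36;
    /* 32-bit virtual addresses -> 2^32 bytes per process */
    unsigned long long per_process = 1ULL << 32;

    printf("PAE physical limit:        %llu GiB\n", physical >> 30);    /* 64 */
    printf("Per-process virtual limit: %llu GiB\n", per_process >> 30); /* 4 */
    return 0;
}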
 
There's far more to a "bit" rating than just address space. The question of how many "bits" something has is very vague, especially in the context of game consoles, where marketing has allowed "bits" to mean just about anything.

By the criteria that make the Pandora 32-bit, so are the Xbox, Dreamcast, GameCube, and Wii.
 
zhasha said:
(32 GiB on Windows, because someone thought a signed int was a good idea). It lifts the 4 GiB (2 GiB on Windows) limit for the entire system; single applications are still limited to 4 GiB (2 GiB on Windows) each. The drawback, of course, is the mapping table, which takes up almost 1 GiB of memory.
Recent 32-bit versions of Windows (and older Pro versions) can give processes 3 GB of virtual address space.
And even with a Linux kernel, you can't reach 4 GB of virtual address space per process, because part of it is reserved for the kernel.
So on that particular point, Windows and Linux are equal :)
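If you want to see that limit for yourself, here's a rough sketch (Linux-only, and meant for a 32-bit build; the 64 MiB chunk size is an arbitrary choice): it keeps reserving anonymous mappings until mmap() fails, so the total it reports approximates the usable user address space, which stays below 4 GB because of the kernel's share.

Code:
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    const size_t chunk = 64 * 1024 * 1024; /* reserve 64 MiB at a time */
    unsigned long long total = 0;

    /* PROT_NONE reserves address space without committing any RAM */
    while (mmap(NULL, chunk, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) != MAP_FAILED)
        total += chunk;

    printf("Reservable user address space: ~%llu MiB\n", total >> 20);
    return 0; /* all mappings are torn down on exit */
}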
 
Laurent said:
zhasha said:
(32 GiB on Windows, because someone thought a signed int was a good idea). It lifts the 4 GiB (2 GiB on Windows) limit for the entire system; single applications are still limited to 4 GiB (2 GiB on Windows) each. The drawback, of course, is the mapping table, which takes up almost 1 GiB of memory.
Recent 32-bit versions of Windows (and older Pro versions) can give processes 3 GB of virtual address space.
And even with a Linux kernel, you can't reach 4 GB of virtual address space per process, because part of it is reserved for the kernel.
So on that particular point, Windows and Linux are equal :)

Windows XP. I just had a look at MSDN, and it turns out they've stopped the madness, at least as of Windows 7. What I've seen with XP, though, is that if you put 4 GB of RAM in, you get only a little over 3 GB available, as the kernel switches on PAE, which isn't technically necessary until you go over 4 GB.
And please, all kernels reserve memory and call it "kernel space" in one way or another. It's not exclusive to Linux.
 
Pfft, everyone knows bit count is king; after all, the Atari Jaguar was the first 64-bit machine, and see how great it was?

Discussion pwned ;)

jeff
 
Tempel said:
But this means the Pandora will be susceptible to the year 2038 problem! I dream about my Pandora lasting that long.

The year 2038 problem is a silly, overhyped issue: not nearly as bad as Y2K was hyped to be, though left unfixed it would still be quite horrible. A 32-bit CPU *can* handle numbers larger than 32 bits by using two registers for them (essentially making a 64-bit number). gcc/g++ can already handle this easily, using signed long long.

Code:
[gary@fluffy ~]$ cat a.cpp
#include <stdio.h>

int main(void) {
    signed long currenttime = 2147483645;       // 32 bits on this machine
    signed long long currenttime2 = 2147483645; // 64 bits even on 32-bit hardware

    // The 32-bit counter wraps to negative past 2147483647...
    for (int i = 0; i < 5; i++, currenttime++)
        printf("%ld\n", currenttime);

    printf("\n"); // blank line between the two runs

    // ...while the 64-bit counter just keeps counting.
    for (int i = 0; i < 5; i++, currenttime2++)
        printf("%lld\n", currenttime2);

    return 0;
}

Result when run on a 32-bit machine:
[gary@fluffy ~]$ ./a
2147483645
2147483646
2147483647
-2147483648
-2147483647

2147483645
2147483646
2147483647
2147483648
2147483649

It is only a problem because the Linux kernel still uses signed long as the data type for time. This is probably due to stuck-up pricks maintaining the kernel who refuse to use signed long long because "it isn't standard". It will be fixed *long* before 2038.

Edit: I should mention that just because a 32-bit CPU can handle it doesn't mean it's the best way to do things. I assume most devices will be 64-bit by the time 2038 hits, but that doesn't mean every 32-bit Linux-based device will magically break and die. It just means time-related functions will take a few more CPU cycles to handle...
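As a rough illustration of why it's only a few extra cycles: a 64-bit counter on a 32-bit CPU is just two 32-bit halves with a carry between them. This is hand-rolled here for show; gcc emits the equivalent add/add-with-carry pair for you when you use long long.

Code:
#include <stdio.h>
#include <stdint.h>

/* A 64-bit counter kept as two 32-bit registers' worth of state */
struct time64 { uint32_t lo, hi; };

static void tick(struct time64 *t) {
    t->lo += 1;      /* low-word add... */
    if (t->lo == 0)  /* ...with a carry into the high word on wrap */
        t->hi += 1;
}

int main(void) {
    struct time64 t = { 0xFFFFFFFFu, 0 }; /* one tick before a low-word wrap */
    tick(&t);
    printf("hi=%u lo=%u\n", (unsigned)t.hi, (unsigned)t.lo); /* hi=1 lo=0 */
    return 0;
}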
 
You can build the Linux kernel with 64-bit time, no problem. If a kernel is using 32-bit time, it's the distro's fault.

Laurent said:
Recent 32-bit versions of Windows (and older Pro versions) can give processes 3 GB of virtual address space.
And even with a Linux kernel, you can't reach 4 GB of virtual address space per process, because part of it is reserved for the kernel.
So on that particular point, Windows and Linux are equal :)

You can configure Linux to use a non-split user/kernel arrangement: user programs get the entire 4 GB of address space, and switching to the kernel requires changing address spaces. Copying things from user to kernel or vice versa then requires mapping windows into the kernel space. The disadvantages are obvious, but it still allows for 4 GB of user space.
 
However, I suspect that 32-bit processors will be long outdated in 29 years, when the 2038 bug strikes. I can't find a graph showing a guess at the average amount of PC RAM by year, but judging from ads 7-9 years ago, 256-512 MB was the common amount, and nowadays we usually have 2-4 GB. After some quick calculations in my head, I end up with 128-256 GB being the norm by that time. I'm probably missing the target by an unfathomable distance, so a few more data points for a regression would be nice (a quick fit through the two points above is sketched below). If I am that wrong, someone please correct me...
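Here's that regression as a quick sketch (the data points are rough midpoints of the figures above, so treat the numbers as illustrative; for what it's worth, the fit lands well above the 128-256 GB guess):

Code:
#include <stdio.h>
#include <math.h>  /* link with -lm */

int main(void) {
    /* Rough data points: ~384 MB typical around 2001, ~3 GB around 2009 */
    double y0 = 2001, ram0 = 0.375;
    double y1 = 2009, ram1 = 3.0;

    /* Fit exponential growth ram(t) = ram0 * g^(t - y0), extrapolate to 2038 */
    double g = pow(ram1 / ram0, 1.0 / (y1 - y0));
    double ram2038 = ram1 * pow(g, 2038 - y1);

    printf("growth per year: %.2fx\n", g);              /* ~1.30x */
    printf("typical RAM in 2038: ~%.0f GB\n", ram2038); /* several TB */
    return 0;
}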

Still, it'll be a long way before we fill up the 16 EiB limit of 64-bit addressing...
 
No one can look into a crystal ball and say how much RAM people will be using by then; transistor sizes have to hit a limit eventually, and we'll need another technology, which may be much better or maybe only a little better.

But in desktops, 32-bit processors are already outdated (even though a lot of us are using them to run 32-bit code exclusively), and 64-bit time in Linux is easy regardless of whether the processor is 64-bit or not. If by some chance any are still around in 2037, I'm sure they'll be quickly converted over... the migration path would be pretty simple (a lot more so than Y2K, and we all remember how much of an actual problem that turned out to be afterwards).
 