Our New Machine, Pandora

Speaking of the OS and installing, we should make it so that the firmware and other important things don't require an SD card at all.

Actually, it should only require WiFi or a USB connection. A queue of sorts could be kept, and on restart the user would confirm before the firmware actually gets written (an allow/disallow step, to guard against viruses and other bad firmware writes).

I'm mentioning this very specifically because of the concern over SDHC support, which might have to be updated in the future. Nobody knows whether 16 GB or 32 GB cards, or even larger ones, will work.
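
Something like this is what I have in mind - just a sketch, not any confirmed design, and every path and helper function in it is made up for illustration:

Code:
/* Sketch of a staged firmware queue: the image arrives over WiFi/USB,
 * sits in a queue file, and is only flashed after the user confirms at
 * the next boot. Paths and helpers are invented, not a real design.  */
#include <stdio.h>

#define QUEUED_IMAGE "/var/firmware/pending.img"

/* Stub: verify a checksum shipped alongside the image. */
static int checksum_ok(const char *path) { (void)path; return 1; }

/* Stub: ask the user at boot; would really read a button press. */
static int user_confirms(const char *msg) { printf("%s [y/n]\n", msg); return 1; }

/* Stub: write the verified image to internal flash. */
static int write_to_flash(FILE *img) { (void)img; return 0; }

/* Run early at boot, before the main OS starts. */
int apply_pending_firmware(void)
{
    FILE *img = fopen(QUEUED_IMAGE, "rb");
    if (!img)
        return 0;                      /* nothing queued: boot normally */

    if (!checksum_ok(QUEUED_IMAGE)) {
        fclose(img);
        remove(QUEUED_IMAGE);          /* discard a corrupt image */
        return -1;
    }

    if (!user_confirms("Apply queued firmware update?")) {
        fclose(img);                   /* user said no: keep it queued */
        return 0;
    }

    int rc = write_to_flash(img);      /* allow/disallow gate has passed */
    fclose(img);
    if (rc == 0)
        remove(QUEUED_IMAGE);          /* applied successfully */
    return rc;
}

int main(void) { return apply_pending_firmware(); }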
 
Yonkaku said:
Speaking of the OS and installing, we should make it so that the firmware and other important things don't require an SD card at all.

Actually, it should only require WiFi or a USB connection. A queue of sorts could be kept, and on restart the user would confirm before the firmware actually gets written (an allow/disallow step, to guard against viruses and other bad firmware writes).

I'm mentioning this very specifically because of the concern over SDHC support, which might have to be updated in the future. Nobody knows whether 16 GB or 32 GB cards, or even larger ones, will work.
Why wouldn't the new cards work?

I would assume that the firmware can update itself and could be stored on the internal storage.

Also, how about this idea: an internal, removable flash cartridge for swap space that can be replaced when it wears out!

An internal USB port and a small amount of space would be very, very cool, as we could format a small, cheap flash drive as swap and just replace it when it wore out. :D
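
Under Linux the mechanics of that are simple, something like this at boot (swapon(2) is the real call; the device path is a guess, and you'd run mkswap on the drive once first):

Code:
/* Minimal sketch: enable a removable internal flash drive as swap at
 * boot. swapon(2) is the real Linux call; the device path is made up.
 * Needs root, and the drive must have been prepared with mkswap.     */
#include <stdio.h>
#include <sys/swap.h>   /* swapon(), SWAP_FLAG_PREFER, SWAP_FLAG_PRIO_SHIFT */

int main(void)
{
    const char *dev = "/dev/sda1";   /* hypothetical internal USB stick */

    /* Give it an explicit low priority so any faster swap wins. */
    int flags = SWAP_FLAG_PREFER | (1 << SWAP_FLAG_PRIO_SHIFT);

    if (swapon(dev, flags) != 0) {
        perror("swapon");
        return 1;
    }
    printf("swap enabled on %s\n", dev);
    return 0;
}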
 
atomicthumbs said:
Why wouldn't the new cards work?
Remember the fiasco with 4 GB and 2 GB cards before SDHC? Lots of consumers complained that the cards either formatted to only 1 GB or didn't format at all.

I'm pretty sure it can and will happen again.
 
atomicthumbs said:
So you have a toolchain running on your GP2X? Are you on Jupiter, in the process of being crushed by the incredible gravity and pressure, and thus misunderstanding my statement?
Oh, you mean coding and compiling USING the console... yes, I misunderstood. I'm not at such a nerdy level that I'd think of things like this, but hey, that would be SO NERDY!!!!! I'm scared. So while everyone around me is playing WarioWare: Touched! on the DS, I'll be compiling my own game directly on my own console :eek: That will be totally crazy, thanks oh my guru :eek:

*Eclipse gains +500 EXP and learns NERD POWER*
 
Eclipse said:
atomicthumbs said:
I assume we'll be able to work on and compile our own programs and apps on this thing. As far as I know, this'll be the first video game console where you can do that!
Err... are you talking about the GP32 or the GP2X?
'Cause I've been compiling my own stuff on the GP2X just now... are you on Mars?


Now that sounds ambiguous. Are you saying that you compiled your program using a compiler running on your GP2X, or that you used a cross-compiler on your PC to compile programs for your GP2X? In the first case, you must be talking about that TinyC compiler, and honestly I hardly see anyone actually coding and compiling with anything other than gcc *on* the GP2X. That will be fully possible on the Craiginator, since we'll have a fully featured toolchain with gcc and SDL *on* the console, and a keyboard, which will make typing far easier. That, I believe, is what atomicthumbs was talking about, and it's actually a pretty exciting thing.

In the second case... how can I say this without insulting your intelligence? Mmmh... I can't, actually ;)

EDIT: Hmm, I'm a bit late, it seems ;)
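
EDIT 2: For anyone wondering what "compiling on the console" would actually look like, here's the sort of minimal SDL 1.2 program you could type up and build entirely on the device, assuming the on-board gcc and SDL headers (the build line is the usual one, nothing console-specific):

Code:
/* Tiny SDL 1.2 program of the kind you could compile right on the
 * console with the on-board toolchain. Build with something like:
 *   gcc hello.c -o hello `sdl-config --cflags --libs`            */
#include <SDL/SDL.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    /* 320x240x16 was the GP2X-class resolution; adjust to taste. */
    SDL_Surface *screen = SDL_SetVideoMode(320, 240, 16, SDL_SWSURFACE);
    if (!screen) {
        SDL_Quit();
        return 1;
    }

    /* Fill the screen with a solid colour and show it for 2 seconds. */
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 128, 255));
    SDL_Flip(screen);
    SDL_Delay(2000);

    SDL_Quit();
    return 0;
}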
 
Exophase said:
I'm just not seeing it. How will fetching multiple instructions at a time increase throughput when the execute stage of the pipeline will always take one cycle to complete? N instructions = N cycles spent in the execution unit.

I'd always assumed that the pipeline was double-clocked as well, but only able to take advantage of it under certain conditions - hence 1.1 MIPS/MHz instead of nearer 2.0. But having sat and read big chunks of Steve Furber's book (what an exciting Friday evening) I now think that it's slightly misleading marketing on ARM's part.

In his book Furber says, "Taken together these features enable the ARM10TDMI to achieve a Dhrystone 2.1 MIPS per MHz figure of 1.25, which may be compared to 0.9 for the ARM7TDMI and 1.1 for the ARM9TDMI." That's *Dhrystone* MIPS, not ARM MIPS. But ARM always just says "1.1 MIPS/MHz", which sort of implies they're talking about native *ARM* MIPS.

So I think that icurafu was probably right the first time.

In his book Furber goes on to say that the ARM10 really does execute more than one instruction per clock. If I understand it correctly, the branch prediction logic can remove branch instructions before they get into the pipeline, effectively "executing" them in 0 cycles. I've never really looked at the ARM10 before (never worked on anything that used one) so I'll go back and study the sections on it carefully.
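
The arithmetic behind folded branches is easy to check, too: if predicted branches never occupy the pipeline, the cycle count is only that of the remaining instructions, so you get above 1.0 instructions per clock even with a single-issue execute stage. A back-of-envelope sketch (the branch fraction and fold rate below are assumptions, not measured ARM10 numbers):

Code:
/* Back-of-envelope: if branch folding removes predicted branches from
 * the pipeline, cycles ~= non-branch instructions, so IPC rises above
 * 1.0 even though execute is single-issue. Both figures below are
 * assumed for illustration, not measured ARM10 numbers.             */
#include <stdio.h>

int main(void)
{
    double instructions  = 1e6;    /* total instructions retired   */
    double branch_frac   = 0.15;   /* assumed fraction of branches */
    double fold_hit_rate = 0.80;   /* assumed fraction folded away */

    double folded = instructions * branch_frac * fold_hit_rate;
    double cycles = instructions - folded;  /* folded branches cost 0 cycles */

    printf("effective IPC = %.3f\n", instructions / cycles);  /* ~1.136 */
    return 0;
}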
 
atomicthumbs said:
Eclipse said:
atomicthumbs said:
I assume we'll be able to work on and compile our own programs and apps on this thing. As far as I know, this'll be the first video game console where you can do that!
Err... are you talking about the GP32 or the GP2X?
'Cause I've been compiling my own stuff on the GP2X just now... are you on Mars? :huh:
Gamepark Holdings also ran contests on the GP2X, and both of these consoles (GP32 and GP2X) lived only on the apps made by their users, lol.


So you have a toolchain running on your GP2X? Are you on Jupiter, in the process of being crushed by the incredible gravity and pressure, and thus misunderstanding my statement?

You could install a compiler on the GP2X ;) if you wanted to.
 
Firefox said:
Exophase said:
I'm just not seeing it. How will fetching multiple instructions at a time increase throughput when the execute stage of the pipeline will always take one cycle to complete? N instructions = N cycles spent in the execution unit.

I'd always assumed that the pipeline was double-clocked as well, but only able to take advantage of it under certain conditions - hence 1.1 MIPS/MHz instead of nearer 2.0. But having sat and read big chunks of Steve Furber's book (what an exciting Friday evening) I now think that it's slightly misleading marketing on ARM's part.

In his book Furber says, "Taken together these features enable the ARM10TDMI to achieve a Dhrystone 2.1 MIPS per MHz figure of 1.25, which may be compared to 0.9 for the ARM7TDMI and 1.1 for the ARM9TDMI." That's *Dhrystone* MIPS, not ARM MIPS. But ARM always just says "1.1 MIPS/MHz", which sort of implies they're talking about native *ARM* MIPS.

So I think that icurafu was probably right the first time.

In his book Furber goes on to say that the ARM10 really does execute more than one instruction per clock. If I understand it correctly, the branch prediction logic can remove branch instructions before they get into the pipeline, effectively "executing" them in 0 cycles. I've never really looked at the ARM10 before (never worked on anything that used one) so I'll go back and study the sections on it carefully.
Perhaps you could point me to the book you're reading sometime; it sounds like pretty good material. Is the ARM10 actually used in anything? I imagine the ARM11 is superior as well? I guess Dhrystone MIPS is what a "DMIPS" is (in which case, that's what the Craiginator is specced in, so I suppose that's fair, but if we're being technically accurate, MIPS should measure actual instructions, not operations). I wonder what a DMIPS is anyway... Literally, a breakdown of what percentage of which operations it consists of would be great. I wouldn't be surprised at all if you suddenly got more DMIPS just from having a stronger instruction set with fewer stalls. 1.1 over 0.9 might be what you get with good branch prediction over no branch prediction and faster multiplies (I think..).

I think removing branches is a standard technique on some other platforms. I know I saw it in the PowerPC documentation. I've heard of CPUs where the branch prediction is part of the instruction cache. I wonder if that ties in?

EDIT: http://www.netlib.org/benchmark/dhry-c < this has a nice statistics section giving percentages of what it measures. Also, apparently DMIPS is normalized against a VAX machine that was considered to be 1 MIPS. That original figure could have just been exaggerated (what machine is exactly 1 MIPS?)
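
EDIT 2: If I've got it right, the reference machine is the VAX 11/780, and the usual convention is that 1757 Dhrystone loops per second equals 1 DMIPS. So the conversion would go something like this (the clock figure is illustrative, not a Craiginator spec):

Code:
/* DMIPS is raw Dhrystone loops/sec divided by the VAX 11/780's score
 * of 1757 loops/sec (the machine conventionally called "1 MIPS").
 * The clock speed below is illustrative, not a real spec.          */
#include <stdio.h>

#define VAX_11_780_DHRYSTONES 1757.0

int main(void)
{
    double clock_mhz     = 300.0;   /* assumed ARM10-class clock   */
    double dmips_per_mhz = 1.25;    /* Furber's ARM10TDMI figure   */

    double dmips = clock_mhz * dmips_per_mhz;
    double dhrystones_per_sec = dmips * VAX_11_780_DHRYSTONES;

    printf("%.0f DMIPS = %.0f Dhrystone loops/sec\n",
           dmips, dhrystones_per_sec);
    return 0;
}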
 
Yonkaku said:
atomicthumbs said:
Why wouldn't the new cards work?
Remember the fiasco with 4 GB and 2 GB cards before SDHC? Lots of consumers complained that the cards either formatted to only 1 GB or didn't format at all.

I'm pretty sure it can and will happen again.


That was because of the built-in 4 GB limit in the original specification. Some implementations couldn't handle even 4 GB due to the way they were written, and the hacks used to get several of those implementations working screwed up other implementations. The new specification supports up to 32 GB.

So we shouldn't really see a problem until 64 GB cards come out, as that's when the specification will have to change again.
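
If you want the underlying arithmetic: the old protocol carried *byte* addresses in 32-bit command fields, which tops out at 4 GB, while SDHC reuses the same field width for 512-byte *block* addresses. (The 32 GB figure is a deliberate cap in the SDHC spec, well below the addressing ceiling.) Roughly:

Code:
/* Why old cards hit a wall at 4 GB: the original SD protocol put
 * *byte* addresses in 32-bit command fields. SDHC reuses the same
 * 32-bit field for 512-byte *block* addresses, pushing the addressing
 * ceiling far past the 32 GB limit the SDHC spec actually imposes.  */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t field_max = (uint64_t)UINT32_MAX + 1;   /* 2^32 addresses */

    uint64_t sd_bytes   = field_max;                 /* byte-addressed  */
    uint64_t sdhc_bytes = field_max * 512;           /* block-addressed */

    printf("SD   ceiling: %llu GB\n", (unsigned long long)(sd_bytes   >> 30));
    printf("SDHC ceiling: %llu TB\n", (unsigned long long)(sdhc_bytes >> 40));
    return 0;
}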

Exophase said:
Perhaps you could point me to the book you're reading sometime; it sounds like pretty good material. Is the ARM10 actually used in anything? I imagine the ARM11 is superior as well? I guess Dhrystone MIPS is what a "DMIPS" is (in ...
Isn't the MX31 an ARM10 SoC?
 
OMars said:
I would love to go back in time, but the damage is done. :(
Ok, now I'm over it... :blink:


It's not your fault they're a bunch of stubborn ass faggots who wank at the sight of Tux and don't listen to anyone who says shit-nix is not completely perfect.


(I use Gentoo Linux on a desktop; I have also used Ubuntu. Linux has just as many flaws as Windows.)
 
PoisonedV said:
OMars said:
I would love to go back in time, but the damage is done. :(
Ok, now I'm over it... :blink:


It's not your fault they're a bunch of stubborn ass faggots who wank at the sight of Tux and don't listen to anyone who says shit-nix is not completely perfect.


(I use Gentoo Linux on a desktop; I have also used Ubuntu. Linux has just as many flaws as Windows.)


I see where you get your sig now.

For the record, I'm using Kubuntu and Compiz Fusion on a computer with a third the RAM and 400 MHz less CPU power than my other computer, plus a 32 MB GeForce2 Go video card, and it performs faster than Windows does on the other machine.
 
Just to join in on the "which flavour of Linux should it run?" debate, I'd like to point out that going from turning my Ubuntu laptop on to getting to a desktop takes probably 2-3 minutes.

I do NOT want a handheld that takes even 2 minutes to load, thank you very much. Seeing as most of that time comes after X has been loaded, it seems likely to me that, whilst X may be part of whatever firmware it runs, it probably shouldn't be loaded unless the user decides they need it.

E.g. a menu a little like GMenu2X as the default, with permanent options of "Load KDE desktop" or "Load GNOME desktop".

That way we get from turning on to playing games *fast*, but if we need a full desktop (e.g. for PDA-like abilities) then it's there.

I'd like to stress once more that dumping a full Linux distro on the new console, unless it actually loads in under 12 seconds, is a very, very bad idea.
 
Tobriand said:
I'd like to stress once more that dumping a full Linux distro on the new console, unless it actually loads in under 12 seconds, is a very, very bad idea.
It's all a matter of playing with the boot scripts and selecting the fastest init system. You could most likely get Debian down to that speed with some work; it just isn't done on the desktop because you'd end up with a useless system (whereas a device like this doesn't need most of that stuff).
 
I understand why you people are suggesting Debian, but what's with all this Ubuntu shit? Ubuntu is WAY too dumbed down, and most of the stuff in it that makes Ubuntu what it is would just be a waste of space and boot time on a handheld like the Craiginator.

I think we should just go with a more bare-bones distro like Debian and then work on getting it to how we want it.

-God Ginrai
 
God Ginrai said:
I understand why you people are suggesting Debian, but what's with all this Ubuntu shit? Ubuntu is WAY too dumbed down, and most of the stuff in it that makes Ubuntu what it is would just be a waste of space and boot time on a handheld like the Craiginator.

I think we should just go with a more bare-bones distro like Debian and then work on getting it to how we want it.

-God Ginrai
Calm down.

I thought it had been confirmed that it was going to be Open2X-based and Debian-compatible.

(PS: It's only the GNOME part of Ubuntu that's dumbed down. KDE is not; that's why I use Kubuntu. I used to use Debian Etch, and it's about the same, except that (K)ubuntu releases and upgrades more often.)
 
That was calm. I just seriously think that about Ubuntu. If I weren't calm, it would have been much longer and laced with many more insults.

-God Ginrai
 
Tobriand said:
I do NOT want a handheld that takes even 2 minutes to load, thank you very much. Seeing as most of that time comes after X has been loaded, it seems likely to me that, whilst X may be part of whatever firmware it runs, it probably shouldn't be loaded unless the user decides they need it.

E.g. a menu a little like GMenu2X as the default, with permanent options of "Load KDE desktop" or "Load GNOME desktop".

That way we get from turning on to playing games *fast*, but if we need a full desktop (e.g. for PDA-like abilities) then it's there.

I'd like to stress once more that dumping a full Linux distro on the new console, unless it actually loads in under 12 seconds, is a very, very bad idea.
This is the best option. Maybe a bootloader.

Thanks to Exophase for the link about TI demoing the OMAP 3430 at 1 GHz. While I still doubt you'll see 800+ MHz in the first batch of OMAPs, it does prove that the package itself can handle those speeds.

http://www.arm.com/news/16539.html
 