Possible PS2 Emu?


ravensoul10

Hiya guys. I'm pretty new here and was just looking at the possible/confirmed emulators list for the Pandora a few min ago. I was wondering, other than maybe no one has stepped up to the plate to try and make one, why couldn't a PS2 emu possibly be made? I ask because it sounds like a PSP emu is being worked on and I guess is showing good promise atm. I could be totally wrong hardware-wise, but I believe the PSP maxes out at the same clock speed as the PS2, or close to it. Does the PS2 just have more memory/other resources that the PSP doesn't, which would make the PSP easier to create an emu for, or are there other reasons? Thanks in advance for any replies to my just-popped-into-my-head curious question hehe. Later
 
Nope, not possible. Not a chance. The PS2 sports a vastly superior CPU (PSP: 2.6 GFLOPS, PS2: 6.2 GFLOPS) and a better GPU (about six times faster GPU memory, about four times the pixel fill rate, and about twice the polygon throughput).

The Pandora is struggling with PSP HLE already, so there's not a chance in hell that there'll ever be a PS2 emulator.

Sorry to disappoint you :(
 
Ah ok, interesting stuff hehe. I just never thought the PS2 was that much more powerful in comparison to the PSP, as they have about the same graphics to look at... but I guess the PSP is prolly running at around half the resolution of the PS2, I think? I forget what standard-def resolution is. I know PS2 emulation is, or at least was, slow/buggy on the computer last time I looked. Oh well. I still want a Pandora lol
 
I wonder if it would even be possible to experiment with for educational purposes. I imagine it would be so impractical that it might as well be considered a physical impossibility. Some of these emulators (PSP, Dreamcast, etc.) may never get into a playable state on the Pandora, but the work put in could be useful on a future device (Pandora 2?). PS2 emulation is probably another generation of SoCs away.
 
I should just put this back into my signature:

http://en.wikipedia.org/wiki/Megahertz_myth

Unless you're comparing the exact same make of CPU, clock speed means nothing. So even if you have a listing of the CPU / FPU / GPU / IO speeds of the PSP, PS2, Pandora, it means nothing.

Emulation is not usually graphics-limited, at least not for resolution. The PS2 has relatively advanced shaders, though, which would be a lot of trouble to emulate, and the Pandora's GPU wouldn't have any idea how to do them. It would be a CPU emulation of a graphics card, and it would suck.
 
lulzfish said:
I should just put this back into my signature:

http://en.wikipedia.org/wiki/Megahertz_myth

Unless you're comparing the exact same make of CPU, clock speed means nothing. So even if you have a listing of the CPU / FPU / GPU / IO speeds of the PSP, PS2, Pandora, it means nothing.
If you know what you're talking about, frequency can be used: the R5900 is a dual-issue 64-bit processor running at 300 MHz; the Cortex-A8 in the Pandora is a dual-issue 32-bit processor running at 600 MHz. From there you can deduce that the Pandora probably doesn't have enough power even for the CPU alone (as a reference, the N64 has an R4300i, which is single-issue 64-bit running at 100 MHz).
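
To put very rough numbers on that (peak issue rates only; the cost per emulated instruction is just a ballpark I'm assuming):

Code:
R5900:     2 inst/cycle x 300 MHz = 600 M guest inst/s (peak)
Cortex-A8: 2 inst/cycle x 600 MHz = 1200 M host inst/s (peak)
At even ~5 host instructions per guest instruction:
600 M x 5 = 3000 M host inst/s needed, vs ~1200 M available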

So even though frequency should be taken with great care, it can be used to compare things if one is careful :)
 
If CPU power doubles every 18 months then a Pandora 4.5 years from now would have an 8 times more powerful CPU, 16 times in 6 years. Of course by then the PS4 and the next Xbox and Nintendo systems will be out.

Imagine what is almost possible today and what a Pandora 2 could do :)
 
Laurent said:
lulzfish said:
I should just put this back into my signature:

http://en.wikipedia.org/wiki/Megahertz_myth

Unless you're comparing the exact same make of CPU, clock speed means nothing. So even if you have a listing of the CPU / FPU / GPU / IO speeds of the PSP, PS2, Pandora, it means nothing.
If you know what you're talking about, frequency can be used: the R5900 is a dual-issue 64-bit processor running at 300 MHz; the Cortex-A8 in the Pandora is a dual-issue 32-bit processor running at 600 MHz. From there you can deduce that the Pandora probably doesn't have enough power even for the CPU alone (as a reference, the N64 has an R4300i, which is single-issue 64-bit running at 100 MHz).

So even though frequency should be taken with great care, it can be used to compare things if one is careful :)

That's not even including the two co-processors, which each run at 150 MHz and can do 4 FMACs per clock. (For comparison, the Cortex can do 2 FMACs/clock, with more latency - rough numbers below.)

And the monstrous fill rate of 1200 Mtexels/s (versus about 300 or so for the SGX, even including overdraw)

Or the fact that the textures are in a completely different format (they make good use of paletted textures, which the SGX emulates slowly)

Or the fact that the FPUs are not standard, and thus incredibly hard to emulate accurately (have a look at the PCSX2 blog about clamping)

Etc

:)
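
To put numbers on just that FMAC point (peak rates only, ignoring latency and everything else the host has to do):

Code:
PS2 VUs:   2 units x 150 MHz x 4 FMACs/clock = 1200 M FMACs/s
A8 NEON:   1 unit  x 600 MHz x 2 FMACs/clock = 1200 M FMACs/s

Parity at best on paper, with nothing left over for actually emulating the rest of the machine.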

I'm always hopeful, but I think the DC is probably the limit - all of the others simply outgun the Pandora by too much, either processor-wise, GPU-wise, or both.

The OMAP4 will probably still be unable to emulate the PS2's GPU, although it has enough raw power to do other stuff. That said, that might be enough of a challenge...
 
Awakening said:
If CPU power doubles every 18 months then a Pandora 4.5 years from now would have an 8 times more powerful CPU, 16 times in 6 years. Of course by then the PS4 and the next Xbox and Nintendo systems will be out.

Imagine what is almost possible today and what a Pandora 2 could do :)

Moore's Law states that transistor count doubles every 18 months or so, not speed. In desktop CPUs, peak single-core performance has increased roughly 3-4 fold in the past 7 years. Of course, with 4 or 8 cores you can multiply that substantially, but the reason I mention single-core performance is that ultimately most of the work in an emulator tends to end up in one core.

Mobile CPUs will probably be increasing in single-core performance more rapidly, but they too will hit a wall, and they're already going multicore. In 4.5 years I doubt you'll see single-core performance at 8x what it is now, and I especially doubt 16x in 6 years. The single-core performance of the Cortex-A8 is probably not 8x what was available in handhelds 6 years ago.
 
I fully agree, and I think that's the reason why emulators should move to multicore. Of course there will always be one core that's the hot spot (the one simulating the CPU), but if you can move some tasks to other cores (sound, graphics, etc.) and have low-overhead synchronization, it's a win.
 
Laurent said:
I fully agree, and I think that's the reason why emulators should move to multicore. Of course there will always be one core that's the hot spot (the one simulating the CPU), but if you can move some tasks to other cores (sound, graphics, etc.) and have low-overhead synchronization, it's a win.

Problem being, of course, that a lot of the synchronization overhead is shouldered by the OS and the hardware itself.

What's needed all depends on the synchronization demands of what's being emulated, which is of course a fuzzy number, since it's really a question of how sensitive the software is to timing inaccuracy. For PCSX2, they have determined that the switching between the emulated processors (R5900, VUs in micro mode, IOP) happens every 512 cycles. I think this is still too fine-grained for threads on different CPUs to synchronize at without introducing more overhead than the emulation of the block itself plus the single-core task-switch overhead. This might not strictly be the case; spinning on a timestamp counter should be fast enough, although it's still putting out loads to whatever the shared cache is, so probably a few hundred clock cycles. But the single-core switching overhead is going to be quite high too, since it'll have to switch register sets. DS games also require tight synchronization between their two CPUs just to boot - it's within several dozen bus cycles.
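
To make that concrete, spinning on a shared cycle counter looks roughly like this (just a sketch in C; execute_block and the other names are made up - only the 512-cycle window is PCSX2's):

Code:
#include <stdatomic.h>

#define SYNC_WINDOW 512   /* PCSX2's switching granularity */

/* Running cycle count per emulated processor, shared across host threads. */
static _Atomic long cycles[2];

/* Hypothetical dynarec entry: runs one translated block for processor
 * `self` and returns how many guest cycles it consumed. */
extern long execute_block(int self);

static void run_block_and_sync(int self, int other)
{
    long spent = execute_block(self);
    long now   = atomic_fetch_add(&cycles[self], spent) + spent;

    /* Don't run more than SYNC_WINDOW cycles ahead of the other
     * processor. Each spin iteration is a load on a cache line the
     * other thread writes, which is exactly the overhead in question. */
    while (now - atomic_load(&cycles[other]) > SYNC_WINDOW)
        ;  /* spin */
}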

Graphics can benefit more, but a lot of the work for graphics is already offloaded to GPUs. On some platforms like the DS there's still per-line synchronization that games really do depend on. Since video is usually a write-only system, this can be accomplished with queues instead of lockstep synchronization, but it can still get tricky - especially when dealing with large shared memories.
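
The queue idea, sketched as a single-producer/single-consumer ring (all names invented; full-queue policy and the large-shared-memory cases are glossed over):

Code:
#include <stdatomic.h>
#include <stdint.h>

#define QSIZE 4096                          /* must be a power of two */

struct gpu_cmd { uint32_t reg, value; };    /* made-up command format */

static struct gpu_cmd queue[QSIZE];
static _Atomic unsigned head, tail;         /* consumer / producer */

/* Emulation thread: log a GPU register write and keep running. */
static int push_cmd(struct gpu_cmd c)
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    if (t - atomic_load_explicit(&head, memory_order_acquire) == QSIZE)
        return 0;                           /* full: drain or stall */
    queue[t & (QSIZE - 1)] = c;
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return 1;
}

/* Render thread: drain commands at its own pace. Lockstep is only
 * needed when the guest reads GPU state back. */
static int pop_cmd(struct gpu_cmd *c)
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    if (h == atomic_load_explicit(&tail, memory_order_acquire))
        return 0;                           /* empty */
    *c = queue[h & (QSIZE - 1)];
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return 1;
}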

What I think would be really nice for emulation is a little more in the way of hardware assistance in CPUs. ARM would be especially smart to capitalize on this in order to accelerate x86 emulation; Loongson, for instance, has started down this path. But I don't really mean instructions that do what x86 does and ARM doesn't, I mean some more general-purpose functions: for instance, hardware hash tables (i.e., software-defined TLBs), better access to flags (on x86 anyway), things like this.
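
By hardware hash tables I mean hardware that does what every emulator currently has to do in software on each guest memory access, which is roughly this (sketch; slow_load32 and the sizes are hypothetical):

Code:
#include <stdint.h>

#define PAGE_SHIFT 12
#define TLB_BITS   10

/* Direct-mapped software TLB: guest page number -> host base pointer. */
struct tlb_entry { uint32_t guest_page; uint8_t *host_base; };
static struct tlb_entry tlb[1 << TLB_BITS];

/* Hypothetical miss handler (page walk, MMIO dispatch, etc.). */
extern uint32_t slow_load32(uint32_t addr);

/* Every emulated load pays this hash-and-compare today; a hardware
 * hash table could collapse the hit path to an instruction or two. */
static uint32_t guest_load32(uint32_t addr)
{
    uint32_t page = addr >> PAGE_SHIFT;
    struct tlb_entry *e = &tlb[page & ((1u << TLB_BITS) - 1)];
    if (e->guest_page == page)
        return *(uint32_t *)(e->host_base + (addr & ((1u << PAGE_SHIFT) - 1)));
    return slow_load32(addr);
}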

On the synchronization front, one thing that would help is assistance for something like what the Transmeta Crusoe did. Accesses to shared state, i.e. loads and stores, went to an intermediate buffer, and coherency was checked in between blocks. If the block had to be interrupted, the results could be flushed and it could be emulated interpretively; otherwise the buffers write back out to real state. This is kind of a higher-level extension of what pipelines already do for handling cache misses and branch mispredictions. To accommodate emulation of multiple sources, memory accesses could be logged and stores queued, and the logs could be compared against the stores from several sources to detect synchronization issues. Unfortunately, fairly large buffers (at least dozens if not hundreds of KB) would be necessary before this worked out to much of a win, and then synchronization errors would be a bigger problem.
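
A rough software analogue of those gated stores, for flavor (sketch; real_store32 is hypothetical, and buffer overflow plus the load logging are omitted):

Code:
#include <stdint.h>

#define MAX_STORES 256

/* Stores made inside a translated block are only staged here. */
static struct { uint32_t addr, value; } pending[MAX_STORES];
static int npending;

extern void real_store32(uint32_t addr, uint32_t value);  /* hypothetical */

static void gated_store32(uint32_t addr, uint32_t value)
{
    pending[npending].addr  = addr;
    pending[npending].value = value;
    npending++;
}

/* End of block: commit everything, or drop it all and fall back to
 * interpretation - this is what Crusoe did transparently in hardware. */
static void end_block(int committed)
{
    if (committed)
        for (int i = 0; i < npending; i++)
            real_store32(pending[i].addr, pending[i].value);
    npending = 0;
}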
 
In case you missed this article about Godson: http://iccd.et.tudelft.nl/2009/proceedings/305Hu.pdf
I might have already linked it, can't remember...

Anyway, I'm ready to bet such hardware assistance won't be widespread and will be limited to x86; after all, customers only care about x86.

Oh, and I again agree with everything you wrote :)
 
Laurent said:
In case you missed this article about Godson: http://iccd.et.tudelft.nl/2009/proceedings/305Hu.pdf
I might have already linked it, can't remember...

Anyway, I'm ready to bet such hardware assistance won't be widespread and will be limited to x86; after all, customers only care about x86.

Oh, and I again agree with everything you wrote :)

I read a different Godson paper. That one listed the new instructions in more detail and compared output between a really naive recompiler and one with the new instructions to show how effective they are. Kind of reminds me of a paper I recently read by the Gemulator guy, where he was trying to show that interpretation is as good as or better than dynamic recompilation by comparing his modified Bochs to some old qemu build.

I always find these emulation/virtualization papers citing X% of native speed a little suspect, since they tend not to benchmark very real-world things.

Anyway, about the x86 support instructions in Godson - some of them are indeed very x86-specific, but I do recall some being at least somewhat useful for emulating other platforms. So even if support is added specifically to improve x86 emulation, it could still have some side benefits. I also wouldn't write off the possibility of ARM adding instructions specifically to improve emulation; actually, in a sense, they already have, with Thumb-EE. They could very well extend this further, although it'd probably take someone actually using Thumb-EE to show the benefit.
 
Lunatic said:
PS2 emulation is probably another generation of SoCs away.
It's more like half a dozen or so generations away. The system requirements for PCSX2 are pretty steep - a 2.4 GHz Core 2 Duo and an Nvidia 8800 GPU. The Pandora is roughly in the same league as a Pentium II system from a decade ago. It's going to be quite a while before low-power handheld SoCs have the sort of power required by PCSX2.
 
Chip said:
Lunatic said:
PS2 emulation is probably another generation of SoCs away.
It's more like half a dozen or so generations away. The system requirements for PCSX2 are pretty steep - a 2.4 GHz Core 2 Duo and an Nvidia 8800 GPU. The Pandora is roughly in the same league as a Pentium II system from a decade ago. It's going to be quite a while before low-power handheld SoCs have the sort of power required by PCSX2.
I meant that as an additional generation on top of the next one, where we might have good Dreamcast/PSP emulation. But yeah, I guess even that is a bit optimistic.
 
Awakening said:
If CPU power doubles every 18 months then a Pandora 4.5 years from now would have an 8 times more powerful CPU, 16 times in 6 years. Of course by then the PS4 and the next Xbox and Nintendo systems will be out.

Imagine what is almost possible today and what a Pandora 2 could do :)

The sales of 3D televisions will dictate when the next generation of consoles is released, as most of them aim to include 3D technology in new games, notably glasses-free 3D. Since this thread really isn't going anywhere based on the topic question, does anyone have any information regarding the future of computers?
 
fretfrenzy182 said:
<snip>

The sales of 3D televisions will dictate when the next generation of consoles is released, as most of them aim to include 3D technology in new games, notably glasses-free 3D. Since this thread really isn't going anywhere based on the topic question, does anyone have any information regarding the future of computers?
Well, we have multicore ARM processors that will take over the netbook market, and personally I think AMD's Bulldozer may turn the tide toward AMD for a spell... especially if Intel actually tries to ship Larrabee, which is a terrible design, as it underperforms in all cases, while AMD will have double the integer performance and a powerful on-die GPU (think about how fast the Radeon 3200 is - nearly as fast as some dedicated cards, the 8400GS IIRC - then move it onto the CPU die for improved cooperation with the CPU) in later revisions of Bulldozer, while Intel has never proved themselves in that arena.

As far as consoles go, I don't think Larrabee will ever be in a console in its current design, since its competitors, PPC64 and x64, will be vastly superior by the time it's viable. I also wonder about SPARC64: while it usually has lower clock speeds, it's far better at multithreading, and there's also a supercomputer project Fujitsu is working on using them. Integrating a SPARC64 into a game console to reduce chip prices would seem the thing to do (if I were Fujitsu and Oracle, that is...). Cooperation between AMD and Fujitsu would be interesting, as Hitachi dropped out of the Japanese supercomputer project as the vector processor designer, leaving it a lopsided integer monster... I think AMD/ATI may fit the bill to provide vector units, especially with the split into GlobalFoundries.

Also, wouldn't SPARC be able to handle some of these emulation problems with its register windows for fast switching between threads? Or is MIPS's full-access register design that much better? Or perhaps I'm completely off base on that one. (I should probably go back to reading the SPARC spec now :)

I also don't think Sony coming up with a new CPU design on their own would be financially viable for them...
 
cb88 said:
Well, we have multicore ARM processors that will take over the netbook market

I don't know about that. People like their netbooks with Windows, as the market continues to prove.

cb88 said:
and personally I think AMD's Bulldozer may turn the tide toward AMD for a spell... especially if Intel actually tries to ship Larrabee, which is a terrible design, as it underperforms in all cases, while AMD will have double the integer performance and a powerful on-die GPU (think about how fast the Radeon 3200 is - nearly as fast as some dedicated cards, the 8400GS IIRC - then move it onto the CPU die for improved cooperation with the CPU) in later revisions of Bulldozer, while Intel has never proved themselves in that arena.

Do you have any links to Bulldozer and Larrabee benchmarks? I haven't seen anything on either, and I suddenly feel very out of touch with current x86 news :/ I thought Larrabee looked ridiculous at first too, but I eventually determined that the x86 was just there as flow-control glue and for legacy applications, and that the new SIMD was the real meat of the computational power. I can't say whether Larrabee will completely mess up or not, but I for one am quite pleased to see scatter/gather operations finally show up in mainstream CPU SIMD. I hope NEON gets something like that down the line.
 
Google is planning its own branded netbooks with Chrome OS; such an endeavor could greatly boost ARM's market share. http://www.engadget.com/2009/12/28/googles-chrome-os-based-netbook-specs-leak-out-look-good/
 
@Exophase I believe Larrabee was around 1 TFLOP (I haven't seen any numbers higher than that), while GPUs are already pushing 5 TFLOPS and are also becoming more generic. That was why I liked AMD's idea of improving integer performance on the CPU and leaning on the GPU for raw FPU power... I mean, it makes a lot of sense, whereas a 48-core superscalar, non-out-of-order Pentium does not make much sense at all; it would make it too generic to be of any use except for perhaps crypto stuff, where the Tilera 36-, 64-, and 128-core parts already provide a solution.

SPARC has kind of grown on me though (perhaps too late)... I like how any company can produce its own core, with its own extensions, without licensing issues, even though MIPS may have some design advantages in a few areas.

@fischju2000 I agree. Even though I'm not fond of Java, and thus Dalvik, hopefully C code support will be allowed on such devices; otherwise I think they will flop even if there are a lot of Dalvik apps. I do find Dalvik interesting though, as it claims to be more efficient than Java...
 