Neon and SGX


LTStone

Just wondering if you guys could explain the benefits of the Pandora's NEON and SGX chips. Are there any benefits they give over modern devices, and which apps and emus do they benefit, and how? I read about these and would just like to know a little more about my beloved Pandora :)
 
Bit of pedantry first... all of this stuff is on the same chip, the OMAP3530/DM3730. NEON in particular is part of the CPU; it's not a separate thing.

SGX is the GPU (graphics core), and rendering graphics is really all it's good for. On newer GPUs you may be able to offload other sorts of work to them, but that's not really an option here. On the Pandora, using the SGX can mean a lot of overhead spent in drivers. Getting the best performance out of it is challenging, and for emulators the features exposed by the API (OpenGL ES) may not be a very good fit for what the console does, or may need too much overhead to make them fit. So a lot of emulators don't use it, even if they have 3D. The only one I know of that does is mupen64plus. A PS1 emulator could probably use it, but it wouldn't be as accurate as the renderer we have; if you look at PC PS1 emulators, they have a long history of severe graphics problems and lots of hacks to try to work around them, because they use PC graphics cards. It doesn't really help that OpenGL ES 2 is often a more restrictive API than the ones the PC plugins used. For 2D console emulators, trying to use the SGX just isn't worth it.
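To make that last point concrete, here's a minimal sketch of what 2D console rendering looks like in software (the function and the color-key convention are illustrative, not from any particular emulator): a sprite blit is just a tight CPU loop with per-pixel logic, and pushing each little blit through GL ES as a texture upload plus a draw call would add driver overhead to every one.

```c
#include <stdint.h>

/* Illustrative software sprite blit for a 2D console emulator.
 * Color 0 is treated as transparent, as many 2D consoles do.
 * Pitches are in pixels. */
static void blit_sprite(uint16_t *dst, int dst_pitch,
                        const uint16_t *src, int src_pitch,
                        int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint16_t px = src[y * src_pitch + x];
            if (px != 0)                       /* color key: skip transparent */
                dst[y * dst_pitch + x] = px;
        }
    }
}
```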

DS emulation in particular is a worst case, since the DS has both a 2D and a 3D engine and has to combine the output from both. The 2D part doesn't work well on a GPU, and if you do the 2D in software and the 3D on the GPU, then you have to read back the contents of the 3D render buffer, which is really slow. Some PC emulators do have GPU-based 3D emulation, but they tend to offer software emulation too, and it's typically faster, sometimes much faster (and much more accurate).
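As a rough illustration (a sketch, not code from any DS emulator; the draw_* functions are made up), splitting the engines that way forces something like this every frame, and the glReadPixels call stalls until the GPU has finished rendering:

```c
#include <GLES2/gl2.h>
#include <stdint.h>

#define W 256   /* DS screen resolution */
#define H 192

extern void draw_3d_scene(void);          /* hypothetical 3D renderer */
extern void draw_2d_layers(uint32_t *fb); /* hypothetical software 2D renderer */

static uint32_t frame[W * H];

/* Per-frame composite for a DS-style split: 3D on the GPU,
 * 2D in software layered over the result. */
void compose_frame(void)
{
    draw_3d_scene();   /* issue the GL ES draw calls for the 3D engine */

    /* Pull the 3D output back into CPU memory. This blocks until the
     * GPU has finished the frame, and the copy itself is slow here. */
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, frame);

    /* Now the software 2D engine can mix its layers and sprites
     * with the 3D layer according to the DS's priority rules. */
    draw_2d_layers(frame);
}
```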

NEON is a set of instructions for the CPU that let you perform multiple operations in parallel (Single Instruction, Multiple Data; look up SIMD on Wikipedia). Not a lot of emulators use it, but the ones that do use it to emulate high-level things like graphics and audio, or parts of the console that were already doing vector processing, like geometry transformation.
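For a flavor of what NEON code looks like (an illustrative sketch, not taken from any emulator), here's two streams of 16-bit audio samples being mixed eight at a time with saturating adds, using the standard arm_neon.h intrinsics:

```c
#include <arm_neon.h>
#include <stdint.h>

/* Mix two buffers of signed 16-bit samples with saturating adds.
 * len is assumed to be a multiple of 8 to keep the sketch short. */
void mix_audio(int16_t *dst, const int16_t *a, const int16_t *b, int len)
{
    for (int i = 0; i < len; i += 8) {
        int16x8_t va = vld1q_s16(a + i);   /* load 8 samples at once */
        int16x8_t vb = vld1q_s16(b + i);
        /* one instruction adds all 8 pairs, clamping on overflow */
        vst1q_s16(dst + i, vqaddq_s16(va, vb));
    }
}
```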
 
The 2D part doesn't work well on a GPU, and if you do the 2D in software and the 3D on the GPU, then you have to read back the contents of the 3D render buffer, which is really slow.
Is reading from the render buffer blocking the CPU? If not, would something like this be possible?

Render the background image via the 2D engine on the CPU

Give the background to the GPU -> let the GPU render some complex models

While copying, do some part of the audio emulation

GPU still renders the models

While the GPU still renders the models, finish the audio emulation

GPU is done

Copy the stuff from the GPU back to CPU memory

While copying, do some recompiler checks for changed code

Do the missing 2D operations

Copy the frame into the framebuffer
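In code, the proposed pipeline would look something like this (a sketch only; all the render_*/emulate_* names are made up for illustration, and it assumes the GPU work can be ordered this way at all):

```c
#include <GLES2/gl2.h>
#include <stdint.h>

extern void render_background_2d(uint32_t *fb);      /* hypothetical */
extern void upload_background(const uint32_t *fb);   /* hypothetical texture upload */
extern void submit_3d_models(void);                  /* hypothetical GL ES draw calls */
extern void emulate_audio_slice(void);               /* hypothetical */
extern void finish_2d_ops(uint32_t *fb);             /* hypothetical */
extern void copy_to_framebuffer(const uint32_t *fb); /* hypothetical */

static uint32_t fb[256 * 192];

void frame(void)
{
    render_background_2d(fb);   /* 2D engine, in software */

    upload_background(fb);      /* hand the background to the GPU */
    submit_3d_models();         /* GPU starts rendering the models */

    emulate_audio_slice();      /* CPU work while the GPU is busy */
    emulate_audio_slice();      /* finish the audio emulation */

    /* The catch: in GL ES 2 this is the only way to get the pixels
     * back, and it blocks the CPU until the GPU is completely done,
     * so nothing can overlap with the copy itself. */
    glReadPixels(0, 0, 256, 192, GL_RGBA, GL_UNSIGNED_BYTE, fb);

    finish_2d_ops(fb);          /* the remaining 2D operations */
    copy_to_framebuffer(fb);    /* copy the frame into the framebuffer */
}
```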
 
There's not a lot of time between when you tell the GPU to start rendering and when you need what it rendered: less than one frame. This is a direct feedback loop that games can and do monitor, so you can't hide a lot of latency. The way the GPU works may naturally impose multiple frames of latency, so trying to get less than one frame of it may be impossible. It also requires that the API expose some way to issue the glReadPixels call asynchronously in the first place, which AFAIK requires PBOs, which OpenGL ES 2 does not have.

That just applies to the time that'd be spent waiting for the GPU to finish rendering. If a lot of that time is instead spent on the CPU because the driver is being dumb about copying buffers around, it can't be recovered.
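For comparison, this is roughly what asynchronous readback looks like on an API that does have PBOs (OpenGL ES 3.0 / desktop GL; just a sketch, and none of it is available in the ES 2 driver on the Pandora):

```c
#include <GLES3/gl3.h>

/* Sketch of PBO-based async readback (GL ES 3.0+, not ES 2). With a
 * PIXEL_PACK buffer bound, glReadPixels returns immediately and the
 * copy happens on the GPU's timeline. */
static GLuint pbo;

void start_readback(int w, int h)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    /* With a PBO bound, the last argument is an offset into the
     * buffer, not a pointer, and the call doesn't block. */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
}

void *finish_readback(int w, int h)
{
    /* Mapping later is where any remaining wait happens; ideally the
     * GPU has long since finished by the time you get here. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    return glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, w * h * 4,
                            GL_MAP_READ_BIT);
}
```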
 