How Much Performance Can The Pandora GPU Give?


QUOTE
http://www.youtube.com/watch?v=kvci1vTXyUo
The N95's not that old (and it was ahead of its time when it came out). It's an OMAP 2420... aka PowerVR MBX; I think the N-Gage is an OMAP 1. The Quake 3 videos for iPhone are also somewhat representative... apparently it's equivalent to the OMAP 2420 with an ARM1176 (higher clocked, but not nearly as fast as a Cortex-A8) instead of an ARM1136. The N95 is slightly better in GPU performance according to http://www.glbenchmark.com/phonedetails.jsp?D=Apple%20iPhone.

Quake3 iPod Touch: http://www.youtube.com/watch?v=kvci1vTXyUo
 
OMAP 2420 benchmarks

Here are some OMAP 2420 figures from real-world benchmarks. OMAP3 more than doubles the clock speed on top of its architectural improvements, so that should allow some educated guesses as to what performance might be on OMAP3.
 
Is the new OMAP so super-secret, or why are real benchmarks so rare? :) Or is it just because there is no device to run benchmarks on? OK, maybe there are no benchmark programs yet; it would be great if there were a program like 3DMark for ARM platforms.
With the Pandora we can finally compare the speed of the device. I really want to run a Q3 timedemo on my (1.1GHz, GeForce3) PC and the same demo on the Pandora and then see how it compares. :D
 
fusion_power said:
Or is it just because there is no device to run benchmarks on? OK, maybe there are no benchmark programs yet; it would be great if there were a program like 3DMark for ARM platforms.
I think the Phoronix benchmark suite could be adapted for ARM.
 
sindbad said:
fusion_power said:
Or is it just because there is no device to run benchmarks on? OK, maybe there are no benchmark programs yet; it would be great if there were a program like 3DMark for ARM platforms.
I think the Phoronix benchmark suite could be adapted for ARM.
There's 3DMark Mobile 06, which is what my link referenced... unfortunately, as yet only results for OMAP2 have been published... hopefully results for the 3420/2430 like the Pandora uses will be posted soon. Certainly expect ~10 MTri/sec; the fillrate game-scene tests are quite hard to judge, but it should be a minimum of 2x the performance.
 
Thought I would add something to this:

PowerVR advertises that SGX530 can render 1.2GPixels/s at 200MHz. As far as I'm concerned this is more or less an outright lie. There are two texture units/pipelines, so the actual raw number that it can put out is 400MPixels/s, which is then multiplied by 3 because an overdraw rate of 3.0 is assumed. This number might apply to some popular software, but I don't really think it's fair to quote fillrate based around that.

Anyway... as a comparison, Dreamcast uses similar technology and runs at 100MHz, but it only has one texture unit. So it has a fillrate of 100MPixels/s, and games routinely render to 640x480. As far as we're aware, Pandora's GPU will run at 110MHz, so it will have a fillrate of 220MPixels/s. 800x480, if all of it is rendered to, is 25% more pixels than 640x480. The other technology (triangle setup and especially programmable shaders) should be similarly much better on SGX530, so I think it's probably realistic to expect something that's at least moderately better than Dreamcast.

In terms of fillrate this isn't going to be a PSP killer, not even close (PSP appears to have 4 texture units, 667MPixels/s at 166MHz) but the superior deferred rendering technology will definitely help most of the time. I still don't think multiplying the fillrate at 3x is at all fair, but if we do that we get 660MPixels/sec, about the same as PSP, but there's much more resolution to worry about here.
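A quick back-of-the-envelope sketch of the figures in that post. Everything here (pipe counts, the 110 MHz Pandora clock, the 3.0x overdraw factor) comes from the claims above, not from confirmed specs:

CODE
# Rough fillrate arithmetic for the numbers quoted above (assumptions, not
# official specs): one textured pixel per pipe per clock.

MHZ = 1_000_000

def raw_fillrate(pipes, clock_hz):
    return pipes * clock_hz

sgx530_marketing = raw_fillrate(2, 200 * MHZ) * 3.0  # 1.2 GPixel/s with 3x overdraw
sgx530_pandora   = raw_fillrate(2, 110 * MHZ)        # 220 MPixel/s raw
dreamcast        = raw_fillrate(1, 100 * MHZ)        # 100 MPixel/s raw
psp              = raw_fillrate(4, 166 * MHZ)        # ~664 MPixel/s raw (post quotes 667)

print(f"SGX530 marketing figure : {sgx530_marketing / 1e9:.1f} GPixel/s")
print(f"SGX530 @ 110 MHz (raw)  : {sgx530_pandora / 1e6:.0f} MPixel/s")
print(f"Dreamcast @ 100 MHz     : {dreamcast / 1e6:.0f} MPixel/s")
print(f"PSP @ 166 MHz           : {psp / 1e6:.0f} MPixel/s")

# Resolution difference: 800x480 is 25% more pixels than Dreamcast's 640x480.
print(f"Pixel-count ratio: {800 * 480 / (640 * 480):.2f}x")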
 
The Pandora's GPU: More than enough to shake a Memory Stick at.

well OK, lame joke ^^
how about:

what's the best PS2 emulator right now?
near-100% compatibility, full-speed..
costs around £300 in hardware...

...

the PS3!! ;)
 
Exophase said:
Thought I would add something to this:

PowerVR advertises that SGX530 can render 1.2GPixels/s at 200MHz. As far as I'm concerned this is more or less an outright lie. There are two texture units/pipelines, so the actual raw number that it can put out is 400MPixels/s, which is then multiplied by 3 because an overdraw rate of 3.0 is assumed. This number might apply to some popular software, but I don't really think it's fair to quote fillrate based around that.

Anyway... as a comparison, Dreamcast uses similar technology and runs at 100MHz, but it only has one texture unit. So it has a fillrate of 100MPixels/s, and games routinely render to 640x480. As far as we're aware, Pandora's GPU will run at 110MHz, so it will have a fillrate of 220MPixels/s. 800x480, if all of it is rendered to, is 25% more pixels than 640x480. The other technology (triangle setup and especially programmable shaders) should be similarly much better on SGX530, so I think it's probably realistic to expect something that's at least moderately better than Dreamcast.

In terms of fillrate this isn't going to be a PSP killer, not even close (PSP appears to have 4 texture units, 667MPixels/s at 166MHz) but the superior deferred rendering technology will definitely help most of the time. I still don't think multiplying the fillrate at 3x is at all fair, but if we do that we get 660MPixels/sec, about the same as PSP, but there's much more resolution to worry about here.
I think the number may be true; Imagination Technologies have always based their specs on what they think is actually doable, unlike Sony:
QUOTE

Can the PSP GPU pull off 33 million poly/sec? Yes. Same as the PS2 could pull off 75 million poly/sec.

By that I mean it's more Sony B.S. The PS2 could never pull off that number in-game, with all textures, lighting effects, etc. 75 million was just pie-in-the-sky floating, textureless polys without light sources. Same deal with the PSP's purported 33 million poly/sec spec. In-game, it's far, far, far less.

:rolleyes:
QUOTE

PowerVR's solutions, on the other hand, were always downplayed, much like Nintendo's estimates of the GameCube's power. Imagination Technologies (or whatever the outfit is called now) bases their specs on what they think is actually doable. The Dreamcast was listed as being able to push only 2.5-3 million poly/sec in-game, with all effects. There are quite a few Dreamcast games that push over twice that number.
 
QUOTE
Anyway... as a comparison, Dreamcast uses similar technology and runs at 100MHz, but it only has one texture unit. So it has a fillrate of 100MPixels/s, and games routinely render to 640x480. As far as we're aware, Pandora's GPU will run at 110MHz, so it will have a fillrate of 220MPixels/s. 800x480, if all of it is rendered to, is 25% more pixels than 640x480. The other technology (triangle setup and especially programmable shaders) should be similarly much better on SGX530, so I think it's probably realistic to expect something that's at least moderately better than Dreamcast.
I agree, that's probably a good comparison.

QUOTE
PSP appears to have 4 texture units, 667MPixels/s at 166MHz
That's flat shading. You can cut that in half (at least) if you're actually texturing something. Interesting, but I don't think these numbers hold much water when we're dealing with two fairly different rendering approaches, TBDR vs. IMR. From my experience, IMRs tend to have very high peak throughput and much lower real-world throughput; the SGX may scale better in this environment.

QUOTE
I still don't think multiplying the fillrate at 3x is at all fair, but if we do that we get 660MPixels/sec, about the same as PSP, but there's much more resolution to worry about here.
Eh? I don't really think you can hold the screen's higher resolution against the GPU, especially when the physical screen sizes are virtually identical. Nearest-pixel upscaling would likely be relatively free.

In terms of throughput, I would hazard a guess that the SGX @ 110MHz is slightly below the PSP, but the shader support and other advanced features (better texture compression, deferred rendering, MRT, on-chip anti-aliasing, stencil & Z-buffer, etc.) may push it ahead when you consider that devs can more effectively use the available fillrate (kind of the whole idea behind the PowerVR architecture). Per mW, though, I think the PSP gets its arse handed to it...

NB: Purely my opinion/speculation. Stay back, trolls, or I'll use my +4 Mouse of Vengeance. :)
 
It would be nice to be able to compare them (PSP GPU and Pandora GPU); however, the tangible things that we can compare, such as Quake 3 or N64 emulation, will show the Pandora dominating because of the massive advantage of its CPU.
 
Exophase said:
PowerVR advertises that SGX530 can render 1.2GPixels/s at 200MHz. As far as I'm concerned this is more or less an outright lie. There are two texture units/pipelines, so the actual raw number that it can put out is 400MPixels/s, which is then multiplied by 3 because an overdraw rate of 3.0 is assumed.

Aren't you mixing pixels/s and texels/s? Perhaps they are quoting 1.2 Gpixel/s @ 200 MHz because SGX530 has 6 pixel shaders?

BTW, the limiting factor on the OMAP3530 could well be the memory bandwidth, which will at the very most be 166 MHz * 2 * 4 = 1.3 GB/s. If one uses 16 bits/pixel, the maximum fillrate would be ~600 Mpixel/s.
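A quick sanity check of that arithmetic (the 166 MHz DDR rate and 32-bit bus width are the assumptions from the post above):

CODE
# Peak memory bandwidth and the resulting 16 bpp write limit, per the post above.

bus_bytes_per_sec = 166_000_000 * 2 * 4        # DDR on a 4-byte bus: ~1.33 GB/s peak
pixels_per_sec_16bpp = bus_bytes_per_sec / 2   # 2 bytes/pixel, if every byte went
                                               # to framebuffer writes
print(f"Peak bandwidth     : {bus_bytes_per_sec / 1e9:.2f} GB/s")        # ~1.33
print(f"16 bpp write limit : {pixels_per_sec_16bpp / 1e6:.0f} Mpixel/s")  # ~664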
 
jbr said:
Emulation is not all about power! Read the FAQ! Just because a system is x times as powerful as another, it doesn't mean that it can be emulated at 1/x, or even 1/(10x), speed! PS2 emulation is impossible because no portable emulator exists, not because it's not powerful enough. (If one DID exist, though, I would imagine speed would be a bit of an issue.)
Wrong. If no portable emulator exists, you make one. However, one cannot be made for the Pandora simply because it is not powerful enough.
 
Laurent said:
Aren't you mixing pixels/s and texels/s? Perhaps they are quoting 1.2 Gpixel/s @ 200 MHz because SGX530 has 6 pixel shaders?
I haven't heard anything about the number of shaders (which are unified), but fillrate is fillrate; it's not pixel shader computational throughput. Based on what has been said on other forums, I fully expect that the fillrate figure comes from the texture units * overdraw.

Laurent said:
BTW, the limiting factor on the OMAP3530 could well be the memory bandwidth, which will at the very most be 166 MHz * 2 * 4 = 1.3 GB/s. If one uses 16 bits/pixel, the maximum fillrate would be ~600 Mpixel/s.
Right - since it's only capable of putting out 220 Mpixel/s, that doesn't come anywhere close to factoring in unless the other things attached to the bus are taking a big share of the bandwidth. I'm expecting the decent amount of L2 cache on the Cortex-A8 to leverage this at least more than would be the case in a typical handheld without any L2 cache (although I still think it's a shame there isn't any dedicated memory for the GPU aside from texture cache). At the very least, the tile-based rendering should mean that latency isn't a huge issue, I hope, so it probably won't waste a ton of bus cycles there.

BTW, can you give a good idea of exactly what the clock configuration is like - what's divided off of what? If the CPU is clocked at 900MHz, what else changes?
 
QUOTE
BTW, the limiting factor on the OMAP3530 could well be the memory bandwidth, which will at the very most be 166 MHz * 2 * 4 = 1.3 GB/s. If one uses 16 bits/pixel, the maximum fillrate would be ~600 Mpixel/s.
Although hopefully, with the awesome texture compression, you won't ever need to use 16-bit textures. Here's an example:

Original -> ETC 4bpp
[image comparison: original texture vs. ETC-compressed version]

PVRTC 2bpp -> PVRTC 4bpp
[image comparison: PVRTC 2bpp vs. PVRTC 4bpp versions]


Looks pretty good, although it may have just worked well with that particular texture.

NB: The original was a GIF, so the fact that I converted them to GIFs made very little difference.
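To give a rough sense of why those low bit rates matter for bandwidth and cache, here is a simple size comparison for a hypothetical 512x512 texture (the texture size is just an example; the bit rates are the standard ETC and PVRTC ones):

CODE
# Storage for a single 512x512 texture (no mipmaps) at various bit depths.

def texture_kib(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 1024

for label, bpp in [("RGBA8888 (32 bpp)", 32),
                   ("RGB565   (16 bpp)", 16),
                   ("ETC      ( 4 bpp)", 4),
                   ("PVRTC    ( 4 bpp)", 4),
                   ("PVRTC    ( 2 bpp)", 2)]:
    print(f"{label}: {texture_kib(512, 512, bpp):6.0f} KiB")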
 
Even if you do use 16-bit textures, it won't make that much of a difference in quality/performance anyway, considering the GPU has 32-bit internal true color (ITC). But texture compression definitely needs to be taken into account.
 
Knux said:
Even if you do use 16-bit textures, it won't make that much of a difference in quality/performance anyway, considering the GPU has 32-bit internal true color (ITC). But texture compression definitely needs to be taken into account.
But with so many things accessing the SDRAM, you really need to minimize fetches where possible. Hopefully the textures are stored compressed in the texture cache (unlike on the PSP...) so that it'll reduce the overall number of fetches that have to go outside of it.
 
QUOTE
Hopefully the textures are stored compressed in the texture cache (unlike on the PSP...) so that it'll reduce the overall number of fetches that have to go outside of it.
This appears to imply that the textures are stored compressed:
PowerVR SDK said:
A potential texture cache bottleneck can be identified by reducing the texture size, reducing the number of textures, changing the texture format (float to integer or compressed), and/or reducing the texture filter settings. As general advice, avoid performing very random texture accesses; these are especially expensive when doing dependent per-pixel perturbed texture reads. Highly random accesses will thrash the texture cache. Use mipmapped textures whenever possible. Also use compressed textures which will fit better in cache memory.
There's a MADD after the main texture cache which acts as a decompressor and level 0 cache.
 
I'm just catching up with this thread; it looks like lots of people have been reading up and running some simple calculations, and you seem to be on the money... i.e. SGX530 @ 200MHz = 2 pipes x 200MHz x overdraw = 1.2 Gigapixels/sec, and similarly that needs to be scaled down to the OMAP clock speed unless the graphics can be overclocked by that much ;)

From my experience, though, fillrate or memory bandwidth are not likely to be the bottlenecks, due to the TBDR architecture (though bandwidth might be stretched if you use uncompressed textures). In most games or advanced tests you are likely to be limited by the shaders, and any optimisations in the shader code are likely to yield big performance increases.

Bottom line, though: a TBDR will get much nearer the peak throughput figures given in the marketing documents than an IMR, which is why the performance per mW is so much higher and memory bandwidth is so much less of an issue.

Does anyone know what the shader performance of the PSP was quoted as? i.e. how many vertex and pixel shaders, presumably not unified?
 
Exophase said:
Thought I would add something to this:

PowerVR advertises that SGX530 can render 1.2GPixels/s at 200MHz. As far as I'm concerned this is more or less an outright lie. There are two texture units/pipelines, so the actual raw number that it can put out is 400MPixels/s, which is then multiplied by 3 because an overdraw rate of 3.0 is assumed. This number might apply to some popular software, but I don't really think it's fair to quote fillrate based around that.

Anyway... as a comparison, Dreamcast uses similar technology and runs at 100MHz, but it only has one texture unit. So it has a fillrate of 100MPixels/s, and games routinely render to 640x480. As far as we're aware, Pandora's GPU will run at 110MHz, so it will have a fillrate of 220MPixels/s. 800x480, if all of it is rendered to, is 25% more pixels than 640x480. The other technology (triangle setup and especially programmable shaders) should be similarly much better on SGX530, so I think it's probably realistic to expect something that's at least moderately better than Dreamcast.

In terms of fillrate this isn't going to be a PSP killer, not even close (PSP appears to have 4 texture units, 667MPixels/s at 166MHz) but the superior deferred rendering technology will definitely help most of the time. I still don't think multiplying the fillrate at 3x is at all fair, but if we do that we get 660MPixels/sec, about the same as PSP, but there's much more resolution to worry about here.
I think I should probably answer this as well, really...

I'm afraid a TBDR is not well suited to standard metrics of graphics performance... Fillrate makes perfect sense for an IMR and is a good measure of real-world performance (obviously ignoring other potential bottlenecks), but for a TBDR, how do you measure fillrate? You can measure either:
1) the number of visible pixels that can be drawn per second;
2) the number of visible pixels that can be drawn, multiplied by some factor of overdraw; or
3) the maximum number of pixels per second that can be resolved by z-testing.

Clearly the third option would yield a ridiculously high fillrate, but that wouldn't be realistic and wouldn't represent real-world performance unless you had a vast amount of overdraw. Similarly, option 1 would be far too pessimistic and also wouldn't represent real-world performance. So I'm afraid option 2 is the only sensible one; I definitely agree it's not nice to have to resort to a magic "factor" in the calculation, but this is the only way to get accurate and representative figures, I'm afraid.
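A tiny sketch to make the difference between those definitions concrete, assuming the figures used earlier in the thread (2 pipes at 110 MHz, a hypothetical scene with 3.0x overdraw):

CODE
# Illustration of the "fillrate" definitions above, under assumed numbers:
# 2 texture pipes at 110 MHz and a hypothetical scene with 3.0x overdraw.

visible_rate = 2 * 110_000_000          # 1) visible pixels shaded per second
overdraw = 3.0
quoted_rate = visible_rate * overdraw   # 2) submitted pixels under 3x overdraw
# 3) pixels resolved purely by depth testing in the tiler would be far higher
#    still, but isn't a useful real-world figure, so it's left out here.

print(f"Visible-pixel rate : {visible_rate / 1e6:.0f} Mpixel/s")  # 220
print(f"With 3.0x overdraw : {quoted_rate / 1e6:.0f} Mpixel/s")   # 660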
 