I Want To Compare The Pandora Video Card With A PC Video Card


renejr902

I want to compare the Pandora graphics card with a PC graphics card.

I am a PC gamer, so I know most video cards since the beginning of the 3D era.

So can you tell me which video card is similar to the Pandora's?

I will offer some choices, but don't hesitate to suggest your own. I want the truth.

I know the Pandora video card supports OpenGL ES 2.0, so it must be good.

1. Voodoo 1 4MB
2. Matrox Mystique 4MB
3. Voodoo 2 8MB
4. Voodoo Banshee 16MB
5. ATI Rage Pro 8MB
6. ATI Rage 128
7. TNT 1
8. TNT 2
9. GeForce 1
10. Diamond Stealth II S220 4MB
11. Savage 4 8MB
12. Savage 3D 2MB
13. GeForce GTX 295
14. Radeon
15. The Dreamcast video card (PowerVR2)
16. ATI Radeon 9000
17. ATI Radeon 7000
18. N64 video card

I know I'm dreaming LOL, but try to find a similar PC video card in terms of PERFORMANCE, if any exists.


I have a second question: how much RAM does the Pandora video card have? 2MB? 4MB? 8MB? With or without compressed textures?

Thanks for answering.
 
The Pandora has 256MB of RAM shared between the CPU and GPU.
It does support some kind of compressed textures, but they're proprietary and confusing. They probably expect me to make a developer account and download some kind of 'tool' that compresses the textures. Dicks.

Now, I heard "GeForce 6600" somewhere, but I can't remember in what context. Maybe it's just a bit under that?
I don't think you can directly compare it to a PC card, since it has newer shading features, but the texture throughput is probably held back by the shared RAM.
Anyway, I don't know much about it, but the first sentence up there is accurate. :/
 
Wow, I'm very surprised. A GeForce 6600 is a very strong video card. Even a GeForce 6200 would be awesome.

I really thought it had the performance of a Voodoo 2 or 3 at best, or a Radeon 7000/7500, or a GeForce 1 or 2.

But a GEFORCE 6600??? Nice to know. About shaders, I suppose it's very limited. Does it have Shader Model 2.0? 1.0?

I know the GeForce 4 MX series doesn't have any shaders, if I remember correctly.

I'll say it again: I'M VERY SURPRISED. GeForce 6600????? Surely it's a bit under that.
 
I literally just heard that somewhere; I can't even remember if it was "No, the Pandora is way below that" or "The Pandora is about on par".

But OpenGL ES 2 is completely shader-based, so I think it will have better shaders than some older PC cards, but not as much texture or vertex throughput.
For example, here's a shader demo running on OMAP3 hardware:

http://www.youtube.com/watch?v=24TXpqa9jG0

It doesn't say exactly what hardware is running that, but it's an OMAP3, so the Pandora should be able to do cool shading effects like that, even though by other measures it's worse than some older PC cards.

http://www.youtube.com/watch?v=wnr4HCIoFlk&feature=related

This one is running on actual Pandora hardware. We're definitely going to have strong shader support on Pandora, if nothing else.
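
To give an idea of what "completely shader-based" means in practice: in GL ES 2 there is no fixed-function pipeline at all, so even a flat-colored triangle needs a vertex and a fragment shader. Here's a minimal sketch (standard GL ES 2 API and GLSL ES, nothing Pandora-specific; the variable names are just for illustration):

Code:
#include <GLES2/gl2.h>

/* Even untextured, unlit geometry needs both of these in GL ES 2. */
static const char *vertex_src =
    "attribute vec4 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * a_position;\n"
    "}\n";

static const char *fragment_src =
    "precision mediump float;\n"
    "uniform vec4 u_color;\n"
    "void main() {\n"
    "    gl_FragColor = u_color;\n"
    "}\n";

static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s); /* real code would check GL_COMPILE_STATUS */
    return s;
}

All the fancy per-pixel effects in those videos are just bigger versions of that fragment shader.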
 
PVRTC is not that proprietary. You can find public explanations of how it works, and it isn't really that complicated. I'm sure someone will make their own encoder, if they're at all interested.
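
And the GL side of it is trivial anyway; the only "proprietary" step is producing the compressed bits offline. A rough sketch of uploading ready-made 4bpp PVRTC data (the enum and the size formula come from the GL_IMG_texture_compression_pvrtc extension spec; the upload_pvrtc4 wrapper is a hypothetical name of mine):

Code:
#include <GLES2/gl2.h>

/* From the GL_IMG_texture_compression_pvrtc extension. */
#define GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG 0x8C02

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Hand pre-compressed 4bpp PVRTC data (made offline by any encoder,
 * IMG's or a homebrew one) straight to the driver.  Width and height
 * must be powers of two. */
void upload_pvrtc4(int w, int h, const void *bits)
{
    /* Data size per the extension spec; blocks cover at least 8x8. */
    GLsizei size = (MAX(w, 8) * MAX(h, 8) * 4 + 7) / 8;
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                           w, h, 0, size, bits);
}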

I very sincerely doubt GeForce 6600 level. The very worst GF6600 (the LE) has 4 pixel pipelines (presumably each with a Z-unit, TMU, pixel shader, and ROP) and 3 vertex shaders, a 128-bit interface to 500MHz GDDR3 RAM, and a 300MHz core clock. From what I understand, the shaders can operate on four 32-bit floating point components simultaneously (32-bit x4 SIMD) - someone can correct me if I'm wrong.

SGX, on the other hand, has 2 TMUs/ROPs and 2 unified shaders, with a 110MHz core clock and a 32-bit interface to 166MHz DDR RAM (yes, 166MHz should be the speed it interfaces at, not 110MHz). The shaders can only operate on one float at a time, but can operate on at least 3 color components simultaneously if they're 10-bit. It's unclear to me whether or not it can do 4 (meaning it'd have 40-bit registers instead, which sounds familiar for some reason...)

There's no question that the TBDR architecture saves on both memory bandwidth and fillrate, and the USSE architecture also has very nice latency-hiding techniques. But that can't overcome the sheer difference in raw power we're looking at, especially if what's most interesting is raw pixel shader throughput. SGX gets some leverage by allowing vertex shading to go lightly utilized or entirely unutilized, performing it on the CPU instead. But even if you fully utilize the 2 USSEs for pixel shading, you're still at a heavy disadvantage against 4 vector shaders running at nearly 3x the clock speed, even if the IMR-based renderer has to shade 3x as many pixels due to overdraw. Now, what you'll be seeing (what IMG, Apple, etc. are strongly encouraging) are applications using 10-bit fixed point color components instead of 32-bit floats, where the USSEs will at least be able to do a more similar amount of work per cycle. But then it'll no longer be a fair comparison, since you'll be seeing a reduction in image quality.
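
To put rough numbers on that (back-of-envelope only, straight from the figures above; the 3x overdraw factor is the same assumption as in the previous paragraph, not a measurement):

Code:
#include <stdio.h>

int main(void)
{
    /* Theoretical peak pixel rates from the figures above. */
    double gf6600le = 4 * 300e6;  /* 4 pipelines x 300MHz = 1200 Mpix/s */
    double sgx530   = 2 * 110e6;  /* 2 TMUs/ROPs x 110MHz =  220 Mpix/s */

    /* TBDR shades each visible pixel once; assume the IMR wastes
     * roughly 3x on overdraw. */
    double overdraw = 3.0;

    printf("raw ratio:      %.1fx\n", gf6600le / sgx530);              /* ~5.5x */
    printf("after overdraw: %.1fx\n", gf6600le / (sgx530 * overdraw)); /* ~1.8x */
    return 0;
}

So even granting the SGX its full TBDR advantage, the 6600 LE still comes out well ahead on paper.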

Anyway, in spite of all that, this list looks surprisingly straightforward to enumerate. For the most part.

SGX is easily better than:

1. Voodoo 1 4MB
2. Matrox Mystique 4MB
3. Voodoo 2 8MB
4. Voodoo Banshee 16MB
5. ATI Rage Pro 8MB
6. ATI Rage 128
7. TNT 1
8. TNT 2
9. GeForce 1
10. Diamond Stealth II S220 4MB
11. Savage 4 8MB
12. Savage 3D 2MB
15. The Dreamcast video card (PowerVR2)
17. ATI Radeon 7000
18. N64 video card

I dropped "Radeon" because that's totally ambiguous.

The GeForce GTX 295 is a joke, right? I hope it's a joke? Obviously the SGX would get torn into a million little pieces.

Radeon 9000 is the only tough one. If you limited the SGX to doing things exactly as the Radeon does and no more, then I think the Radeon could probably win, but it'd be so crippled that it's really far from a fair comparison. Given the lower resolution expected, and the TBDR, I think that even if the SGX has less brute-force power it can still probably deliver better looking games without much problem.
 
fischju2000 said:
I also believe I heard it compared to a GeForce 4 MX in the chip's lowest spec, bandwidth, but ~GeForce 6600 for the calculations it's best at.

If the Pandora's video capabilities are GeForce 4 fillrates mixed with the shaders of a 6600, that could make for some decent quality games. I've seen videos of Quake 3 running at something like 50 FPS or so; I'd be interested in seeing where this goes.
 
Thanks Exophase for another great answer, even if I only understand 30% of it LOL. You're really a genius. I've been reading your posts for a year and I'm really impressed; you answer every topic with a lot of detail. I understand now how you got such great performance out of your GBA emulator.

Anyway, if the Pandora video card is better than a GeForce 1 and not far from a Radeon 9000 or GeForce 4 MX, I'm really impressed for a small device like this. Wow, impressive.

And yes, the GeForce GTX 295 was a joke.

It's a surprising day for me today. I really thought the Pandora video card was similar in performance to a Voodoo or TNT card, with few shaders.

I suppose the performance is similar to a GeForce 6200, and that's great enough. Even the 6200 can run The Elder Scrolls IV: Oblivion on a PC :eek: LOL
 
Off topic, but I remember when I was a young'un and my Voodoo3 arrived. I was so excited, OMG.
 
Exophase said:
SGX is easily better than:

1. Voodoo 1 4MB
2. Matrox Mystique 4MB
3. Voodoo 2 8MB
4. Voodoo Banshee 16MB
5. ATI Rage Pro 8MB
6. ATI Rage 128
7. TNT 1
8. TNT 2
9. GeForce 1
10. Diamond Stealth II S220 4MB
11. Savage 4 8MB
12. Savage 3D 2MB
15. The Dreamcast video card (PowerVR2)
17. ATI Radeon 7000
18. N64 video card

Whoa, easily better than a TNT 1, TNT 2, and a GeForce 1? :eek: I was thinking it was around the same power as a TNT 1, which I had when Quake 3 came out, and which ran it at full graphics at around 40~60fps @ 640x480 in 16-bit color :eek: Not quite what I'm seeing with Quake 3 on the SGX (some videos show it using vertex lighting instead of lightmaps, texture detail a notch away from highest, and geometric detail on medium; basically the default Q3 settings) with an fps around 30.

The TNT 1 has a 90MHz core, too. Well, this sure is good news! It means Quake 3 isn't anywhere near 100% optimised for the SGX yet, right? :p
 
It is practically IMPOSSIBLE to compare with PC hardware.

First of all, the SGX is built for mobile devices, meaning it focuses on performance per watt instead of raw performance, and the microarchitecture is very different for that reason. There is no DirectX support (so no Shader Models) as far as I know. The amount of RAM doesn't mean much, because texture sizes are so much smaller on the weaker core.

There are a few things I'd like to know about the SGX and the Pandora: memory bandwidth, clocks and bus width; the GPU core clock and details of the core; what kind of FPU/vector performance can be expected and how the microarchitecture works. Also some words about the drivers.

Frankly, I haven't studied this at all myself, just popped into this thread. So yeah, off to have a great Google-fest now. :)
 
It's perfectly possible to compare speeds to desktop cards: simply run games that exist on both architectures and compare the numbers. That'll give you an idea, and that's what the original poster was looking for. FPS is always comparable to FPS.
 
Enverex said:
It's perfectly possible to compare speeds to desktop cards: simply run games that exist on both architectures and compare the numbers. That'll give you an idea, and that's what the original poster was looking for. FPS is always comparable to FPS.
Nope. If you just go by FPS, you're ignoring screen sizes, memory constraints, and memory speeds. FPS is a horrible performance assessment tool, and you should never use it. This is not just my personal opinion; there have been multiple scientific articles about it :p
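
One concrete problem (my own illustration, not taken from those articles): if you average FPS samples directly, fast scenes drown out slow ones. You have to average frame times and invert:

Code:
#include <stdio.h>

int main(void)
{
    /* Two halves of a benchmark: 100 frames at 60 FPS, 100 at 10 FPS. */
    double fps[2]   = { 60.0, 10.0 };
    double mean_fps = (fps[0] + fps[1]) / 2.0;  /* 35 FPS -- looks fine */

    /* Same data averaged as frame times. */
    double mean_time = (1.0 / fps[0] + 1.0 / fps[1]) / 2.0;

    printf("naive mean FPS:      %.1f\n", mean_fps);         /* 35.0  */
    printf("true (harmonic) FPS: %.1f\n", 1.0 / mean_time);  /* ~17.1 */
    return 0;
}

Two cards can report the same "average FPS" and feel completely different to play.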
 
How much is known about the SGX 530? Is there a chance that it's an SGX 535 with 2 pixel pipelines turned off and a lower clock?
 
greendots said:
How much is known about the SGX 530? Is there a chance that it's an SGX 535 with 2 pixel pipelines turned off and a lower clock?
Wouldn't matter, since the driver is closed source.

But oh man, it would be wonderful if it wasn't, and if we were able to utilize those pipelines (assuming, of course, that there actually *is* a 535 inside)...
 
I would think the SGX 535 would perform similarly to a 6200 LE/TC, with programs designed and optimized specifically for it.

In other words, graphics at the level of KOTOR could in theory be possible. :)

This is mainly because most OpenGL/DX computer games use 50% of their processing power for the basic stuff and 50% for some fog or whatever. :p Basically, a feature deemed necessary saps away performance, and with no option for the user to upgrade the GPU, the developer can't make that call.

But we have the SGX 530, which is half as fast?
 
dflemstr said:
Enverex said:
It's perfectly possible to compare speeds to desktop cards: simply run games that exist on both architectures and compare the numbers. That'll give you an idea, and that's what the original poster was looking for. FPS is always comparable to FPS.
Nope. If you just go by FPS, you're ignoring screen sizes, memory constraints, and memory speeds. FPS is a horrible performance assessment tool, and you should never use it. This is not just my personal opinion; there have been multiple scientific articles about it :p

That's assuming the same situation for both cards (same quality settings, screen mode, etc.). 15fps in a game is 15fps on any other device; getting 15fps on a card that is "theoretically better" than another card won't make it any more playable, it's still 15fps. That's why you bench multiple games that use different functions, to get an overview of how the card handles different things, rather than using a single game as a reference. Therefore, for the end user, FPS is an excellent benchmark, as it tells them how well the device will run that app (and similar ones).
 
Enverex said:
dflemstr said:
Enverex said:
It's perfectly possible to compare speeds to desktop cards: simply run games that exist on both architectures and compare the numbers. That'll give you an idea, and that's what the original poster was looking for. FPS is always comparable to FPS.
Nope. If you just go by FPS, you're ignoring screen sizes, memory constraints, and memory speeds. FPS is a horrible performance assessment tool, and you should never use it. This is not just my personal opinion; there have been multiple scientific articles about it :p

That's assuming the same situation for both cards (same quality settings, screen mode, etc.). 15fps in a game is 15fps on any other device; getting 15fps on a card that is "theoretically better" than another card won't make it any more playable, it's still 15fps. That's why you bench multiple games that use different functions, to get an overview of how the card handles different things, rather than using a single game as a reference. Therefore, for the end user, FPS is an excellent benchmark, as it tells them how well the device will run that app (and similar ones).
Except that you would need the same hardware on both machines in the comparison: exactly the same CPU, the same RAM chips (crucial with integrated GPUs), the same software suite. Everything should be the same except the GPU.

It wouldn't make much of a difference if you slapped dual Nvidia GTX 285s into the Pandora; the FPS wouldn't go up, because the CPU would be holding it back. A sub-1GHz ARM-based CPU certainly isn't going to be faster than a 1GHz-class x86 CPU (an AMD Duron 'Spitfire' at 950MHz held back my old GeForce 4 MX 4000, an NV18b core, basically a GeForce 2 with a few tweaks and increased clock speeds).

In other words, I'd say that IF the GPU is anywhere near GeForce 2 speeds, it will be way faster than it "should" be, in the sense that the CPU will be the bottleneck in graphics-heavy tasks.

A reminder: the Pandora runs at 800x480, which means much more raw power per pixel than on typical PCs. 1024x768 was pretty much the standard back in the GeForce 2 days, and 800x480 is only 384,000 pixels against 786,432 at 1024x768, so roughly half the pixels to fill.

The problem/bottleneck with the SGX will most probably be the memory bandwidth rather than the raw GPU power. But yeah, I have no clue about the chip; Google says absolutely nothing about it, really.
 
I agree with that; you would need a "same spec" base machine to compare it if you were benching the whole machine. What I meant was that if you theoretically had the same base machine (that being the Pandora itself), then where would it slot in? That's pretty much impossible to work out, though. The memory will be an issue, due to it being shared and not GDDR3-backed with a multi-core CPU like everything else these days, but hopefully it'll be good enough when used to its full potential.
 