Mupen64Plus


http://www.youtube.com/watch?v=ZggGMLBr2Oo
Running on an iPAQ (a different ARM-based system) using software rendering, but it proves that it works
 
WizardStan said:
http://www.youtube.com/watch?v=ZggGMLBr2Oo
Running on an iPAQ (a different ARM-based system) using software rendering, but it proves that it works
Alrighty :) Seems a lot of optimisations need to be done, though.
 
borgqueenx said:
WizardStan said:
http://www.youtube.com/watch?v=ZggGMLBr2Oo
Running on an iPAQ (a different ARM-based system) using software rendering, but it proves that it works
Alrighty :) Seems a lot of optimisations need to be done, though.
That's a freaking IPAQ... There probably aren't that many optimizations you could do...

On the Pandora, on the other hand, the emulator will have access to the SGX, so there things will go a lot faster (they probably already do).
 
dflemstr said:
borgqueenx said:
WizardStan said:
http://www.youtube.com/watch?v=ZggGMLBr2Oo
Running on an iPAQ (a different ARM-based system) using software rendering, but it proves that it works
Alrighty :) Seems a lot of optimisations need to be done, though.
That's a freaking IPAQ... There probably aren't that many optimizations you could do...

On the Pandora, on the other hand, the emulator will have access to the SGX, so there things will go a lot faster (they probably already do).
Oh yeah, I was stupid to overlook that the iPAQ used software rendering. I also heard that was bad.
Sorry!
 
.... lol, IPAQ.

My high school had so many of those sitting around, but nobody wanted them.
 
dflemstr said:
That's a freaking IPAQ...
I was going to reply with a post starting with those exact words before I realized there was a 20th page :p. The iPAQ actually performs better than I expected.
 
borgqueenx said:
dflemstr said:
borgqueenx said:
WizardStan said:
http://www.youtube.com/watch?v=ZggGMLBr2Oo
Running on an iPAQ (a different ARM-based system) using software rendering, but it proves that it works
Alrighty :) Seems a lot of optimisations need to be done, though.
That's a freaking IPAQ... There probably aren't that many optimizations you could do...

On the Pandora, on the other hand, the emulator will have access to the SGX, so there things will go a lot faster (they probably already do).
Oh yeah, I was stupid to overlook that the iPAQ used software rendering. I also heard that was bad.
Sorry!

No offense, but you should really stay out of the development thread. If you want to discuss Ari64's work, do so in Pandora General.
 
I tried v5 of the SVN gles2N64 and linked directly to the EGL lib; it still gave that 2D PVR memory linking error with the GLSL compiler lib. I could add back in my changes to link with dlsym; that seems to help get around this issue.
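
For reference, the dlsym workaround is basically this (a minimal sketch; the library name and which symbols you actually need on the Pandora's PVR driver are assumptions):

Code:
/* Resolve GLES2 entry points at runtime instead of linking against
 * libGLESv2 directly, so the problematic GLSL compiler lib is only
 * pulled in through dlopen(). */
#include <dlfcn.h>
#include <stdio.h>

typedef unsigned int GLuint;
typedef void (*glCompileShader_fn)(GLuint shader);

static glCompileShader_fn p_glCompileShader;

static int load_gles2(void)
{
    void *handle = dlopen("libGLESv2.so", RTLD_NOW | RTLD_GLOBAL);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    p_glCompileShader = (glCompileShader_fn)dlsym(handle, "glCompileShader");
    return p_glCompileShader ? 0 : -1;
}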
 
Adventus said:
I had a look into porting one of the various plugins. I don't think we will get great performance with a direct port, so all those expecting a video of full-speed N64 should have some patience. Looking at Glide64, I'm surprised it works as well as it does even on x86; it seems to begin/end render a lot of geometry a triangle at a time... like this:

for (i = 0; i < num; i++) {
    glBegin(GL_TRIANGLES);
    glTexCoord2f(...); glVertex4f(...);
    glTexCoord2f(...); glVertex4f(...);
    glTexCoord2f(...); glVertex4f(...);
    glEnd();
}

This is generally very inefficient and I can see no reason why it is necessary. It also does most vertex transformations in software instead of using the modelview matrix, etc. It does, however, do some nice things; for instance, the Glide wrapper dynamically generates fragment shaders to emulate blend modes. I actually compiled Glitch64 (the Glide wrapper) with my OGL wrapper, but I'm currently just getting a black screen.

My biggest worry is that we'll hit some crucial feature which is simply not present in OGLES 2.0 or EGL; I've already hit a few features in Glitch which aren't supported. I think it might be better if I try my wrapper on rice_video or something a little better written. Either way, it probably won't be full speed until a native port is done.


Excuse me but:

* If you haven't noticed, Glitch64 is a Glide3x to OpenGL 1.5 wrapper. There is a reason why things are done the way they are done. If you have optimization ideas, I would love to hear how you will do things.

*
Code:
for (i = 0; i < num; i++) {
    glBegin(GL_TRIANGLES);
    glTexCoord2f(...); glVertex4f(...);
    glTexCoord2f(...); glVertex4f(...);
    glTexCoord2f(...); glVertex4f(...);
    glEnd();
}

Is this referring to the actual plugin or the wrapper? I had ZERO part in the actual plugin, so don't you start on me, please.
* Blame ziggy for those modelview matrix things with render-to-texture; I had no part in the rewrite to use glCopyTexImage2D.
* For unsupported things? You can omit FBO support or NPOT texture support, if you really want to...
* The main developer DOES NOT want to do an OpenGL 2.0 port. And I don't want to either, since people will bitch because they have a major issue with me.

There is a simple reason why I am so upset and critical: I consider attacks on things I worked on non-stop for a couple of years to be the equivalent of an attack on my sexuality. So there.

Maybe that's also a reason why I am universally despised...

Anyway, nice work on the glN64 port. That's a better porting target anyway, unless you love the idea of a rewrite of Glide64.
 
* If you haven't noticed, Glitch64 is a Glide3x to OpenGL 1.5 wrapper. There is a reason why things are done the way they are done. If you have optimization ideas, I would love to hear how you will do things.
Yes, I know; perhaps I was a bit harsh. I also tend to say stupid things.

Is this referring to the actual plugin or the wrapper?
That appears to be the plugin's fault. I was logging the commands for a while and there was a huge number of grDrawTriangle calls without any state changes in between.

If you have optimization ideas, I would love to hear how you will do things.
If they're supported on the hardware you're targeting, I would suggest using vertex arrays to buffer the vertices until a state change. The same applies to the grDrawVertexArray() stuff: convert it to an OGL format, then render. Some drivers might be good at how you're doing it, but I wouldn't count on it.
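
Roughly what I have in mind, as an untested sketch (the buffer size and vertex layout here are made up; real Glide vertices carry more attributes):

Code:
/* Accumulate triangles from grDrawTriangle() into client-side vertex
 * arrays and flush them with a single glDrawArrays() call whenever the
 * Glide state changes. */
#include <GL/gl.h>

#define MAX_BATCH_VERTS 3072   /* multiple of 3 so a flush never splits a triangle */

static GLfloat batch_pos[MAX_BATCH_VERTS * 4];   /* x, y, z, w */
static GLfloat batch_uv [MAX_BATCH_VERTS * 2];   /* s, t       */
static int     batch_count = 0;

static void flush_batch(void)
{
    if (batch_count == 0)
        return;
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, batch_pos);
    glTexCoordPointer(2, GL_FLOAT, 0, batch_uv);
    glDrawArrays(GL_TRIANGLES, 0, batch_count);
    batch_count = 0;
}

/* Called once per vertex from grDrawTriangle(); nothing is drawn yet. */
static void queue_vertex(float x, float y, float z, float w, float s, float t)
{
    if (batch_count == MAX_BATCH_VERTS)
        flush_batch();
    batch_pos[batch_count * 4 + 0] = x;
    batch_pos[batch_count * 4 + 1] = y;
    batch_pos[batch_count * 4 + 2] = z;
    batch_pos[batch_count * 4 + 3] = w;
    batch_uv [batch_count * 2 + 0] = s;
    batch_uv [batch_count * 2 + 1] = t;
    batch_count++;
}

Every state change (texture bind, blend mode, ...) would call flush_batch() first so the rendering order stays correct; on GLES2 the same idea works with glVertexAttribPointer() instead of the fixed-function pointers.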

* For unsupported things? You can omit FBO support or NPOT texture support, if you really want to...
I believe OpenGL ES 2.0 has FBO and NPOT support, at least in a partial sense. It's the depth buffer write/read that I cannot support.

There is a simple reason why I am so upset and critical: I consider attacks on things I worked on non-stop for a couple of years to be the equivalent of an attack on my sexuality. So there.
Haha :) fair enough.
 
It's the depth buffer write/read that I cannot support.

Ah, yeah, that can be an issue. Not an easy fix either for the FBO depth buffer support, which needs to happen. It's bad enough that ATI cards have issues with the glCopyTexImage2D copy path, let alone the inherent weaknesses FBOs have with depth.

That appears to be the plugin's fault. I was logging the commands for a while and there was a huge number of grDrawTriangle calls without any state changes in between.

In that case, feel free to tell Gonetz. He is the one doing those on his end.

Haha :) fair enough.

Sorry if I came off as harsh in the post I made; I tend to be quite critical when someone criticizes what I do, even if I didn't do it. In time, you'll understand :)

If they're supported on the hardware you're targeting, I would suggest using vertex arrays to buffer the vertices until a state change. The same applies to the grDrawVertexArray() stuff: convert it to an OGL format, then render. Some drivers might be good at how you're doing it, but I wouldn't count on it.

...and that's the problem. We want consistency. Though it's bad enough we have to do texture format hacks since ATI drivers completely suck, whereas Nvidia cards work as expected.
 
Adventus said:
I believe OpenGL ES 2.0 has FBO and NPOT support, at least in a partial sense. It's the depth buffer write/read that I cannot support.

How come? I thought OGL ES2 did support this. If you're worried about the SGX not supporting it, it does, although it's of course less efficient than the normal mode of operation.
 
How come? I thought OGL ES2 did support this. If you're worried about the SGX not supporting it, it does, although it's of course less efficient than the normal mode of operation.
Yeah, I'm a bit unsure about this. One thing I know is that OGLES2 doesn't support writing to the depth buffer in the fragment shader; it also doesn't support glReadPixels() with GL_DEPTH_COMPONENT. However, the raw FBO support is actually quite good (assuming there aren't any bugs). I'm not sure how critical these things are for Glide64.
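
For what it's worth, the FBO path I mean is just the standard GLES2 render-to-texture setup, something like this (error checking omitted, formats and sizes illustrative):

Code:
#include <GLES2/gl2.h>

GLuint fbo, color_tex, depth_rb;

void create_fbo(int w, int h)
{
    /* Color target: an ordinary RGBA texture we can sample later. */
    glGenTextures(1, &color_tex);
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Depth goes into a renderbuffer: depth testing works while rendering
     * to the FBO, but core GLES2 gives no way to read it back or sample
     * it as a texture, which is exactly the limitation above. */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb);
}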

...and that's the problem. We want consistency. Though it's bad enough we have to do texture format hacks since ATI drivers completely suck, whereas Nvidia cards work as expected.
You might be thinking of vertex buffer objects... I think vertex arrays were added to the standard spec in OpenGL 1.1, and before that could be accessed via the EXT_vertex_array extension. They should be supported on pretty much everything. But it's your call; I cannot guarantee that it will be faster.

Once I get gles2n64 working on an actual Pandora, I might take another look into porting Glide64/Glitch64... it'll probably be a bit easier now that I've fixed some bugs in my wrapper.
 
One thing I know is that OGLES2 doesn't support writing to the depth buffer in the fragment shader; it also doesn't support glReadPixels() with GL_DEPTH_COMPONENT. However, the raw FBO support is actually quite good (assuming there aren't any bugs). I'm not sure how critical these things are for Glide64.

They are critical for hardware framebuffer emulation, especially the FBO/glCopyTexImage2D support and the depth writes it does with it. Some games render to the depth buffer for special effects (The Legend of Zelda: Majora's Mask and Resident Evil 2). Many more games render to texture for framebuffer effects. So it's pretty important for speed reasons on PCs. I'm not sure about the graphics bandwidth on your target, though (on the Xbox, there is no issue, due to unified RAM).
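
The copy path itself is nothing exotic; stripped down to the essentials it is basically this (desktop GL, illustrative only):

Code:
#include <GL/gl.h>

static GLuint fb_tex = 0;

/* After the scene is drawn, copy the current color buffer into a texture
 * that later frames can sample for framebuffer effects. */
void copy_framebuffer_to_texture(int w, int h)
{
    if (!fb_tex)
        glGenTextures(1, &fb_tex);
    glBindTexture(GL_TEXTURE_2D, fb_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Copies the lower-left w x h pixels of the read buffer. */
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, w, h, 0);
}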
 
mudlord said:
They are critical for hardware framebuffer emulation, especially the FBO/glCopyTexImage2D support and the depth writes it does with it. Some games render to the depth buffer for special effects (The Legend of Zelda: Majora's Mask and Resident Evil 2). Many more games render to texture for framebuffer effects. So it's pretty important for speed reasons on PCs. I'm not sure about the graphics bandwidth on your target, though (on the Xbox, there is no issue, due to unified RAM).

It's unified here. What could affect things is overhead due to the driver copying things and possibly changing formats.

When you say render to the depth buffer, do you mean that they do it mid-frame such that later primitives are affected by it, or just that they render to a framebuffer and then use it as the depth buffer the next frame? Emulating the latter should be pretty doable. The former would cause massive performance problems on the SGX even if it were allowed by OGL ES2.
 
Here's some info from Gonetz as to how he does Majora's Mask's Lens of Truth (which uses the depth buffer)....

How depth buffers with MM work with the LoT:

1. Render all visible objects
2. Copy depth buffer to another area
3. Render a full-screen rectangle with zero depth and a circle at the center, with alpha compare. The circle has zero alpha and all pixels inside it are rejected by alpha compare. Thus, we have a depth buffer with zero values everywhere outside the circle.
4. Render the objects which are supposed to be visible only in the LoT. Parts of the objects outside the circle are rejected by depth compare.
5. Copy depth buffer back
6. Render snow and other foreground objects.

To emulate it I use a regular frame buffer and 2 texture depth buffers. With the Glide3x API it is simple. First, a texture depth buffer is created and used until step 2. In step 2 it is copied (rendered) to another area. In step 5 it is copied (rendered) back from this area.

So he uses pure depth format framebuffer textures and then does RTT in that case. RE2 uses pure depth format sprites for backgrounds and the plugin renders them with HWFBE IIRC.

This is like how Crysis does screen-space ambient occlusion: it uses depth buffer textures created from the currently rendered frame. Many games use normal (non-depth) format textures in the same manner (TV screens, motion blur, HDR).
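
On a GLES2 target, step 3's alpha-compare rejection would have to be done with discard in the fragment shader instead; a rough illustration (names are made up, and color writes would be masked off with glColorMask):

Code:
/* Full-screen quad drawn at zero depth; fragments inside the lens circle
 * are discarded, so only the area outside the circle gets its depth
 * forced to zero. */
static const char *lens_mask_fs =
    "precision mediump float;\n"
    "varying vec2 v_pos;        /* screen-space position              */\n"
    "uniform vec2 u_center;     /* circle center, same units as v_pos */\n"
    "uniform float u_radius;\n"
    "void main() {\n"
    "    if (distance(v_pos, u_center) < u_radius)\n"
    "        discard;           /* inside the lens: keep old depth */\n"
    "    gl_FragColor = vec4(0.0);\n"
    "}\n";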
 
mudlord said:
Here's some info from Gonetz as to how he does Majora's Mask's Lens of Truth (which uses the depth buffer)....

How depth buffers with MM work with the LoT:

1. Render all visible objects
2. Copy depth buffer to another area
3. Render a full-screen rectangle with zero depth and a circle at the center, with alpha compare. The circle has zero alpha and all pixels inside it are rejected by alpha compare. Thus, we have a depth buffer with zero values everywhere outside the circle.
4. Render the objects which are supposed to be visible only in the LoT. Parts of the objects outside the circle are rejected by depth compare.
5. Copy depth buffer back
6. Render snow and other foreground objects.

To emulate it I use a regular frame buffer and 2 texture depth buffers. With the Glide3x API it is simple. First, a texture depth buffer is created and used until step 2. In step 2 it is copied (rendered) to another area. In step 5 it is copied (rendered) back from this area.

So he uses pure depth format framebuffer textures and then does RTT in that case. RE2 uses pure depth format sprites for backgrounds and the plugin renders them with HWFBE IIRC.

This is like how Crysis does screen-space ambient occlusion: it uses depth buffer textures created from the currently rendered frame. Many games use normal (non-depth) format textures in the same manner (TV screens, motion blur, HDR).

Couldn't this just be done with the stencil buffer?
 
Apparently the SGX doesn't support OES_depth_texture on the iPhone 3GS; maybe someone can confirm whether it does here >_> I hope it does later, because the hardware is capable of rendering depth buffers externally...

One thing that'd be especially nice is a way to do purely custom pixel formats in shaders. Then it'd be pretty doable to output depth buffers instead of color buffers, or both using a 64bpp framebuffer. Does anyone know if this is already possible somehow?
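
The usual workaround I've seen is to pack the depth value into an ordinary RGBA8 color target in the fragment shader and unpack it in a later pass; a rough, purely illustrative sketch:

Code:
/* Encode a 0..1 depth value into the four 8-bit channels of the color
 * buffer; a later pass samples the texture and reverses the packing with
 * dot(enc, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0)). */
static const char *pack_depth_fs =
    "precision highp float;\n"
    "varying float v_depth;   /* 0..1 depth from the vertex shader */\n"
    "void main() {\n"
    "    vec4 enc = fract(v_depth * vec4(1.0, 255.0, 65025.0, 16581375.0));\n"
    "    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);\n"
    "    gl_FragColor = enc;\n"
    "}\n";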
 