GP2X gpu940... How to Start, and Compiling for PC


Alright, I got it all to compile.
When I try to load the gpu940 program into the 940, it says this:

[root@gp2x gpu]$Load file 'gpu940.gpe' at offset 0
writing 44772 bytes...
Ok

Then it quits... ps doesn't show any processes.

Do I have to use special compile or link options for the gpu940 program? I'm trying to get this to work in the standard SDK, and I think I've almost got it working now... but it doesn't seem like the program is being loaded correctly. Any suggestions?

I'm looking at the Makefile.am now... it's a little fuzzy to me. What about this objcopy and objdump? I think I'm not understanding a big part lol :p
 
When I try to load the gpu940 program into the 940, it says this:
[root@gp2x gpu]$Load file 'gpu940.gpe' at offset 0
writing 44772 bytes...
Ok

Then it quits... ps doesn't show any processes.

That's expected: gpu940 is in fact a rootkit. I own your gp2x now.
:)

Now, seriously: gpu940 runs on the 940 (thus the name).
Linux doesn't know about it, so ps cannot show it!
The only thing that reveals its presence is the change in video mode (your gp2x's screen turned green).

Everything's fine, just run your programs now!
 
Well, of course I did try to run the gears demo on my own compiled gpu940, but that didn't work. I don't see the screen flashing either :(
But if I use the load940 and the gpu940 from egoboo, together with my own compiled gears demo, it works fine :)

So actually I can start making my OpenGL game now (woohoo!)

But I'm still wondering what I did wrong...

What about this '_start' symbol? It was commented out in the crt0.S file, but then the linker (I use the standard SDK with Dev-C++) complained about not finding a _start symbol (that is, after I set the option not to link the standard startup libraries; otherwise it wants a main() function).
So I uncommented those lines in crt0.S to get that _start symbol into gpu940... the compiler/linker doesn't complain anymore, but it still doesn't seem to work.

Can you explain how to compile gpu940 correctly? For example, why were the _start functions commented out in crt0.S?

Thanks for the info!
 
STTrife posted on Feb 1 2007 at 11:22 AM said:
What about this '_start' symbol? It was commented out in the crt0.S file, but then the linker (I use the standard SDK with Dev-C++) complained about not finding a _start symbol (that is, after I set the option not to link the standard startup libraries; otherwise it wants a main() function).
So I uncommented those lines in crt0.S to get that _start symbol into gpu940... the compiler/linker doesn't complain anymore, but it still doesn't seem to work.

Can you explain how to compile gpu940 correctly? For example, why were the _start functions commented out in crt0.S?

Do you use the given Makefile or not?

The Makefile in bin does this:

1) compiles all gpu940 source files into object files;
2) links everything into a standard ELF file using the linker script "script.ld";
3) extracts the raw binary from that ELF file into the file "gpu940", using objcopy (typically objcopy -O binary).

You need a linker that's compliant with the traditional AT&T Unix linker.

Complaints about the _start symbol probably mean your linker doesn't understand that, or you did not use this Makefile, or the Makefile is buggy. In the latter case, please send me the output.
 
No, indeed I haven't used the Makefile, because I'm running Windows and don't want to go through the trouble of setting up an environment that would execute the Makefile correctly... (I'm not that skilled at setting up Unix-type development environments; I always get frustrated when trying :p )

Furthermore, I don't know if the compiler that comes with the SDK is Unix-compatible, and I'm not familiar with the objcopy program.

But fortunately the precompiled gpu940 from egoboo works fine!

Still, is it also possible to test this OpenGL implementation on Windows? Of course I can just use standard OpenGL to test my game, but I'll probably come to a point where it still works in normal OpenGL but fails on the gpu940 version.

So if it's possible, do I also need to run a separate program on Windows? And if so, could you post a compiled version of it so I don't need to compile it myself? Thanks!
 
Hmm, about the speed:
I wrote a small demo that can load 3D Studio Max ASCII exports. I tried a teapot :) It works, but the FPS is about 9...
The teapot has about 1000 triangles. Is this speed about right? Or am I doing something really wrong/slow in my own code?
And what is the best way to speed things up? Which functions are slow, and which are not? What is the biggest bottleneck?
 
STTrife posted on Feb 1 2007 at 11:33 PM said:
Still, is it also possible to test this OpenGL implementation on Windows? Of course I can just use standard OpenGL to test my game, but I'll probably come to a point where it still works in normal OpenGL but fails on the gpu940 version.

So if it's possible, do I also need to run a separate program on Windows? And if so, could you post a compiled version of it so I don't need to compile it myself? Thanks!

I don't know if it's possible to test on Windows. SDL and mmap should be the only requirements (that, and renaming the mmapped file, which is "/tmp/foobar").

I do not own or use Windows, so I can't provide a Windows binary.

Sorry.
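
For context, here is a minimal sketch of the kind of mmap setup this implies. The file name comes from the post above; the buffer size and the actual shared-structure layout live in gpu940's sources and are only assumed here.

Code:
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHARED_FILE "/tmp/foobar" /* name taken from the post above */
#define SHARED_SIZE (1 << 20)     /* hypothetical size; the real one is in gpu940's headers */

/* Map the shared buffer through which the GL client and gpu940 talk. */
static void *map_shared_buffer(void)
{
	int fd = open(SHARED_FILE, O_RDWR | O_CREAT, 0644);
	if (fd < 0) return NULL;
	if (ftruncate(fd, SHARED_SIZE) < 0) { close(fd); return NULL; }
	void *buf = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
	                 MAP_SHARED, fd, 0);
	close(fd); /* the mapping stays valid after the fd is closed */
	return buf == MAP_FAILED ? NULL : buf;
}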

About the speed issue: compare with the gpugears sample program, which is composed of around 1000 triangles.
I had to change those triangles to quads in order to jump from around 10 fps to 20 fps (crude numbers).

You can put the GPU in debug mode to see what it's doing, but I don't think you are doing anything wrong.
(See the references to "gpuCmdDbg" in this egoboo2x file:
http://cvs.gna.org/cvsweb/egoboo2x/char.c?...;cvsroot=gpu940
)

To achieve better performance:

- reduce polygon count by using quads or polygons instead of triangles whenever possible;
- reduce polygon count by doing proper occlusion tests;
- reduce polygon count by using backface culling;
- reduce polygon count further by using simpler meshes;
- reduce polygon count by any other means;
- do not use the z-buffer when it is not necessary;
- do not use the z-buffer when the simpler painter's algorithm works (see the sketch after this post);
- even in the few cases where the z-buffer is necessary, consider not using it;
- if you still want a per-pixel depth test, ask yourself whether writing z is useful (if not, mask it);
- of course: avoid blending and fancy rendering techniques.

In a word: optimize.

gp2x teapots are not optimized in hardware ;-p
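
To illustrate the painter's-algorithm point above: a minimal sketch that sorts triangles back to front and draws them with the z-buffer disabled entirely. The triangle record and its precomputed view-space depth are hypothetical app-side data, not gpu940 API.

Code:
#include <stdlib.h>
#include <GL/gl.h>

/* Hypothetical app-side triangle record with a precomputed average depth. */
struct tri { float depth; float v[3][3]; };

static int by_depth_desc(const void *a, const void *b)
{
	float da = ((const struct tri *)a)->depth;
	float db = ((const struct tri *)b)->depth;
	return (da < db) - (da > db); /* farthest first */
}

/* Painter's algorithm: draw back to front, no depth test at all. */
static void draw_sorted(struct tri *tris, size_t n)
{
	qsort(tris, n, sizeof *tris, by_depth_desc);
	glDisable(GL_DEPTH_TEST);
	glBegin(GL_TRIANGLES);
	for (size_t i = 0; i < n; i++)
		for (int j = 0; j < 3; j++)
			glVertex3fv(tris[i].v[j]);
	glEnd();
}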
 
Can they make tea though?

OK, I think I'll give up on trying to use gpu940 on Windows, and actually code for it once I get my GP2X back...

Instead I'll just use vanilla OGL... In that case, what should I limit myself to?
PS: If someone can get gpu940 working on a Windows machine and can describe the setup to an idiot (i.e. me), then I'd like to hear about it :)
 
Thanks for the ideas! Are any of these options provided by OpenGL itself (like backface culling)?
 
PokeParadox posted on Feb 2 2007 at 05:47 PM said:
Instead I'll just use vanilla OGL... In that case, what should I limit myself to?

If you only use the OpenGL-ES subset with fixed-point arithmetic, you will be able to "port" it to gpu940 with very minor changes.
You can use the glBegin/glEnd functions, though. Although they are not in OpenGL-ES, they are implemented in gpu940's GL and work all right.

Apart from that, the preceding advice should be taken seriously if you want usable speed on the gp2x. You have to think about speed like in the old days of 3D rendering. For example, egoboo runs at acceptable speed because I optimized the use of the z-buffer, removed useless glClear() calls, etc.
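
As a concrete example of the fixed-point style: OpenGL ES 1.x uses a 16.16 GLfixed format, where 1.0 maps to 65536. A minimal sketch of the conversion follows; whether gpu940's own GL header exposes exactly these ES names is an assumption.

Code:
#include <GLES/gl.h> /* assumed header name; gpu940 ships its own GL header */

/* 16.16 fixed point: 1.0f maps to 65536. */
#define F2X(f) ((GLfixed)((f) * 65536.0f))

static const GLfixed tri[] = { /* one triangle, fixed-point coordinates */
	F2X(-1.0f), F2X(-1.0f), 0,
	F2X( 1.0f), F2X(-1.0f), 0,
	0,          F2X( 1.0f), 0,
};

static void draw_fixed_triangle(void)
{
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_FIXED, 0, tri); /* GL_FIXED is the ES vertex type */
	glDrawArrays(GL_TRIANGLES, 0, 3);
}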

STTrife posted on Feb 2 2007 at 09:44 PM said:
Thanks for the ideas! Are any of these options provided by OpenGL itself (like backface culling)?

Of course they are.
Backface culling is controlled by various parameters (basically glEnable(GL_CULL_FACE) and glFrontFace(GL_CW / GL_CCW)),
while z-buffer writes can be disabled (while keeping depth tests) with glDepthMask(GL_FALSE), etc.

It is generally not a bad idea to try to optimize GL apps even when they are targeted at modern graphics hardware (so that you can draw more teapots than average).
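
Putting those calls together, a minimal sketch of the state setup; these are standard GL calls, so they should apply to gpu940's GL insofar as it implements them.

Code:
#include <GL/gl.h>

static void setup_cheap_state(void)
{
	/* Cull back faces so they are never rasterized at all. */
	glEnable(GL_CULL_FACE);
	glCullFace(GL_BACK);
	glFrontFace(GL_CCW); /* counter-clockwise polygons face the camera */

	/* Keep the per-pixel depth test but skip the depth-buffer writes. */
	glEnable(GL_DEPTH_TEST);
	glDepthMask(GL_FALSE);
}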
 
OK, great (sorry, I'm pretty new to OpenGL).
Can you also comment on the z-buffer? Why is it slow, and how can you render without it?

Also, would it be possible to do antialiasing by rendering at 640x480 and resizing to 320x240? Or is that too much to ask of the GP2X? And is that supported by gpu940, and would it also support the scaling down?

(P.S. I've got 100 more questions, but I'll try to find out as much as I can myself :) )
 
At 640x480, you have quadrupled the number of pixels the renderer has to draw (307,200 instead of 76,800). This takes a lot longer than 320x240.

Resizing 640x480 down to 320x240 using a 2x2 average to get anti-aliasing will not only use a sizable chunk of memory, but it is also going to slow your application down so much that you won't be counting frames per second. You will be counting seconds per frame.

Render at 320x240 and forget anti-aliasing. Leave that to the dedicated 3D hardware that the GP2X does not have.
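
To make the cost concrete, a minimal sketch of the 2x2 averaging step, using an 8-bit single channel for simplicity (the GP2X framebuffer is normally 16-bit RGB565, which needs per-channel unpacking and is slower still):

Code:
/* 2x2 box filter: every output pixel costs four reads plus arithmetic,
   on top of having rendered four times as many source pixels. */
static void downscale_2x2(const unsigned char *src, unsigned char *dst,
                          int src_w, int src_h)
{
	int dst_w = src_w / 2;
	for (int y = 0; y < src_h / 2; y++) {
		for (int x = 0; x < dst_w; x++) {
			int sum = src[(2*y)   * src_w + 2*x]
			        + src[(2*y)   * src_w + 2*x + 1]
			        + src[(2*y+1) * src_w + 2*x]
			        + src[(2*y+1) * src_w + 2*x + 1];
			dst[y * dst_w + x] = (unsigned char)(sum / 4);
		}
	}
}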
 
Well, the antialiased scaling is possible in hardware (according to the MMSP2 docs), but the 4x larger rendering size would hurt you.
 
I see. Are there other smart/fast ways to do some kind of antialiasing (or something similar)? I don't mean antialiasing the whole scene by scaling up and down, but just... blurring the pixels that are not fully covered by one triangle (in other words, on the 'edge' of triangles)?

Also, I noticed the small discussion on the mailing list about whether to A. try to conform to the OpenGL specs, or B. deviate somewhat to create a fast GP2X-dedicated 3D lib... I'd say go for B. Porting OpenGL games might seem nice, but I think the number of games that would run fast enough on the GP2X (and be worth the effort) is REALLY REALLY small. I wouldn't care if we only had ONE render/lighting option, as long as it looked decent and could render a good number of triangles at a good FPS. I wouldn't care about proper 'OpenGL-conformant' lighting if it meant you could only render 50 triangles fast enough...


What would be great for now, for homebrewers like me who only know the basics of 3D and OpenGL, is an overview of what you ported of OpenGL and how fast/reliably it works. You have a good doc about how gpu940 is designed etc., but that's not really what I need to know (and besides, some parts are way too technical for me ;) )
So what I would really like to know:

What functions from OpenGL are implemented and working?
What functions are slow and should be avoided if possible (and what alternatives should I use, if there are any)?
What functions are good for optimizing things (like backface culling etc.)?

That would REALLY help me a lot!


P.S. Would using triangle strips/fans speed things up much?

P.S.2. Isn't glRotatef supposed to take the angle in degrees? It seems like I have to use radians in your version; why is that?
 
STTrife posted on Feb 3 2007 at 12:02 PM said:
I see. Are there other smart/fast ways to do some kind of antialiasing (or something similar)? I don't mean antialiasing the whole scene by scaling up and down, but just... blurring the pixels that are not fully covered by one triangle (in other words, on the 'edge' of triangles)?

Yes, it is very possible to add edge antialiasing. The current gpu940 rendering is very poor in this area, both in quality and performance, and will probably be reworked soon. Softening the picture inside the polygons is less necessary: you can always use bigger textures.

What functions from OpenGL are implemented and working?
What functions are slow and should be avoided if possible (and what alternatives should I use, if there are any)?
What functions are good for optimizing things (like backface culling etc.)?

That would REALLY help me a lot!

There is a section at the end of the doc that's supposed to answer this (not in CVS yet).
I'm going to add things to it. Hopefully you will find the answers to your questions there. If not, keep asking so that the doc can improve.

P.S. Would using triangle strips/fans speed things up much?

Compared to plain triangles, yes, because the GL lib will hint to the GPU that some vertexes are the same as previously used ones and need not be projected again.
But if it's to render bigger polygons, don't cut them into triangles; use GL_QUADS or GL_POLYGON instead (GL_POLYGON is not implemented yet because I never needed it, but it would be easy).
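
For illustration, the same quad as two shared-vertex triangles in a strip versus a single GL_QUADS primitive, in the glBegin/glEnd style this GL supports per the earlier post:

Code:
#include <GL/gl.h>

/* A quad as a triangle strip: 4 vertexes instead of 6, and the shared
   ones need not be transformed twice. */
static void quad_as_strip(void)
{
	glBegin(GL_TRIANGLE_STRIP);
	glVertex3f(0.f, 0.f, 0.f);
	glVertex3f(1.f, 0.f, 0.f);
	glVertex3f(0.f, 1.f, 0.f);
	glVertex3f(1.f, 1.f, 0.f);
	glEnd();
}

/* The same quad as one primitive: a single polygon for the rasterizer. */
static void quad_as_quad(void)
{
	glBegin(GL_QUADS);
	glVertex3f(0.f, 0.f, 0.f);
	glVertex3f(1.f, 0.f, 0.f);
	glVertex3f(1.f, 1.f, 0.f);
	glVertex3f(0.f, 1.f, 0.f);
	glEnd();
}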

P.S.2. Isn't glRotatef supposed to take the angle in degrees? It seems like I have to use radians in your version; why is that?

Because, you know, bugs happen :)
It will be fixed soon. Thank you.
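
Until a fixed build is available, a one-line caller-side workaround is to pre-convert the angle; this is only needed with gpu940 builds that still have the radians bug.

Code:
#include <math.h>
#include <GL/gl.h>

/* Feed glRotatef radians instead of the degrees the spec calls for,
   to compensate for the bug discussed above. */
#define ROTATE_DEG(a, x, y, z) \
	glRotatef((a) * (float)M_PI / 180.0f, (x), (y), (z))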
 
The normal behaviour in OpenGL is for angles to be handled in radians... I don't know if this is different in OpenGL ES... It would probably be more efficient for the library to handle angles in degrees natively, since then no conversion from radians would be necessary.
 
PokeParadox posted on Feb 4 2007 at 11:21 AM said:
The normal behaviour in OpenGL is for angles to be handled in radians... I don't know if this is different in OpenGL ES... It would probably be more efficient for the library to handle angles in degrees natively, since then no conversion from radians would be necessary.

??
No, he is right, really. From the GL 1.4 specs:

Code:
	void Rotate{fd}( T θ, T x, T y, T z );
θ gives an angle of rotation in degrees; the coordinates of a vector v are given by
...
 
There is a section at the end of the doc that's supposed to answer this (not in CVS yet).
I'm going to add things to it. Hopefully you will find the answers to your questions there. If not, keep asking so that the doc can improve.

OK, thanks. In my version there was nothing about that yet!

Because, you know, bugs happen :)
It will be fixed soon. Thank you.

OK, I didn't mean to offend you ;) I just didn't think it would be a bug, because egoboo was already ported.
 
STTrife posted on Feb 4 2007 at 09:04 PM said:
OK, I didn't mean to offend you ;) I just didn't think it would be a bug, because egoboo was already ported.

It's fixed in CVS.

Egoboo does not use glRotate (I never use it myself).
 