Help on port of Fade2BlackGL


I just split the int variables, but nothing changed...


I just compiled (win32) using the code by Mjohansson (the code commented out in the function), and with it the sprites are partially visible (but distorted)...
 
Since you're using C++, I changed to vectors so the code can keep track of the element count. Use this code, put #include <vector> at the top, and it works.



Code:
	std::vector<GLfloat> vtx;
	std::vector<GLfloat> nrm;

	glEnableClientState(GL_VERTEX_ARRAY);
	if (_normals) {
		glEnableClientState(GL_NORMAL_ARRAY);
	}

	while (verticesCount--)
	{
		vtx.push_back( vertices->x ); vtx.push_back( vertices->y ); vtx.push_back( vertices->z );
		if (_normals) {
			Vertex4f n;
			n.x = vertices->nx; n.y = vertices->ny; n.z = vertices->nz;
			n.normalize();
			nrm.push_back( n.x ); nrm.push_back( n.y ); nrm.push_back( n.z );
		}
		++vertices;
	}

	glVertexPointer(3, GL_FLOAT, 0, &vtx.at(0));
	if (_normals) {
		glNormalPointer(GL_FLOAT, 0, &nrm.at(0));
	}

	glDrawArrays(GL_TRIANGLE_FAN, 0, vtx.size() / 3);
	glDisableClientState(GL_VERTEX_ARRAY);

	if (_normals) {
		glDisableClientState(GL_NORMAL_ARRAY);
	}
 
In void Render::drawParticle(const Vertex *pos, int color) {

Where it calls glDrawArrays(GL_POINTS, 0, 2); shouldn't the 2 be a 1, since there is only one point?
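For reference, the call I mean would just be this (assuming only one particle is ever drawn per call):

Code:
	glDrawArrays(GL_POINTS, 0, 1);  // one point, so the count should be 1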


I took a look at everything else and it appears ok.


Edit: One other odd thing I noticed:



Code:
struct Vertex {
    int x, y, z;
    int nx, ny, nz;
};



Yet they treat the ints as floats in the polygon render. So I found that switching the vertex coordinates to short works just as well.

i.e.

Code:
std::vector<int16_t> vtx;
glVertexPointer(3, GL_SHORT, 0, &vtx.at(0));
There is some risk with this if there is a case where an int value exceeds the short range, but that isn't likely. The benefit would be reducing the amount of data that needs to be transferred to the GPU (which takes less time).
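Just to illustrate, a minimal sketch of the same fill-and-draw using int16_t instead of GLfloat (assuming the coordinates really stay within short range):

Code:
	std::vector<int16_t> vtx;
	vtx.reserve(verticesCount * 3);
	for (int i = 0; i < verticesCount; ++i) {
		// copy only the position; the values are assumed to fit in a short
		vtx.push_back((int16_t)vertices[i].x);
		vtx.push_back((int16_t)vertices[i].y);
		vtx.push_back((int16_t)vertices[i].z);
	}
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_SHORT, 0, &vtx.at(0));
	glDrawArrays(GL_TRIANGLE_FAN, 0, (GLsizei)(vtx.size() / 3));
	glDisableClientState(GL_VERTEX_ARRAY);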





Edit 2: I have a further optimization that seems to work fine. Since the normals aren't used, the vertex information can be read directly from the vertices array. There is a problem, though: GL_INT isn't supported on GLES. GL_SHORT is, so switching the Vertex struct to shorts and adjusting the call to glVertexPointer works fine. This saves all that time going through each vertex to copy it into another array, and it also turns the data into shorts, which saves memory bandwidth.



Also, it finally dawned on me why the original code didn't work: verticesCount was 0 by the end, since it was decremented in the while statement. So when it came to glDrawArrays, it was drawing 0 elements.
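For anyone keeping the copy loop, a minimal sketch of that fix would be to cache the count before the loop consumes it (the count variable here is only illustrative):

Code:
	std::vector<GLfloat> vtx;
	// Save the element count before the while loop decrements verticesCount to 0,
	// so glDrawArrays still receives the real number of vertices.
	const GLsizei count = verticesCount;
	while (verticesCount--) {
		vtx.push_back(vertices->x); vtx.push_back(vertices->y); vtx.push_back(vertices->z);
		++vertices;
	}
	glVertexPointer(3, GL_FLOAT, 0, &vtx.at(0));
	glDrawArrays(GL_TRIANGLE_FAN, 0, count);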





Code:
struct Vertex {
    int16_t x, y, z;
    int16_t nx, ny, nz;
};





Code:
	glEnableClientState(GL_VERTEX_ARRAY);
	//if (_normals) {
	//	glEnableClientState(GL_NORMAL_ARRAY);
	//}

/*
	while (verticesCount--)
	{
		vtx.push_back( vertices->x ); vtx.push_back( vertices->y ); vtx.push_back( vertices->z );
		if (_normals) {
			Vertex4f n;
			n.x = vertices->nx; n.y = vertices->ny; n.z = vertices->nz;
			n.normalize();
			nrm.push_back( n.x ); nrm.push_back( n.y ); nrm.push_back( n.z );
		}
		++vertices;
	}
*/

	glVertexPointer(3, GL_SHORT, sizeof(int16_t)*6, vertices);
	//if (_normals) {
	//	glNormalPointer(GL_SHORT, 0, &nrm.at(0));
	//}

	glDrawArrays(GL_TRIANGLE_FAN, 0, verticesCount);
	glDisableClientState(GL_VERTEX_ARRAY);

	//if (_normals) {
	//	glDisableClientState(GL_NORMAL_ARRAY);
	//}



This change also allows you to change all of the vertex arrays to shorts in drawPolygonTexture and drawParticle, like:





Code:
	int16_t vtx7[] = {
		vertices[0].x, vertices[0].y, vertices[0].z,
		vertices[1].x, vertices[1].y, vertices[1].z,
		vertices[2].x, vertices[2].y, vertices[2].z,
		vertices[3].x, vertices[3].y, vertices[3].z
	};

...

	glVertexPointer(3, GL_SHORT, 0, vtx7);
 
:eek: This is the fastest support one can ask for... :)

Now I'll go apply all the changes you suggested... thanks.
 
I have a problem running it on the Pandora...

I have changed:

added #include <vector> and std::vector<int16_t> vtx;

excluded the NORMAL parts as suggested;

glDrawArrays(GL_POINTS,0, 2); --> glDrawArrays(GL_POINTS,0, 1);

the Vertex struct to int16_t;

the vtx arrays to int16_t;

and all glVertexPointer calls to GL_SHORT, e.g. glVertexPointer(3, GL_SHORT, 0, vtx7);

On Windows it runs well; on the Pandora it shows the 3 slides at start and then remains stuck there, and on the terminal it says "stub.state 1".

I use Hotfix 5... (don't know if it's important)
 
Yeah, the Pandora (PVR) isn't kind when you're trying to develop OpenGL ES. You could try, like you did on Windows, removing OpenGL parts until the lockup doesn't occur.
 
Today I found some time to spend on this game...

I enabled the function with normals and I tried to build without optimization... and magically there is never a crash... OK, the game runs slowly, but no crash.


Before, I used to compile it with:

-mcpu=cortex-a8
-march=armv7-a
-mfpu=neon
-ftree-vectorize
-funsafe-math-optimizations
-mthumb
-ffast-math

Now I want to experiment without normals.


Now, what are the best options for this game?
 
Maybe try deactivating them one by one to find the crashy option. The CPU and architecture identifiers shouldn't be a problem.
 
Maybe try deactivating them one by one to find the crashy option. The CPU and architecture identifiers shouldn't be a problem.

Yes, I tested all the options, and -mthumb is causing the crash... I don't remember why I added it to the project.


If I want to have a working game I have to use this function by Pickle:



Code:
	std::vector<GLfloat> vtx;
	std::vector<GLfloat> nrm;

	glEnableClientState(GL_VERTEX_ARRAY);
	if (_normals) {
		glEnableClientState(GL_NORMAL_ARRAY);
	}

	while (verticesCount--)
	{
		vtx.push_back( vertices->x ); vtx.push_back( vertices->y ); vtx.push_back( vertices->z );
		if (_normals) {
			Vertex4f n;
			n.x = vertices->nx; n.y = vertices->ny; n.z = vertices->nz;
			n.normalize();
			nrm.push_back( n.x ); nrm.push_back( n.y ); nrm.push_back( n.z );
		}
		++vertices;
	}

	glVertexPointer(3, GL_FLOAT, 0, &vtx.at(0));
	if (_normals) {
		glNormalPointer(GL_FLOAT, 0, &nrm.at(0));
	}

	glDrawArrays(GL_TRIANGLE_FAN, 0, vtx.size() / 3);
	glDisableClientState(GL_VERTEX_ARRAY);

	if (_normals) {
		glDisableClientState(GL_NORMAL_ARRAY);
	}

Using the version without the normals arrays freezes the game at start, with or without optimization.
 
I don't know if we even have Thumb support in the kernel; I read somewhere that it needs to be compiled into the system somehow.


But nice progress! Hope you get playable results.
 
_normals is false, so I expect you aren't using them anyway.

If you're having issues with performance, I also suggest you look at my last statements: you can pass the vertex arrays directly to OpenGL without needing to build a vector or array copy beforehand. It may or may not help, but it's less CPU work at least. I also tested it on the PC version, so it will work.
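A compact sketch of that direct-pointer approach, assuming the int16_t Vertex struct from earlier (so the stride sizeof(Vertex) skips the nx/ny/nz fields of each element):

Code:
	// No intermediate vector: point OpenGL straight at the Vertex array.
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_SHORT, sizeof(Vertex), vertices);
	glDrawArrays(GL_TRIANGLE_FAN, 0, verticesCount);
	glDisableClientState(GL_VERTEX_ARRAY);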
 
_normals is false, so I expect you aren't using them anyway.
Yes, I know... but when using the normals version (with the Vertex as int16_t, all the arrays changed to int16_t, and GL_SHORT used for vertex drawing) the game works...

If I use the chunk of code without the normals part, the game freezes at start when I push the button to skip the bitmap (or freezes if I wait after the 3 bitmaps at start)... don't know why.


So I experimented with the function using GLfloat, like this:



Code:
	//std::vector<int16_t> vtx;
	std::vector<GLfloat> vtx;

	glEnableClientState(GL_VERTEX_ARRAY);

	//glVertexPointer(3, GL_SHORT, sizeof(int16_t)*6, vertices);
	glVertexPointer(3, GL_FLOAT, sizeof(int16_t)*6, vertices);

	glDrawArrays(GL_TRIANGLE_FAN, 0, verticesCount);
	glDisableClientState(GL_VERTEX_ARRAY);

Now the game doesn't freeze, but the sprites are no longer visible.

Maybe we need to change something in other functions when we are using GL_SHORT in glVertexPointer.
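One thing I should probably check (just a guess on my part, not a confirmed diagnosis): the type passed to glVertexPointer has to match what the Vertex struct actually stores. With the int16_t Vertex, reading those same bytes as GL_FLOAT would give garbage coordinates, so the call would need to stay something like:

Code:
	// Assuming Vertex still holds int16_t fields: keep GL_SHORT and the
	// struct size as the stride; GL_FLOAT would misinterpret the bytes.
	glVertexPointer(3, GL_SHORT, sizeof(Vertex), vertices);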


------------------------------------------------------------------------


I have gained a little speed (I think...) by commenting out this:

#ifndef PANDORA
	SDL_Delay(kTickDuration);
#endif

kTickDuration is declared as: static const int kTickDuration = 30;
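Removing the delay entirely leaves the loop busy, though. A sketch of what I could try instead (not the project's actual loop, just an illustration) would be to sleep only for whatever is left of the tick:

Code:
	// Hypothetical frame limiter: measure how long the frame took and
	// only delay for the remaining part of kTickDuration.
	Uint32 frameStart = SDL_GetTicks();
	/* ...update and render one frame... */
	Uint32 elapsed = SDL_GetTicks() - frameStart;
	if (elapsed < (Uint32)kTickDuration) {
		SDL_Delay(kTickDuration - elapsed);
	}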
 
The new version (0.18) by Mr. Montoir has added a GLES conversion for Android... :) In the next few days I want to build the new sources...
 
So I want to update my build of this game, and I downloaded the latest sources from git (https://github.com/cyxx/f2bgl), as the author implemented some minor fixes but didn't release a new version.

I have also updated my project to use the new EGLport by Pickle (now using version R6).

So after some code changes I built it with the latest Code::Blocks, and when I run it I get a fullscreen (so 800x480) app, but running at 640x480...

(screenshot: QCbxbOm.png)


This is how the screen is initialized with SDL+GLES:


#ifdef CAANOO
static const int kDefaultW = 320;
#else
static const int kDefaultW = 640;
#endif
static const int kDefaultH = kDefaultW * 3 / 4;

static int gWindowW = kDefaultW;
static int gWindowH = kDefaultH;

#ifdef CAANOO
static int gScale = 1;
#else
static int gScale = 2;
#endif

.....

......

#elif defined (PANDORA)
SDL_SetVideoMode(gWindowW, gWindowH, 0, SDL_SWSURFACE | SDL_FULLSCREEN); //SDL_RESIZABLE);
EGL_Open(gWindowW,gWindowH);
SDL_ShowCursor(SDL_DISABLE);

And yes, I'm trying to adapt a version for the Caanoo too... as you can read from the sources.



This is the log of the Pandora version running:


EGLport: egl_mode= set to 0.
EGLport: use_vsync= set to 0.
EGLport: use_fsaa= set to 0.
EGLport: use_fps= set to 0.
EGLport: size_red= set to 5.
EGLport: size_green= set to 6.
EGLport: size_blue= set to 5.
EGLport: size_alpha= set to 0.
EGLport: size_depth= set to 16.
EGLport: size_buffer= set to 16.
EGLport: size_stencil= set to 0.
EGLport: Opening EGL display
EGLport: Using EGL_DEFAULT_DISPLAY
EGLport: Initializing
EGL Implementation Version: Major 1 Minor 4
EGL_VENDOR: Imagination Technologies
EGL_VERSION: 1.4 build 1.4.14.2616
EGL_EXTENSIONS: EGL_KHR_image EGL_KHR_image_base EGL_KHR_gl_texture_2D_image EGL_KHR_gl_texture_cubemap_image EGL_KHR_gl_renderbuffer_image EGL_KHR_vg_parent_image EGL_IMG_context_priority
EGLport: Found 2 available configs
EGLport: Using Config 0
EGLport: Binding API
EGLport: Creating Context
EGLport: Creating window surface
EGLport: Making Current
EGLport: Setting swap interval
EGLport: Complete
game.initLevelData 0
WARNING: Game::setupInventoryObjects() missing inventory object index 2!
stub.state 0
cutscene.load 47
cutscene.load 39
cutscene.load 13
cutscene.load 37
cutscene.load 53
cutscene.load 29
stub.state 1
stub.state 2
stub.state 1

Any idea what I have done wrong?
 
I am not sure I understand the question.

You start a 640x480 SDL screen with fullscreen flags, and you expected the 640x480 screen to be centered in the 800x480 space?

If that's it, I confirm I also tried that and it doesn't work. The only way to have a centered "fullscreen" is to start at 800x480 and center the useful screen in the code, which requires more work and could be quite annoying to get working perfectly.
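If you want to try that, a minimal sketch of the idea (hypothetical sizes, and assuming a GLES 1.x context like the one in your log) would be to open the full 800x480 surface and then restrict rendering with glViewport:

Code:
	// Open the native 800x480 surface, then center a 640x480 viewport in it.
	const int screenW = 800, screenH = 480;
	const int gameW = 640, gameH = 480;
	glViewport((screenW - gameW) / 2, (screenH - gameH) / 2, gameW, gameH);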
 
Uh! Thanks.

But in my previous version it's working... I could check whether I left something out from the code of the old version... but I don't think so.
 