XreaL - Most Advanced Open Source Engine Based on the Q3 Engine


darkblu said:
'ledow' said:
Or, seeing as it's Linux-based, wouldn't a 128Mb swapfile on an SD or something tell you if it's able to run in 128 real + 128 virtual RAM? It'd be dog-slow, but it'd only be a test to see if the optimisation is necessary at all.
A reckless hack that might work is to hex-edit the ELF header, fixing the respective section's memsz field (i.e. the one specifying ~170 MB) to something allocatable - say, 50-80 MB. If the app actually uses that bss area for its internal allocations, as I suspect, one could hope that the high addresses in that area are only accessed fairly late in the app's lifespan, so we would get a "free ride" until then. Either that, or the app will quickly crash with a SIGSEGV as some unsuspecting access runs out of bss and right into unallocated heap space ; ) But at least the app will have progressed past the ELF loader stage *grin*

Dude, edit the source code, not the friggin' object code. You have the symbol name - how hard can it be to go find where it's declared?
 
Depending on how that static data is actually used, it could pay off to just mmap(2) a file full of data. If it e.g. just needs to reference the data, it can be mmap'ed in a way that will only drag in real memory on a page-by-page basis.
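A minimal sketch of that idea, with a made-up file name standing in for the big static tables (this isn't XreaL code, just an illustration of a demand-paged read-only mapping):

CODE

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical data file standing in for the huge static arrays */
    int fd = open("facets.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* PROT_READ + MAP_PRIVATE: pages are only faulted in from the file
       as they are first touched, so resident memory grows page by page */
    void *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use data in place of the in-memory array ... */

    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}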
 
Tor said:
Depending on how that static data is actually used, it could pay off to just mmap(2) a file full of data. If it e.g. just needs to reference the data, it can be mmap'ed in a way that will only drag in real memory on a page-by-page basis.
It's in bss, not constant data.
 
'Exophase' said:
Dude, edit the source code, not the friggin object code. You have the symbol name, how hard can it be to go find where it's declared?
Funny that you mention it, but I realized I do have the complete sym dump in my mailbox - I only skimmed it for the private headers the other day, but IIRC it should be an -x dump after all *goes to fetch it... done*. And while I cannot fix the sources this weekend, as I have nothing to build them on here, I can at least look up the worst offenders and report them to Pickle.

CODE

01a3fb54 l O .bss 03d09000 facets <-- ~64MB
091d5f18 g O .bss 01314c14 tess <-- ~20MB
07fda9f4 g O .bss 01000000 s_rawsamples <-- 16MB
 
'darkblu' said:
CODE

01a3fb54 l O .bss 03d09000 facets <-- ~64MB
091d5f18 g O .bss 01314c14 tess <-- ~20MB
07fda9f4 g O .bss 01000000 s_rawsamples <-- 16MB

Let's look at the first one:
qcommon/cm_trisoup.c
CODE

static cFacet_t facets[SHADER_MAX_TRIANGLES];


qcommon/qfiles.h
CODE

#define SHADER_MAX_VERTEXES 100000
#define SHADER_MAX_INDEXES (SHADER_MAX_VERTEXES * 6)
#define SHADER_MAX_TRIANGLES (SHADER_MAX_INDEXES / 3)
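Just to sanity-check those numbers (the per-facet size below is inferred from the symbol dump, not read out of the headers):

CODE

/* SHADER_MAX_INDEXES   = 100000 * 6 = 600000
 * SHADER_MAX_TRIANGLES = 600000 / 3 = 200000
 * facets: 0x03d09000   = 64,000,000 bytes
 *   => sizeof(cFacet_t) = 64,000,000 / 200,000 = 320 bytes,
 *      so facets[] alone pins ~64 MB of bss. */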



Edit: looking at tess gives this:
CODE

typedef struct shaderCommands_s
{
    vec4_t xyz[SHADER_MAX_VERTEXES];
    vec4_t texCoords[SHADER_MAX_VERTEXES];
    vec4_t lightCoords[SHADER_MAX_VERTEXES];
    vec4_t tangents[SHADER_MAX_VERTEXES];
    vec4_t binormals[SHADER_MAX_VERTEXES];
    vec4_t normals[SHADER_MAX_VERTEXES];
    vec4_t colors[SHADER_MAX_VERTEXES];
    vec4_t paintColors[SHADER_MAX_VERTEXES]; // for advanced terrain blending
    vec4_t lightDirections[SHADER_MAX_VERTEXES];
    vec4_t boneIndexes[SHADER_MAX_VERTEXES];
    vec4_t boneWeights[SHADER_MAX_VERTEXES];

    glIndex_t indexes[SHADER_MAX_INDEXES];

    VBO_t *vbo;
    IBO_t *ibo;

    stageVars_t svars;

    shader_t *surfaceShader;
    shader_t *lightShader;

    qboolean skipTangentSpaces;
    qboolean shadowVolume;
    int lightmapNum;

    int numIndexes;
    int numVertexes;

    qboolean vboVertexSkinning;
    matrix_t boneMatrices[MAX_BONES];

    // info extracted from current shader or backend mode
    void (*stageIteratorFunc) ();
    void (*stageIteratorFunc2) ();

    int numSurfaceStages;
    shaderStage_t **surfaceStages;
} shaderCommands_t;

extern shaderCommands_t tess;




Seems SHADER_MAX_VERTEXES is a good place to start
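For a rough idea of where the ~20 MB in tess come from (assuming vec4_t is 4 floats = 16 bytes and glIndex_t is 4 bytes - I haven't checked those typedefs):

CODE

/* 11 vec4_t arrays: 11 * 100000 * 16 = 17,600,000 bytes
 * indexes:          600000 * 4       =  2,400,000 bytes
 * total                              ~ 20,000,000 bytes
 *   (close to the dump's 0x01314c14 = 20,007,956; the remainder is svars,
 *    boneMatrices and the scalar fields)
 * Dropping SHADER_MAX_VERTEXES to 10000 shrinks this to roughly 2 MB. */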


Edit #2: yeah baby! Set it to 10000 and set com_HunkMegs to the minimum of 96 and it's getting into the app! It's failing on trying to open OpenGL. This might actually work!
 
Pickle said:
[...] Edit #2: yeah baby! Set it to 10000 and set com_HunkMegs to the minimum of 96 and it's getting into the app! It's failing on trying to open OpenGL. This might actually work!
Pickle just got out of the pickle!

I'll get my coat.
 
Damn, really want to try this out but me graphics card won't bloody work.
Grrrrrrrr.
 
'Pickle' said:
Getting stuck on the GL ARB ext, which it appears the SGX doesn't support.
Which one is that?
 
darkblu said:
'Pickle' said:
Getting stuck on the GL ARB ext, which it appears the SGX doesn't support.
Which one is that?

Pretty much all of them; they are in the function GLimp_InitExtensions:
CODE

http://xreal.svn.sourceforge.net/viewvc/xreal/trunk/xreal/code/sys/sdl_glimp.c?revision=3262&view=markup
 
I guess the claim of 'clean code' was slightly overrated. Have you tried contacting the author?
 
sindbad said:
I guess the claim of 'clean code' was slightly overrated. Have you tried contacting the author?

I've been in contact with the main author and he has been helpful, although I haven't discussed this problem with him yet.
 
OK, it could be slightly more complicated than I assumed. A great deal of those extensions are a given in core ES2, so it is not the presence check that is bothering me, but the fact that the API calls for those extensions all get routed through client dispatchers. I guess your best bet right now is:

* mark all those things naturally found in core ES2 as present in their respective glConfig.* fields, as some resource-management logic possibly makes decisions based on them.

* mark all the rest as absent (possibly some of them are present as OES extensions in the SGX driver, but for now we don't care) - some of them should never be needed (e.g. the ARB vertex and fragment programs, which have been superseded by GLSL), and some have very low chances of being used anyway. Let us call all those the "absent" APIs.

* #define aliases on top of the qgl* dispatch entries for those API ext calls that map to core ES2 (rough sketch below). Let us call those the "present" APIs.

* try to build and see where the qgl API "signatures" (i.e. the way those qgl calls are invoked) for the present APIs differ from their actual ES2 signatures - fix them piece-wise.

Somewhere about then you should have a functional "core-level" qgl framework. Possibly : )
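A minimal sketch of the aliasing step - the qgl* names below are illustrative, I haven't checked XreaL's actual dispatch table, and the header path assumes a standard GLES2 SDK layout:

CODE

/* sketch only: route a few ARB-suffixed qgl entries to their core GLES2
   equivalents ("present" APIs), and neuter ones that GLSL makes obsolete
   ("absent" APIs). The real names in XreaL's qgl layer may differ. */
#include <GLES2/gl2.h>

/* present: the functionality is core in ES2, only the entry-point name differs */
#define qglGenBuffersARB      glGenBuffers
#define qglBindBufferARB      glBindBuffer
#define qglBufferDataARB      glBufferData
#define qglDeleteBuffersARB   glDeleteBuffers
#define qglActiveTextureARB   glActiveTexture

/* absent: superseded by GLSL shaders, so stub it out and make sure the
   glConfig.* flags keep the renderer from ever calling it */
#define qglProgramStringARB(target, format, len, string)  ((void)0)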
 
Great job so far guys. If you can get this running, it will definitely be Pandora's most (graphically) impressive launch title. I'm not a big fan of FPS games, but a lot of people are, and this could be quite popular.

Good luck. :)
 
From my knowledge of XreaL, it shifts everything into vertex buffer objects: all geometry, as well as textures etc., gets loaded onto the graphics card on a PC.

This is probably why it wants to put ~170 MB in static variables.

As I understand the renderer, Tr3b (the main author of XreaL) tries to allocate the most memory first; if that fails, it keeps falling back until it can allocate the graphics memory.

Maybe we therefore need to hack the code to tell it to try something smaller first (rough sketch below).

Also, if you get this running on the Pandora then you basically have a GPL'd Q3 engine with modern shaders (i.e. Q4-engine territory) that can be used for RTS and other types of games. (Anyone remember Quake Rally?)
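Purely as an illustration of that "try something smaller first" idea (this is not XreaL's actual allocation code - the function and the sizes are made up):

CODE

#include <stdlib.h>

/* walk down a ladder of candidate sizes until an allocation succeeds */
static void *AllocFallback(size_t *chosenSize)
{
    static const size_t tries[] = { 128u << 20, 64u << 20, 32u << 20, 16u << 20 };
    for (size_t i = 0; i < sizeof(tries) / sizeof(tries[0]); i++) {
        void *p = malloc(tries[i]);
        if (p) { *chosenSize = tries[i]; return p; }
    }
    *chosenSize = 0;
    return NULL;  /* even the smallest size failed */
}

/* starting the ladder at a Pandora-sized value instead of the desktop
   maximum is the kind of tweak suggested above */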
 
'Dingo_aus' said:
From my knowledge of XreaL, it shifts everything into vertex buffer objects: all geometry, as well as textures etc., gets loaded onto the graphics card on a PC.

This is probably why it wants to put ~170 MB in static variables.

As I understand the renderer, Tr3b (the main author of XreaL) tries to allocate the most memory first; if that fails, it keeps falling back until it can allocate the graphics memory.
The memory problem is solved; at least, it should run fine on a 256 MB board. I had to enable swap to run it in X, in addition to reducing that vertex count.

darkblu: I will give your idea a try when I can get to it.
 
If the Pandora can run this at a good framerate, with quality comparable to the screens in the first post, then I am sold - I will be buying a Pandora at that point. The emulation already has me wanting one, and the computer use of the Pandora has me wanting it too, but if it can run an engine like that then I will need a Pandora, not just want one. I will have to have one. I might even consider selling my PSP to help buy it, or at least pay for the SD cards I will need.
 
I guess this thing is a bit overhyped. It shouldn't be a problem graphics-wise, so why is everybody so horny?
It might be a problem if it's a pure port, but if an engine is made for the Pandora we can get stuff that looks far better than this.
 
JayFoxRox said:
I guess this thing is a bit overhyped. It shouldn't be a problem graphics-wise, so why is everybody so horny?
Pixel shaders make me horny. Especially the part that does HDR. HDR is the kind of thing that's easy to fake, but it almost always looks wrong. I've always wanted to try a real implementation.


Also, this engine looks like the current record for "look how badass the Pandora will be"
And people seem to like the idea of showing off the SGX to friends or potential customers or whatever those guys are called.
 
'lulzfish' said:
[...] people seem to like the idea of showing off the SGX to friends or potential customers or whatever those guys are called.
You can already show Quake 3 to your friends, and it runs really well :)
 