OpenGL 4.1 Has Full Support For ES


torpor said:
I mean, it's all well and good and everything, don't get me wrong .. but how realistic is it that we're going to get drivers/packages for Angstrom that will give us all this? Not really, and in the meantime, OGL ES 1.1 and 2.0 are there and can be used ..

we are not going to get drivers that support anything other than ES 1.1 and ES 2.0 for the pandora.

All this means is that desktop/laptop machines will be able to support ES 2.0, which only means that ES 2.0 applications made for the pandora may run on computers with OpenGL 4.x capable cards.
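if you want to check at runtime whether a desktop driver exposes this, here's a rough sketch (it assumes a current GL context; GL_ARB_ES2_compatibility is the extension GL 4.1 folded in):
Code:
/* rough sketch: needs a current GL context. note glGetString(GL_EXTENSIONS)
   only works in compatibility contexts, not core 3.2+ ones */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

int has_es2_compat(void)
{
    int major = 0, minor = 0;
    const char *version = (const char *)glGetString(GL_VERSION);
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    /* GL 4.1 or later includes ES2 compatibility in the core spec */
    if (version && sscanf(version, "%d.%d", &major, &minor) == 2
        && (major > 4 || (major == 4 && minor >= 1)))
        return 1;
    /* older drivers may expose it as an extension instead */
    return exts && strstr(exts, "GL_ARB_ES2_compatibility") != NULL;
}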
 
Nation.A.List said:
Do you think begging/pleading and/or offering a CraigIX bounty would help?

He seemed pretty set against releasing what he had.
 
we are not going to get drivers that support anything other than ES 1.1 and ES 2.0 for the pandora.

We may also get a stabilized OpenGL/ES wrapper library which gives some limited support for things, but that is neither here nor there.

It is good that the ES stuff is coming to the desktop, for sure .. I wish to emphasize that my prior blatant ignorance of the ES-on-desktop thing was droll, and I shall thus flog off.
 
some of the most impressive uses of GL i've seen have been from within functional languages.

allow me to bring to your attention impromptu.

here's an audio-visual live performance in that environment.


ps: Maciek did a lot of work on the reverse engineering, down to documenting most of the asm emitted by the drivers. the work from there on, though, to a complete desktop GL implementation, would be huge. so as it is, his work would be mostly useful as an academic tool, an ES-augmentation/booster tool, perhaps even for a homebrew OpenCL, but don't dream of desktop GL on the pandora - ain't coming in your lifetime. and outside of speedy desktop ports, it has no place on the pandora (yes, i was one of the proponents of that, but i've changed my mind since then - the effort would be better spent elsewhere).
 
darkblu said:
outside of speedy desktop ports, it has no place on the pandora (yes, i was one of the proponents of that, but i've changed my mind since then - the effort would be better spent elsewhere).

I've wondered about that. For example, isn't the Intel Poulsbo in the same hardware family as the SGX? It seems the SGX variants might be scaled versions of the same hardware design. So maybe the OpenGL driver imgtec supplies to Intel could have been used for the smaller parts in the OMAP, and it just wasn't licensed for use.
 
Pickle said:
I've wondered about that. For example, isn't the Intel Poulsbo in the same hardware family as the SGX?
it is.

It seems the SGX variants might be scaled versions of the same hardware design. So maybe the OpenGL driver imgtec supplies to Intel could have been used for the smaller parts in the OMAP, and it just wasn't licensed for use.
intel don't use imgtec drivers. as a matter of fact, intel hardly have any working GL drivers under linux:

http://www.phoronix.com/scan.php?page=news_item&px=ODIxNA
http://www.phoronix.com/scan.php?page=news_item&px=ODQxOA

bottom line being, there's no such thing as an OEM-supplied GL stack (non-ES) for SGX in the wild, and imgtec are surely not opening up theirs.
 
darkblu said:
Pickle said:
I've wondered about that. For example, isn't the Intel Poulsbo in the same hardware family as the SGX?
it is.

It seems the SGX variants might be scaled versions of the same hardware design. So maybe the OpenGL driver imgtec supplies to Intel could have been used for the smaller parts in the OMAP, and it just wasn't licensed for use.
intel don't use imgtec drivers. as a matter of fact, intel hardly have any working GL drivers under linux:

http://www.phoronix.com/scan.php?page=news_item&px=ODIxNA
http://www.phoronix.com/scan.php?page=news_item&px=ODQxOA

bottom line being, there's no such thing as an OEM-supplied GL stack (non-ES) for SGX in the wild, and imgtec are surely not opening up theirs.

The introduction of Intel's Poulsbo (GMA 500) chipset marked a point at which Intel's Linux graphics support was no longer stellar: because they had outsourced the graphics IP from Imagination Technologies, they could not provide an open-source driver stack like they do with their in-house IGPs. Not only was the Intel Poulsbo Linux driver closed-source, but the level of support was appalling and it was a bloody mess of a situation. The overall situation has only become worse since, and even MeeGo (their own Linux OS) will be shipping without Intel's EMGD driver.

I read that as: not only did they outsource the hardware to imgtec, they also used imgtec's driver, hence all the closed-source issues.
 
Pickle said:
I read that as: not only did they outsource the hardware to imgtec, they also used imgtec's driver, hence all the closed-source issues.
the closed-source issue (which is a result of licensing imgtec's IP) and the level of support are orthogonal matters (TI's sgx ES driver is just as closed source, but it works). intel used some of imgtec's code, but gma500's GL linux stack was largely 'in-house', i.e. tungsten graphics' making. i guess the latter were underpaid, or were otherwise generally uninterested in the job, as it's a largely unusable stack (in contrast to the windows d3d stack). but that (releasing half-cooked linux drivers for their gpu hw) is not something new for intel, despite their current position at the helm of the DRM community. anyway, i would not look in intel's general direction for any help on that matter.
 
lulzfish said:
This is great news, now I just have to buy a totally new computer and I can use fixed-point math in regular OpenGL!
>:/

I don't want to type out the whole "Lulzfish tries to OpenGL but fucks it all up Saga", but it goes sort of like this:

The Pandora has weak floating-point math from what I have heard
Luckily OpenGL ES supports fixed-point math
I don't even really like floating-point so I wrote a fixed-point class and made a platformer tech demo (not a game really) based on it.
I was totally about to make some shit happen in OpenGL, but

GLSL represents vertices as floating-point vectors.

I have no idea how to reconcile this. I don't want to write a bunch of fixed-point stuff and then suddenly have my shaders fuck it up.
I don't want to write a bunch of floating-point stuff that runs slowly on the Pandora.

I'm just tired of writing shit like fixed-point classes and matrix classes* that should already exist.

* Some video on YouTube said I should write my own matrix class for OpenGL ES, because GL matrices are fixed-point or something. I don't even know what's going on.

I have lost all faith in OpenGL. It reminds me of X11 now. It's so fucked up, and I want somebody to bury it under something like Qt so I don't have to fuck with it anymore.

tl;dr: This doesn't help me, it just reminds me of agony.

@SomeGuy99: We can't write drivers because the hardware is undocumented because all the popular video chip guys like money and IP.
I think someone made a compatibility library that bridged between normal OpenGL and ES, but even that wouldn't solve the problems I had in the spoiler.

GLSL represents vertices as numbers with fractional parts; in the 3D hardware that's not necessarily floating point, it could be fixed point. it does not matter:
the GLSL compiler converts your 1.5 value into, say, 0x00018000 (16.16) if the shader hardware works in fixed point.
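just to illustrate that conversion (the helper names here are my own):
Code:
/* 16.16 fixed point: 16 integer bits, 16 fractional bits */
typedef int GLfixed;

static GLfixed float_to_fixed(float f)   { return (GLfixed)(f * 65536.0f); }
static float   fixed_to_float(GLfixed x) { return x / 65536.0f; }

/* float_to_fixed(1.5f) == 98304 == 0x00018000 */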


as for fixed point for vertex data, you can simulate this on the PC by multiplying the model matrix by the reciprocal of "1" in fixed point (i.e. 1/65536 for 16.16):
Code:
#ifdef GLES

/* ES accepts 16.16 fixed-point vertex data directly */
glVertexPointer(3, GL_FIXED, 0, gl_es_vertex_data);
glDrawArrays(...);

#else

/* desktop GL: pass the raw fixed-point words as GL_INT and scale them
   back down by 1/65536 via the modelview matrix */
const float model_matrix_int_to_fixed[16] = { // make this global
   1/65536.0f, 0, 0, 0,
   0, 1/65536.0f, 0, 0,
   0, 0, 1/65536.0f, 0,
   0, 0, 0, 1,
};

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glMultMatrixf(model_matrix_int_to_fixed);
glVertexPointer(3, GL_INT, 0, gl_es_vertex_data);
glDrawArrays(...);
glPopMatrix();

#endif
and you can also write a glLoadMatrixx for OpenGL in the same way.
Code:
typedef int GLfixed;

void glLoadMatrixx(const GLfixed *m)
{
    GLfloat mf[16];
    /* every element of a GLfixed matrix is 16.16, so scale each one by 2^-16 */
    for (int i = 0; i < 16; i++)
        mf[i] = m[i] * 0x1.0p-16f;
    glLoadMatrixf(mf);
}
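for example, feeding it a fixed-point identity matrix (0x10000 being "1" in 16.16):
Code:
static const GLfixed identity_x[16] = {
    0x10000, 0, 0, 0,
    0, 0x10000, 0, 0,
    0, 0, 0x10000, 0,
    0, 0, 0, 0x10000,
};
glLoadMatrixx(identity_x);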

hope this helps.

edit: added code tags
 
But it would still be converting things to floating-point behind my back, possibly losing precision.

Maybe I should just keep my floating-point stack and hope that somewhere between the Pandora's CPU power and NEON, it will run fast enough. Then use small maps so I don't lose precision at the edges of the world.
 
sorry, i just spotted lulzfish's rant. let me address some points.

lulzfish said:
This is great news, now I just have to buy a totally new computer and I can use fixed-point math in regular OpenGL!
>:/
more like 'i'll just have to upgrade my GL drivers.'

The Pandora has weak floating-point math from what I have heard
Luckily OpenGL ES supports fixed-point math
I don't even really like floating-point so I wrote a fixed-point class and made a platformer tech demo (not a game really) based on it.
I was totally about to make some shit happen in OpenGL, but

GLSL represents vertices as floating-point vectors.
there's no such thing. the specs specify minimum precision and range properties for the different precision formats available in the language, but that's all (anything more than that is just recommendations). they could all be internally represented as 128-bit fx numbers and you'd neither know, nor should really care.
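to illustrate: in ES2's GLSL you ask for a precision class, not a representation. e.g. this trivial fragment shader (my own example) asks for mediump, which the spec only pins to at least 2^-10 relative precision and roughly +/-2^14 range - the hardware is free to give you more, in fp or in fx:
Code:
/* trivial ES2 fragment shader source, just for illustration */
static const char *frag_src =
    "precision mediump float;\n"
    "uniform mediump vec4 u_color;\n"
    "void main() {\n"
    "    gl_FragColor = u_color;\n"
    "}\n";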

I have no idea how to reconcile this. I don't want to write a bunch of fixed-point stuff and then suddenly have my shaders fuck it up.
they won't.

I don't want to write a bunch of floating-point stuff that runs slowly on the Pandora.
fp is not automatically slow on pandora, it depends what you do with it. if you just want to compose a bunch of matrices and pass them down to ES for the heavy lifting - i don't see why not use fp. but if you're really doing a heavy job on some vertex attributes, then sure - stick with fx. GL 4.1 gives you the opportunity to reuse your desktop code there.
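e.g. something along these lines (the uniform and function names are made up; mat4_mul is whatever matrix code you use - see the sketch below):
Code:
/* compose in plain float on the CPU, hand the result to ES2 once per draw -
   the per-vertex heavy lifting stays on the GPU */
GLfloat mvp[16];
mat4_mul(mvp, projection, modelview);
GLint loc = glGetUniformLocation(program, "u_mvp");
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp); /* transpose must be GL_FALSE in ES2 */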

I'm just tired of writing shit like fixed-point classes and matrix classes* that should already exist.
why not reuse an existing implementation? it's not like this stuff was invented yesterday.

* Some video on YouTube said I should write my own matrix class for OpenGL ES, because GL matrices are fixed-point or something. I don't even know what's going on.
some video on youtube was wrong (or was wrongly interpreted) - you need your own matrix math code because there is no such functionality in the ES2 API (the math, not the formats), but why should that be an issue? if you really want the API to provide such basic functionality for you, then stick with ES 1.x (it will also spare you the need to write shaders ; )
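for reference, the kind of thing that code boils down to - a minimal column-major 4x4 multiply, my own sketch:
Code:
/* out = a * b, column-major (GL convention); out must not alias a or b */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}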

I have lost all faith in OpenGL. It reminds of X11 now. It's so fucked up, and I want somebody to bury it under something like Qt so I don't have to fuck with it anymore.
i don't understand your gloomy position. ES is a beautiful API, ES2 in particular, very clean and very to the point.
 
Okay, thanks, I'll try to keep this all in mind once I get back to 3D stuff.
I probably shouldn't even be thinking of asking OpenGL to do something that gets near the limits of my GPU's number representation, but I'd like to think I can use all the numbers I have available without math breaking down quickly.
 