GP2X 3D File Format For Vincent: Suggestions?


Dingo_aus

Ok, looking for opinions on a file format for storing 3d data in.

Firstly, does a file format exist that is OGL ES-friendly and readily available?


If not, what qualities are best for the GP2X?

I assume the ideal for the GP2X would be fixed-point, but what would be the best size for speed and accuracy, 32-bit?

Are there any special considerations with regard to array sizes in terms of memory access speed and memory bus?

What exactly do we need to store? I'm assuming the bare minimum is best in terms of loading and handling speed. Just 3 vertices and 2 UV coords per poly? Should normals be stored or calculated on the fly?

Just trying to pick people's brains here, thanks.
 
OpenGL ES uses fixed point natively, so fixed point would be the ideal format for storing what would otherwise be floating-point values.
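
A minimal sketch of the usual Q16.16 handling (my own helper names, not part of any particular API):

Code:
  #include <stdint.h>

  // Q16.16 fixed point: 16 integer bits, 16 fractional bits (the GLfixed layout).
  typedef int32_t Fixed;

  // Convert at tool/export time, so the runtime never touches floats.
  inline Fixed FloatToFixed(float f)
  {
      return (Fixed)(f * 65536.0f);
  }

  // Multiplying two Q16.16 values needs a 64-bit intermediate to avoid overflow.
  inline Fixed FixedMul(Fixed a, Fixed b)
  {
      return (Fixed)(((int64_t)a * b) >> 16);
  }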

The final file format really depends on what you need as an application developer. It could be different for each application. Some applications might not use texturing, so UV coordinates would be wasted. Normals could be stored in the file, or they could be generated.

As a general guide, each polygon would have three vertices. Each vertex would have a position (x, y, z), UV coordinates (u, v) and a colour (r, g, b). Or perhaps you don't want a vertex colour because you will be putting the prelit colouring in the texture. You could have a normal per vertex, so add (nx, ny, nz) to each vertex. Or you could have a normal per polygon, so add (nx, ny, nz) to the polygon. Or maybe you aren't using real-time lighting, so forget the normals altogether.
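
As a sketch only, one possible per-vertex layout (pick the fields you actually need):

Code:
  #include <stdint.h>

  typedef int32_t Fixed;      // Q16.16, matching GLfixed

  struct Vertex
  {
      Fixed   x, y, z;        // position
      Fixed   u, v;           // texture coordinates
      uint8_t r, g, b, a;     // vertex colour (drop if the prelit colouring lives in the texture)
      Fixed   nx, ny, nz;     // per-vertex normal (drop if you are not lighting in real time)
  };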

Now, do you want to use an indexed list of vertices? If so, then each polygon consists of three indices into the array of vertices. Each subobject could contain a list of indices for the polygons contained in that subobject. Each model could contain a list of indices of subobjects contained within that model. Are you even going to have subobjects within a model?
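
An indexed layout along those lines could look like this (hypothetical names, reusing the Vertex sketch above, just to make the hierarchy concrete):

Code:
  #include <stdint.h>
  #include <vector>

  struct SubObject
  {
      std::vector<uint16_t> indices;    // three indices per triangle, into Model::vertices
  };

  struct Model
  {
      std::vector<Vertex>    vertices;  // shared vertex pool
      std::vector<SubObject> subObjects;
  };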

Animation. Keyframed? Bones? How many bones can a vertex be weighted to? Are you going to use Euler angles or quaternions for rotations?

For quick memory access, you should try to keep all related data close together to minimize cache misses. Indexed buffers might promote a lot of cache misses, depending on how the data is ordered within the buffers.
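
In OpenGL ES 1.x terms, keeping related data close together usually means one interleaved array with a stride rather than separate position/UV/colour arrays. A rough sketch, assuming the interleaved Vertex struct above:

Code:
  #include <GLES/gl.h>

  // Position and UV of a vertex sit next to each other in memory, so one
  // cache line tends to serve both attribute fetches for that vertex.
  void SetVertexPointers(const Vertex * verts)
  {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FIXED, sizeof(Vertex), &verts[0].x);

      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glTexCoordPointer(2, GL_FIXED, sizeof(Vertex), &verts[0].u);
  }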

I think there are just too many variables for one file format to fit all. Any developer worth their salt will use a file format that suits their purposes exactly. Other developers will use someone else's file format that may or may not be a close fit, and will not get optimal results.
 
Dingo_aus posted on Mar 22 2006 at 04:10 AM said:
Ok, looking for opinions on a file format for storing 3d data in.
Firstly, does a file format exist that is OGL ES-friendly and readily available?
If not, what qualities are best for the GP2X?
TRI-STRIPS!!!

What I meant to say: One of the most important performance factors is to reduce any unnecessary computation, and tri-strips help cut geometry (lighting) processing a lot. So try to avoid a structure that uses triangles as faces with three indices into the vertex coordinates, which is what I have seen a lot (and the car demo is really a bad example here, but that's what Deep Exploration generates by default).
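
In API terms the difference is roughly this (a sketch; the buffers and counts are placeholders):

Code:
  #include <GLES/gl.h>

  // Independent indexed triangles: three indices per face, so shared
  // vertices get transformed and lit more than once.
  void DrawAsTriangles(const GLushort * indices, int triangleCount)
  {
      glDrawElements(GL_TRIANGLES, triangleCount * 3, GL_UNSIGNED_SHORT, indices);
  }

  // Triangle strip: after the first triangle, each additional vertex adds
  // one more triangle, so shared vertices are processed only once per strip.
  void DrawAsStrip(int firstVertex, int stripVertexCount)
  {
      glDrawArrays(GL_TRIANGLE_STRIP, firstVertex, stripVertexCount);
  }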

Animation: OpenGL ES has the matrix palette extension for doing that (which I removed for now since nobody was using it, but it's fairly easy to add back in). In that case you need weight arrays and index arrays aligned with your tri-strips. A good way to go is probably to use a structure similar to what slygamer proposed as an intermediate format in your toolchain, and then convert the strips into the actual vertex buffers/vertex arrays.
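
For reference, the standard OES_matrix_palette entry points look roughly like this when the extension is available (a sketch; as noted, Vincent currently has it disabled):

Code:
  #include <GLES/gl.h>
  #include <GLES/glext.h>

  // Per-vertex bone indices and weights, laid out alongside the stripped vertex data.
  void SetSkinningPointers(const GLubyte * boneIndices, const GLfixed * weights,
                           int bonesPerVertex, int stride)
  {
      glEnable(GL_MATRIX_PALETTE_OES);

      glEnableClientState(GL_MATRIX_INDEX_ARRAY_OES);
      glMatrixIndexPointerOES(bonesPerVertex, GL_UNSIGNED_BYTE, stride, boneIndices);

      glEnableClientState(GL_WEIGHT_ARRAY_OES);
      glWeightPointerOES(bonesPerVertex, GL_FIXED, stride, weights);
  }

  // Copy the current modelview matrix into one slot of the palette.
  void LoadBone(int paletteIndex)
  {
      glMatrixMode(GL_MATRIX_PALETTE_OES);
      glCurrentPaletteMatrixOES(paletteIndex);
      glLoadPaletteFromModelViewMatrixOES();
  }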

Textures: Vincent can handle multiple textures (there is a constant in OGLES.h), and it has most of the texture combine options. If you want to use multiple textures (e.g. light maps or bump maps), your structure is probably going to need to support more than one set of texture coordinates as well.
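
With OpenGL ES 1.x multitexturing that just means one texture coordinate pointer per unit; a sketch (unit count and pointers are placeholders):

Code:
  #include <GLES/gl.h>

  // Base texture coordinates on unit 0, light map coordinates on unit 1.
  void SetTexCoordPointers(const GLfixed * baseUV, const GLfixed * lightmapUV, int stride)
  {
      glClientActiveTexture(GL_TEXTURE0);
      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glTexCoordPointer(2, GL_FIXED, stride, baseUV);

      glClientActiveTexture(GL_TEXTURE1);
      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glTexCoordPointer(2, GL_FIXED, stride, lightmapUV);
  }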

If you want to save space, you can use bytes or shorts for colors and UVs; in the case of UVs you will need to specify suitable texture transformation matrices to convert to Q16.16.
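
A sketch of that, assuming UVs exported as shorts pre-scaled by 1024 at tool time:

Code:
  #include <GLES/gl.h>

  // The texture matrix divides the export-time scale factor back out,
  // so the rasterizer still sees coordinates in the usual 0..1 range.
  void SetShortTexCoords(const GLshort * uvs, int stride)
  {
      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glTexCoordPointer(2, GL_SHORT, stride, uvs);

      glMatrixMode(GL_TEXTURE);
      glLoadIdentity();
      glScalex(64, 64, 65536);   // 1/1024 in Q16.16 is 65536/1024 = 64
      glMatrixMode(GL_MODELVIEW);
  }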

M3G gives you a good idea of what the ideal structure should look like. You end up with something very generic, such as
Code:
  class Appearance;
  class IndexBuffer;
  class VertexBuffer;

  DECLARE_CLASS(Mesh, Node)
  	DEFINE_META_DATA
  	DECLARE_READ_WRITE

  public:
  	Mesh();
  	~Mesh(void);

  protected:
  	Mesh(const mimp::core::Class & clazz);

  public:
  	void Init(VertexBuffer * vertices, size_t submeshCount, IndexBuffer ** submeshes, Appearance ** appearances);
  	void Init(VertexBuffer * vertices, IndexBuffer * submesh, Appearance * appearance);

  	mimp::core::UInt32 GetSubmeshCount() const;
  	VertexBuffer * GetVertexBuffer() const;

  	Appearance * GetAppearance(mimp::core::UInt32 index) const;
  	IndexBuffer * GetIndexBuffer(mimp::core::UInt32 index) const;
  	void SetAppearance(mimp::core::UInt32 index, Appearance * appearance);

  	bool Render(Graphics3D * context, mimp::core::Matrix & matrix);
  	void Traverse(Visitor & visitor);

  private:
  	void ReleaseData();

  private:
  	// persistent state
  	VertexBuffer *    m_vertexBuffer;
  	mimp::core::UInt32  	m_submeshCount;

  	union {
  		IndexBuffer **	m_submeshes;
  		IndexBuffer *	m_submesh;
  	};

  	union {
  		Appearance **	m_appearances;
  		Appearance *	m_appearance;
  	};

  	// active state
  };

  inline mimp::core::UInt32 Mesh::GetSubmeshCount() const {
  	return m_submeshCount;
  }

  inline VertexBuffer * Mesh::GetVertexBuffer() const {
  	return m_vertexBuffer;
  }


and

Code:
  DECLARE_CLASS(IndexBuffer, Object3D)
  	DEFINE_META_DATA
  	DECLARE_READ_WRITE

  public:
  	enum PrimitiveType
  	{
  		POINTS,
  		LINES,
  		LINE_STRIP,
  		LINE_LOOP,
  		TRIANGLES,
  		TRIANGLE_STRIP,
  		TRIANGLE_FAN
  	};

  public:
  	IndexBuffer();
  	~IndexBuffer(void);

  	void Init(PrimitiveType type, mimp::core::UInt16 strips, const mimp::core::UInt16 * indices, const mimp::core::UInt16 * stripLengths);
  	void Init(PrimitiveType type, mimp::core::UInt16 firstIndex, mimp::core::UInt16 strips, const mimp::core::UInt16 * stripLengths);

  public:
  	PrimitiveType GetType() const;
  	mimp::core::UInt32 NumIndicies() const;
  	void RetrieveIndices(size_t start, size_t numIndices, void * result);

  	mimp::core::UInt16 GetNumStrips() const;
  	mimp::core::UInt16 GetFirstIndex() const;
  	mimp::core::UInt16 * GetIndices() const;
  	mimp::core::UInt16 * GetStripLengths() const;

  private:
  	void DeleteArrays();

  private:
  	// persistent state
  	mimp::core::UInt16 *	m_indices;
  	mimp::core::UInt16 *	m_stripLengths;
  	mimp::core::UInt16  m_strips;
  	mimp::core::UInt16  m_firstIndex;
  	mimp::core::Byte  m_type;

  	// active state
  };

Some people will argue that you will still want to use the exact geometry for collisions etc., but I find it far more effective to also pre-calculate your collision geometry (spheres, polyhedra, whatever) based on the intermediate structure, and then have your game objects hold references to both the render geometry and the collision geometry.
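
For example, a bounding sphere can be baked from the intermediate vertex data at export time and stored next to the render mesh (a rough sketch; names are illustrative and the Vertex struct is the one sketched earlier):

Code:
  #include <math.h>
  #include <vector>

  struct Sphere { float x, y, z, radius; };

  // Crude bounding sphere, computed once in the toolchain: centre at the
  // average of the vertex positions, radius out to the farthest vertex.
  Sphere ComputeBoundingSphere(const std::vector<Vertex> & verts)
  {
      Sphere s = { 0, 0, 0, 0 };
      if (verts.empty())
          return s;

      for (size_t i = 0; i < verts.size(); ++i) {
          s.x += verts[i].x / 65536.0f;   // Q16.16 back to float, tool side only
          s.y += verts[i].y / 65536.0f;
          s.z += verts[i].z / 65536.0f;
      }
      s.x /= verts.size();
      s.y /= verts.size();
      s.z /= verts.size();

      for (size_t i = 0; i < verts.size(); ++i) {
          float dx = verts[i].x / 65536.0f - s.x;
          float dy = verts[i].y / 65536.0f - s.y;
          float dz = verts[i].z / 65536.0f - s.z;
          float d  = sqrtf(dx * dx + dy * dy + dz * dz);
          if (d > s.radius)
              s.radius = d;
      }
      return s;
  }

  // The game object then just keeps references to both representations.
  struct GameObject
  {
      const Mesh * renderGeometry;
      Sphere       collisionGeometry;
  };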

The file format would be more or less a binary version of that... similar to what M3G defines in its appendix or what DirectX does.

- HM
 
Is the car demo for Vincent bottlenecked by geometry or by pixel filling?

The reason I ask: if somebody is planning to use Vincent to develop a game, the frame rate of the demo is about the rate one would target for such a game. If the pixel fill stuff is the bottleneck, then that means the game should only use Vincent to draw stuff filling maybe 1/6 of the screen at most.

There's still plenty that could be done with that restriction -- space fighter games, maybe a racing game if the background can be drawn another way, etc. -- but a first-person shooter like Doom is probably not feasible.

Thanks!
 
Dzz posted on Mar 22 2006 at 07:22 AM said:
Is the car demo for Vincent bottlenecked by geometry or by pixel filling?
Per frame (55-58 ms), 14-16 ms is spent on pixel filling; the rest is geometry, triangle setup and buffer swap.

For the pixel filling part: That's why I included C-source equivalents of the inner loops, so that somebody else can take a look at them. There is one design choice which I have not had the chance to experiment with: Should the per-block pixel mask between the depth pass and the color pass be saved as a bitmask in a 64-bit result register instead of a byte array? In those loops, register pressure is one of the main problems.
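
To make that design question concrete, the two variants would look roughly like this in the C equivalents (a sketch of the idea, not the actual Vincent inner loop):

Code:
  #include <stdint.h>

  // Variant 1: the depth pass compares an incoming depth value (kept constant
  // here for brevity) against an 8x8 block of the depth buffer and records
  // coverage in a byte array, one byte per pixel, for the colour pass.
  void DepthPassByteMask(uint8_t mask[64], const int32_t * depthBuffer, int32_t z)
  {
      for (int i = 0; i < 64; ++i)
          mask[i] = (z < depthBuffer[i]) ? 1 : 0;
  }

  // Variant 2: the same coverage packed into one 64-bit word that can stay
  // in a register pair, at the cost of shift/mask work per pixel.
  uint64_t DepthPassBitMask(const int32_t * depthBuffer, int32_t z)
  {
      uint64_t mask = 0;
      for (int i = 0; i < 64; ++i)
          if (z < depthBuffer[i])
              mask |= (uint64_t)1 << i;
      return mask;
  }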

You could save on the setup cost if you dropped the requirement to be perspective correct.

For geometry: There is quite a bit of easy stuff to do, which I'm currently looking at.

Other than that: Changing the surface formats if you need alpha or stencil (e.g. having depth/stencil in a single buffer and RGBA in a single buffer).

- HM

PS: Michael Abrash writes in his Black Book that Quake had a target of 500 tris max per frame; that's probably a good target for the GP2X as well.
 