Image To Trimesh


Rockthesmurf

Advanced Member
Does anyone know of an existing library/function/algorithm that will take a sprite consisting of RGBA channels, and generate two meshes, one of the regions that need alpha blending, and one of the opaque regions? I can think of a few ways to implement something myself, but I can also think of multiple reasons why the implementations would be less than optimal and over complicated. It feels like the kind of thing that should already exist but my Google skills haven't found anything overly helpful yet.

Steve
 
Rockthesmurf said:
Does anyone know of an existing library/function/algorithm that will take a sprite consisting of RGBA channels, and generate two meshes, one of the regions that need alpha blending, and one of the opaque regions? I can think of a few ways to implement something myself, but I can also think of multiple reasons why the implementations would be less than optimal and over complicated. It feels like the kind of thing that should already exist but my Google skills haven't found anything overly helpful yet.

Steve
If I understand the problem correctly, then I don't know how this would even be possible. And what would be the use of doing this?

(I assume that a mesh is a collection of vertices/faces with material information, essentially a trimesh with color info)

Consider the following scenario: (# = opaque region aka A=255, % = translucent region aka A=something)
Code:
#%#%#%#%#%#%#%#%
%#%#%#%#%#%#%#%#
#%#%#%#%#%#%#%#%
%#%#%#%#%#%#%#%#
#%#%#%#%#%#%#%#%
%#%#%#%#%#%#%#%#
#%#%#%#%#%#%#%#%
%#%#%#%#%#%#%#%#
How would you go about creating a mesh for a system like this? Do you use a primitive for each pixel (→ a triangle), or two (→ a square)?

And in a system like this:
Code:
################
###############%
##############%%
#############%%%
############%%%%
###########%%%%%
##########%%%%%%
#########%%%%%%%
########%%%%%%%%
How do you treat the edges? You would have to rely on anti-aliasing to create a mesh that works as expected.

You probably have a different definition of "mesh" than I do; however, I can't think of what it could be. If you mean that a mesh is essentially a bitmap, then why would you need to separate transparent pixels from opaque ones? Also, that is quite easily done, so I don't know why you're asking in that case.
 
i don't know a ready solution, but one should be devisable, depending on your mesh requirements, like:

how many vertices per sprite pixel are you aiming for?

what are the mesh topology type requirements: triangle list, strip, fan? indexed or non-indexed?

apparently, a minimal-effort solution would be a 4 verts/pixel tile-set built through a triangle list.
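
for illustration, a minimal sketch of what that could look like (the C++ is mine and all names are made up; it assumes an 8-bit alpha array and emits an unindexed triangle list, two triangles per non-transparent pixel):
Code:
#include <cstdint>
#include <vector>

struct Vertex { float x, y, u, v; };

// One quad (two triangles, unindexed) per non-transparent pixel.
// 'alpha' is a w*h array of 8-bit alpha values.
std::vector<Vertex> buildPixelQuads(const uint8_t* alpha, int w, int h)
{
    std::vector<Vertex> tris;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (alpha[y * w + x] == 0)
                continue; // fully transparent: no geometry needed

            const float x0 = float(x),     y0 = float(y);
            const float x1 = float(x + 1), y1 = float(y + 1);
            const float u0 = x0 / w, v0 = y0 / h;
            const float u1 = x1 / w, v1 = y1 / h;

            // two triangles per pixel
            tris.push_back({x0, y0, u0, v0});
            tris.push_back({x1, y0, u1, v0});
            tris.push_back({x1, y1, u1, v1});
            tris.push_back({x0, y0, u0, v0});
            tris.push_back({x1, y1, u1, v1});
            tris.push_back({x0, y1, u0, v1});
        }
    return tris;
}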
 
I assume that the reason for doing all this is that trimeshes are more easily scalable, and you want to reduce the GPU load by giving it a "vector image" instead of a texture.

However:
darkblu said:
apparently, a minimal-effort solution would be a 4 verts/pixel tile-set built through a triangle list.
...which would defeat the purpose of making a trimesh of the thing in the first place.

As I demonstrated before, and as darkblu seems to agree, it is very difficult or even impossible to create an algorithm that can do anything beyond creating 2 primitives for each pixel, which essentially produces a very big mesh (in file size, vertex count and memory usage) that isn't really usable for anything. To do anything beyond that, you would have to use sprite interpolation algorithms of some sort, and those wouldn't guarantee good results every time. You'd be better off manually creating vectorized versions of your sprites.

Once you have solved that problem, however, the separation of transparent vertices from opaque ones should be trivial. You could actually just take some existing tool that gives you a trimesh from the sprite you want to vectorize (and produces a trimesh containing both transparent and opaque primitives), and then make a small program that just parses that file and separates the primitives into different files. It shouldn't really be difficult at all.
 
dflemstr, i agree the task can be quite non-trivial, depending on the sought parameters, but nevertheless various robust solutions may exist (the quad tile per pixel is not the only straightforward one), as people have been addressing similar problems for quite some time now (i.e. detail-driven triangulation of regular-shaped maps is a typical task in terrain mesh generation). the best approach here, as you noted already, is to ask the OP what that mesh is for ; )
 
Okay, a few more details. One of the biggest performance factors I currently have with my iPhone engine is that the iPhone's fill rate isn't too impressive. It is good at hidden surface removal (based on Z depth) because it has a deferred renderer, so it only has to draw each non-alpha-blended pixel once. However, a lot of the assets I use have alpha blending, mainly on the edges, but it can be in other areas too. With alpha blending enabled you can really only draw two full-screen alpha images on the iPhone if you want to render at 60 FPS (which I do). A result of this is that all the layered backgrounds in my games are baked down to a single image; this obviously means no parallax effects, and means it isn't so easy to have things interacting with the background (for example a bird flying behind a mountain and in front of the sky, as the sky+mountain are baked into a single image).

I have started looking at how I'm going to get this engine running on the Pandora, and I don't want to be limited by these same factors. On the flip side, I am not keen on having to write different game code for iPhone/Pandora, so one obvious partial solution is to improve the performance of drawing sprites with alpha on the iPhone. What I'd like to do is take a sprite, for example a human character, and generate two meshes, one for the opaque and one for the non-opaque regions. By mesh I mean one or more triangle strips. The opaque mesh I can draw with alpha blending off (fast), and the rest can be drawn with blending on (slow, but hopefully there wouldn't be too much to draw, as the alpha blending is generally near the edge of the sprite).
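
Just to make the two-pass part concrete, this is roughly the draw order I mean (a GL ES 1.x sketch only; drawMesh() and the SpriteMeshes struct are placeholders rather than real engine code):
Code:
#include <OpenGLES/ES1/gl.h> // iPhone GL ES 1.x header

// Placeholder for whatever the offline split produces.
struct SpriteMeshes { /* VBO ids, vertex counts, texture id, ... */ };
void drawMesh(const SpriteMeshes&, bool opaquePart); // assumed engine helper

void drawSprite(const SpriteMeshes& sprite)
{
    // Pass 1: opaque interior. No blending, so the deferred renderer can
    // reject hidden pixels and shade each visible pixel only once.
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    drawMesh(sprite, true);

    // Pass 2: the (hopefully small) fringe that actually needs blending.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE); // usual for translucency, drawn back to front
    drawMesh(sprite, false);
}
In practice I'd batch all the opaque passes for the whole scene first and all the blended fringes afterwards, but per sprite this is the idea.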

I can easily enough take one of my assets and manually draw a couple of low-poly triangle strips around all the areas that need alpha blending, and do the same for the opaque regions, but the way I'd do it by hand isn't something I can implement programmatically in C++. The only idea I had for doing it in C++ would be to have a square (well, two triangles) representing every pixel in the image, and then start doing edge removal, e.g. if two squares are next to each other and both are fully opaque, their shared edge can be removed, and the same goes for squares with alpha blending information. By the end of that process I'd end up with a bunch of adjoining squares forming regions of the image, which would then need to be turned into an efficient triangle strip. This is where it gets tricky, because I don't really want an exact mesh that fits around the alpha pixels on the edge of the character; instead I'd want something far coarser. The coarser mesh would need to contain all the alpha pixels for the region, but there's no reason why it can't contain a bunch of fully opaque/fully transparent pixels too. Essentially it becomes a balancing game: clearly it isn't worthwhile drawing 1x1 squares (i.e. drawing each texel of the image separately), so there would need to be a way to set a minimum size for an opaque region, and probably a maximum triangle count for the entire image.
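
As a sketch of that classification/edge-removal idea (illustrative C++ only, all names made up; greedy rectangle growing stands in for the shared-edge removal, with each rect becoming two triangles instead of two per pixel):
Code:
#include <cstdint>
#include <vector>

enum class Px : uint8_t { Transparent, Opaque, Blended };

struct Rect { int x, y, w, h; Px type; };

// Classify each texel by its alpha value.
static Px classify(uint8_t a)
{
    if (a == 0)   return Px::Transparent;
    if (a == 255) return Px::Opaque;
    return Px::Blended;
}

// Grow each unvisited cell right and then down while the neighbouring
// cells have the same class, emitting one rect per region.
std::vector<Rect> mergeRegions(const uint8_t* alpha, int w, int h)
{
    std::vector<Px> cls(w * h);
    for (int i = 0; i < w * h; ++i) cls[i] = classify(alpha[i]);

    std::vector<bool> used(w * h, false);
    std::vector<Rect> rects;

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (used[y * w + x] || cls[y * w + x] == Px::Transparent)
                continue;
            const Px t = cls[y * w + x];

            int rw = 1; // extend right while same class and unused
            while (x + rw < w && !used[y * w + x + rw] && cls[y * w + x + rw] == t)
                ++rw;

            int rh = 1; // extend down while the whole row still matches
            bool rowOk = true;
            while (y + rh < h && rowOk)
            {
                for (int i = 0; i < rw; ++i)
                    if (used[(y + rh) * w + x + i] || cls[(y + rh) * w + x + i] != t)
                        { rowOk = false; break; }
                if (rowOk) ++rh;
            }

            for (int yy = 0; yy < rh; ++yy)
                for (int xx = 0; xx < rw; ++xx)
                    used[(y + yy) * w + x + xx] = true;

            rects.push_back({x, y, rw, rh, t});
        }
    return rects;
}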

Hopefully that better explains what I am getting at, and highlights why I am hoping there is an existing solution I can use rather than rolling my own, as it feels like something that is going to be pretty hard to get right. That said, even a really simple algorithm that just tries to find the largest rectangle of opaque pixels in an image, uses that as the opaque mesh and uses the rest as the transparent mesh, would most likely yield reasonable performance gains for a lot of my sprites.
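
For that simple version, the classic largest-rectangle-in-a-histogram scan would do (again just a sketch, assuming an 8-bit alpha array):
Code:
#include <cstdint>
#include <stack>
#include <vector>

struct Rect { int x, y, w, h; };

// Largest all-opaque (alpha == 255) axis-aligned rectangle, via the
// classic histogram scan - O(w*h) overall.
Rect largestOpaqueRect(const uint8_t* alpha, int w, int h)
{
    std::vector<int> height(w + 1, 0); // sentinel column at the end
    Rect best{0, 0, 0, 0};

    for (int y = 0; y < h; ++y)
    {
        // per-column run length of opaque texels ending at row y
        for (int x = 0; x < w; ++x)
            height[x] = (alpha[y * w + x] == 255) ? height[x] + 1 : 0;

        std::stack<int> s; // column indices with increasing heights
        for (int x = 0; x <= w; ++x)
        {
            while (!s.empty() && height[s.top()] > height[x])
            {
                const int hgt = height[s.top()]; s.pop();
                const int left = s.empty() ? 0 : s.top() + 1;
                if (hgt * (x - left) > best.w * best.h)
                    best = {left, y - hgt + 1, x - left, hgt};
            }
            s.push(x);
        }
    }
    return best;
}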

Steve
 
Visual example... here is my original image (scaled up for clarity; there are some sharp edges there, quite possibly legitimate issues with the art, but this is scaled to 1000% so normally it wouldn't be noticeable):

From that I can extract the opaque region:

And the translucent region:

Manually it's easy enough to draw a triangle mesh that fits the majority of the opaque region:

And finally draw a mesh that covers the translucent region (again, it doesn't matter how exact this is, the important thing is all alpha pixels must appear inside the green triangles, it doesn't matter if opaque/transparent stuff appears in here too):


Hopefully that makes it a little clearer as to what I am trying to achieve...?

Steve
 
I just wrote a longer reply but that magically disappeared... so I'll try to keep it short:

techz.png


- First create a circle with the center (Black) in the opaque region
- Move the corners of the circle (Red) towards the center until they hit opaque regions
- Create triangles towards the center (Green)
- Check if the angle between two triangles is too small and remove those if necessary (Blue)

You can adjust the quality quite easily with this; it should be fast and easy to do.
You can also grab the alpha regions by moving the corners (of the finished triangles!) outwards again until you reach a region which is fully transparent, then just create a ring like before.
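
A rough sketch of what I mean (C++; assumes a single fan center inside the opaque region, and leaves out the small-angle cleanup step):
Code:
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// March inwards from the sprite's bounding circle along numRays directions,
// stopping at the first fully opaque texel; the hits plus the center form a
// triangle fan (renderable as GL_TRIANGLE_FAN).
std::vector<Vec2> shrinkWrapFan(const uint8_t* alpha, int w, int h,
                                Vec2 center, int numRays)
{
    const float radius = 0.5f * std::sqrt(float(w * w + h * h));
    std::vector<Vec2> fan;
    fan.push_back(center); // fan center, must lie inside the opaque region

    for (int i = 0; i <= numRays; ++i) // <= so the first ray closes the fan
    {
        const float a = 6.2831853f * float(i % numRays) / float(numRays);
        const float dx = std::cos(a), dy = std::sin(a);

        Vec2 hit = center; // fallback if the ray never meets an opaque texel
        for (float t = radius; t > 0.0f; t -= 1.0f)
        {
            const int px = int(center.x + dx * t);
            const int py = int(center.y + dy * t);
            if (px < 0 || py < 0 || px >= w || py >= h)
                continue; // still outside the image
            if (alpha[py * w + px] == 255)
            {
                hit.x = center.x + dx * t;
                hit.y = center.y + dy * t;
                break;
            }
        }
        fan.push_back(hit);
    }
    return fan;
}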
 
JayFoxRox said:
- First create a circle with the center (Black) in the opaque region
- Move the corners of the circle (Red) towards the center until they hit opaque regions
- Create triangles towards the center (Green)
- Check if the angle between two triangles is too small and remove those if necessary (Blue)
This won't work for concave sprites (like my (exaggerated) first sprite in post #2; as I said, a fully working solution is difficult to find), but it should work for convex ones.
 
Yeah, but from the requirements given in the OP's last post I thought my model should work fine. You can also add support for multiple circles easily.
//Edit: Answer to the post under this one. Feel free to bump the topic ;)
There isn't much I'm doing right now, but about 1-2 weeks ago I twittered that I'm rewriting the CPU emulation with types defined in stdint.h, and that is almost done now.
I also improved performance here and there and started on some tests for a dynarec. I also did some more testing with zlib.
But right now there is not much happening. I can't get motivated to do any coding at the moment (at least not the bug hunting that is necessary for Pandora-PSP).
The good news is that I ordered a new PC which should arrive soon; the bad news is that not only did my first PSP break, but the other one followed, meaning I have no real hardware to compare my stuff against.
The second PSP that broke at least boots with the Pandora battery, so I'm sure I will be able to recover it. The other one probably won't come back to life (it doesn't even charge the battery anymore).
 
The circle/triangle fan type approach is interesting; it's similar to what I hinted at near the end about finding the largest rectangle that fits in the middle of the sprite, although a more circular primitive would probably work better just about all the time. I need a solution that works for more than just convex sprites, but the algorithm could be adapted to handle that. I think some sprites will naturally work better with this approach than others: the example given would work pretty well, whereas a 'tree' with a long thin trunk, branches coming off, etc. wouldn't work as well. But there's no reason not to implement a bunch of different techniques, then just let the code cycle through them and work out which is best, on a sprite-by-sprite basis.

Thanks for your help; if anyone else has any other ideas, please join in!

Steve
 
Rockthesmurf said:
The circle/triangle fan type approach is interesting; it's similar to what I hinted at near the end about finding the largest rectangle that fits in the middle of the sprite, although a more circular primitive would probably work better just about all the time. I need a solution that works for more than just convex sprites, but the algorithm could be adapted to handle that. I think some sprites will naturally work better with this approach than others: the example given would work pretty well, whereas a 'tree' with a long thin trunk, branches coming off, etc. wouldn't work as well. But there's no reason not to implement a bunch of different techniques, then just let the code cycle through them and work out which is best, on a sprite-by-sprite basis.

Thanks for your help; if anyone else has any other ideas, please join in!

Steve

Hm, maybe I should give it a shot and try to find an algorithm.
The biggest issue that I can think of at the moment with the "circle approach" above is that there will be non-ideal primitives that are difficult to map textures onto correctly (and you'll get aliasing issues at the edges of primitives).

So, here's how I would do it; my method uses a binary fractal algorithm kind of thing.

First, let's pick this as our sprite (the blue circle is transparent in case you can't tell):
baseq.png


Now, the algorithm will be a recursive one, and in my example I assume that the sprite's size is a power of two, but this is not required and the algorithm can easily be adjusted.

First, we need an image slicer that produces a slice fractal for us.
It will take the image and slice it into 4 slices. A slice can be one of 4 types: fully transparent, semi-transparent, opaque, or mixed. The slicer divides the input slice in half horizontally and vertically, and then determines which of the types above each resulting slice belongs to. Then, if a slice is of type "mixed", it calls the algorithm recursively on that slice.

As its output, the slicer will generate a tree of slices; some slices will end up being as small as 1 pixel.

Pseudocode:
Code:
function partition(Slice slice)
  slices[] = slice.sliceUp      // split into 4 quadrant slices
  result = for each s in slices
    type = s.type
    if type == mixed
      yield partition(s)        // recurse into mixed quadrants
    else
      yield Leaf(s)             // uniform quadrant: stop here
  return Node(result)
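
In actual C++ the slicer could look something like this (a sketch with made-up types; assumes a square, power-of-two alpha channel):
Code:
#include <cstdint>
#include <memory>
#include <vector>

enum class SliceType { Transparent, SemiTransparent, Opaque, Mixed };

struct Slice { int x, y, size; };

struct Node
{
    Slice slice;
    SliceType type;
    std::vector<std::unique_ptr<Node>> children; // empty for leaves
};

// Classify a square region of the alpha channel.
static SliceType classify(const uint8_t* alpha, int w, Slice s)
{
    bool anyOpaque = false, anyClear = false, anySemi = false;
    for (int y = s.y; y < s.y + s.size; ++y)
        for (int x = s.x; x < s.x + s.size; ++x)
        {
            const uint8_t a = alpha[y * w + x];
            if (a == 0)        anyClear  = true;
            else if (a == 255) anyOpaque = true;
            else               anySemi   = true;
        }
    if (anySemi && !anyOpaque && !anyClear) return SliceType::SemiTransparent;
    if (anySemi || (anyOpaque && anyClear)) return SliceType::Mixed;
    return anyOpaque ? SliceType::Opaque : SliceType::Transparent;
}

// Recursive partition(): split Mixed slices into 4 quadrants until they
// are uniform (or 1 pixel).
std::unique_ptr<Node> partition(const uint8_t* alpha, int w, Slice s)
{
    auto node = std::make_unique<Node>();
    node->slice = s;
    node->type = classify(alpha, w, s);

    if (node->type == SliceType::Mixed && s.size > 1)
    {
        const int half = s.size / 2;
        const Slice quads[4] = {
            {s.x,        s.y,        half}, {s.x + half, s.y,        half},
            {s.x,        s.y + half, half}, {s.x + half, s.y + half, half}};
        for (const Slice& q : quads)
            node->children.push_back(partition(alpha, w, q));
    }
    return node;
}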

The next step of the algorithm is a slice pattern recognizer. This is what will yield the actual mesh data. The idea is that we want to take the slice tree and flatten it recursively. This will be done by using pattern recognition methods similar to those used by the hqnx pixel art scaling algorithms. Basically, you have a function that takes a tree and yields some kind of shape information.

Pseudocode:
Code:
function patternize(Node node)
  if node.pattern == {#, #,
                      #, #}
    return Filled(#)
  else if node.pattern == {#, %,
                           %, %}
    return Slope45Dgr(#, %)
  else if node.pattern == {#, #,
                           %, %}
    return Horiz(#, %)
  else if node.pattern == {Filled(#), Filled(#),
                           Slope45Dgr(#, %), Horiz(#, %)}
    return Rounded(#, %)
  //.... etc ....
The % and # in this case are symbols for respective recursive calls to the patternize function; I just wanted to save space.

Of course, the code above uses abstract class-like constructs for everything, but in reality you probably would use some kind of vector structure.

Finally, you just generate mesh information from the shape metadata.

I ran this algorithm by hand (in GIMP), and ended up with this:
vectorized.png


...so, if you have a good pattern recognition algorithm, you can get close-to-ideal results by using an algorithm like this.
 
Rockthesmurf said:
Does anyone know of an existing library/function/algorithm that will take a sprite consisting of RGBA channels, and generate two meshes, one of the regions that need alpha blending, and one of the opaque regions? I can think of a few ways to implement something myself, but I can also think of multiple reasons why the implementations would be less than optimal and over complicated. It feels like the kind of thing that should already exist but my Google skills haven't found anything overly helpful yet.

Steve

Sounds interesting.
Could you explain what you want to use this for?
If it's to speed up rendering on the PVR you might be better off rendering the whole sprite as alpha blended (with no alpha testing) after the opaque scene has been rendered.

[Edit] Sorry, I didn't read the other posts before posting, so what I said could be utterly useless.
 
The circle method could still be used if you separated the concave polygon into its convex parts. You could probably modify a convex hull creation algorithm to split it into convex pieces instead of just creating the hull. Then the circle algorithm could be run on each convex part. I'm not sure if this is necessarily faster than dflemstr's algorithm; you'd have to do a big-O analysis of both methods. It would probably be easier to code (even if it might be less efficient, which it very well might not be), as you wouldn't have to worry about a good pattern-recognition algorithm.

Here's Wikipedia's entry for the Graham scan, a relatively quick (and fairly simple) algorithm that runs in O(n log n) time and creates a convex hull. It could be tweaked fairly easily to let you split a concave object into convex objects.

EDIT: For any circular object, though, making a convex hull is not a terribly good idea.
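
For reference, a sketch of the hull step itself (this is the monotone-chain variant rather than the classic Graham scan, the concave-splitting tweak is not included, and the types are made up):
Code:
#include <algorithm>
#include <vector>

struct Pt { int x, y; };

// z-component of the cross product (O->A) x (O->B):
// > 0 means a counter-clockwise turn at O.
static long long cross(Pt o, Pt a, Pt b)
{
    return (long long)(a.x - o.x) * (b.y - o.y)
         - (long long)(a.y - o.y) * (b.x - o.x);
}

// Convex hull in O(n log n) (monotone chain). Input could be e.g. the
// boundary texels of the opaque region.
std::vector<Pt> convexHull(std::vector<Pt> pts)
{
    std::sort(pts.begin(), pts.end(),
              [](Pt a, Pt b) { return a.x < b.x || (a.x == b.x && a.y < b.y); });
    pts.erase(std::unique(pts.begin(), pts.end(),
              [](Pt a, Pt b) { return a.x == b.x && a.y == b.y; }),
              pts.end());
    if (pts.size() < 3) return pts;

    std::vector<Pt> hull(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); ++i) // lower hull
    {
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i-- > 0;) // upper hull
    {
        while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1); // last point equals the first
    return hull;
}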
 
Rockthesmurf said:
Okay, a few more details. One of the biggest performance factors I currently have with my iPhone engine is that the iPhone's fill rate isn't too impressive. It is good at hidden surface removal (based on Z depth) because it has a deferred renderer, so it only has to draw each non-alpha-blended pixel once. However, a lot of the assets I use have alpha blending, mainly on the edges, but it can be in other areas too. With alpha blending enabled you can really only draw two full-screen alpha images on the iPhone if you want to render at 60 FPS (which I do). A result of this is that all the layered backgrounds in my games are baked down to a single image; this obviously means no parallax effects, and means it isn't so easy to have things interacting with the background (for example a bird flying behind a mountain and in front of the sky, as the sky+mountain are baked into a single image).

I have started looking at how I'm going to get this engine running on the Pandora, and I don't want to be limited by these same factors. On the flip side, I am not keen on having to write different game code for iPhone/Pandora, so one obvious partial solution is to improve the performance of drawing sprites with alpha on the iPhone. What I'd like to do is take a sprite, for example a human character, and generate two meshes, one for the opaque and one for the non-opaque regions. By mesh I mean one or more triangle strips. The opaque mesh I can draw with alpha blending off (fast), and the rest can be drawn with blending on (slow, but hopefully there wouldn't be too much to draw, as the alpha blending is generally near the edge of the sprite).
somewhat off-topic, but you gave me an incentive to finally do some more thorough fillrate testing of the iphone (i've been quite negligent to this device as of late, for shame).

your blending fillrate observations were confirmed, but i'm not sure it can be attributed to fillrate alone. according to my quick test, iphone's fillrate is further affected (rigidly, at that) by the source texture size:

* at 256x256 texture you get 2 full-screen draws at a constant 60 fps. a third redraw already brings hiccups.
* at 128x128 you get 3 draws at a smooth 60.
* at 64x64 you get 4 draws at a smooth 60.
* at no texture you get 5 draws at a smooth 60.

the above seems to jibe well with the apocryphal info that the mbx in the phone is clocked at ~50MHz, IIRC. if we assume a max rate of a single fragment per clock, that should allow 5e7 / (320 * 480) = 325 full-screen draws/sec, or @60 fps a 5.4x (over)draw factor. moreover, that figure should not be affected by bandwidth, as those read-modify-writes into the framebuffer (i.e. blending) are actually carried in the high-speed local tile buffer.

finally, using a 'heavier' TU blending op seems to have no performance impact - results were the same for GL_REPLACE and GL_DOT3_RGB alike.
 
darkblu said:
Rockthesmurf said:
Okay, a few more details. One of the biggest performance factors I currently have with my iPhone engine is that the iPhone's fill rate isn't too impressive. It is good at hidden surface removal (based on Z depth) because it has a deferred renderer, so it only has to draw each non-alpha-blended pixel once. However, a lot of the assets I use have alpha blending, mainly on the edges, but it can be in other areas too. With alpha blending enabled you can really only draw two full-screen alpha images on the iPhone if you want to render at 60 FPS (which I do). A result of this is that all the layered backgrounds in my games are baked down to a single image; this obviously means no parallax effects, and means it isn't so easy to have things interacting with the background (for example a bird flying behind a mountain and in front of the sky, as the sky+mountain are baked into a single image).

I have started looking at how I'm going to get this engine running on the Pandora, and I don't want to be limited by these same factors. On the flip side, I am not keen on having to write different game code for iPhone/Pandora, so one obvious partial solution is to improve the performance of drawing sprites with alpha on the iPhone. What I'd like to do is take a sprite, for example a human character, and generate two meshes, one for the opaque and one for the non-opaque regions. By mesh I mean one or more triangle strips. The opaque mesh I can draw with alpha blending off (fast), and the rest can be drawn with blending on (slow, but hopefully there wouldn't be too much to draw, as the alpha blending is generally near the edge of the sprite).
somewhat off-topic, but you gave me an incentive to finally do some more thorough fillrate testing of the iphone (i've been quite negligent to this device as of late, for shame).

your blending fillrate observations were confirmed, but i'm not sure it can be attributed to fillrate alone. according to my quick test, iphone's fillrate is further affected (rigidly, at that) by the source texture size:

* at 256x256 texture you get 2 full-screen draws at a constant 60 fps. a third redraw already brings hiccups.
* at 128x128 you get 3 draws at a smooth 60.
* at 64x64 you get 4 draws at a smooth 60.
* at no texture you get 5 draws at a smooth 60.

the above seems to jibe well with the apocryphal info that the mbx in the phone is clocked at ~50MHz, IIRC. if we assume a max rate of a single fragment per clock, that should allow 5e7 / (320 * 480) = 325 full-screen draws/sec, or @60 fps a 5.4x (over)draw factor. moreover, that figure should not be affected by bandwidth, as those read-modify-writes into the framebuffer (i.e. blending) are actually carried in the high-speed local tile buffer.

finally, using a 'heavier' TU blending op seems to have no performance impact - results were the same for GL_REPLACE and GL_DOT3_RGB alike.

Thank you for doing those tests, the information is very interesting. I don't own an iPhone/iPod Touch myself; I've just written cross-platform GL/AL/file-IO code, and someone else on the team has an iPod Touch and Mac that we compile the code on, so it is hard for me to run any tests myself. I mainly just get third-hand information or details from tech specs.

Either way, the net result that drawing a smaller amount with alpha blending on is a good idea appears to hold. An interesting test would be to see how quickly the iPhone/iPod Touch becomes transform bound, which might be a bit harder to measure. One thing I do know is that when I look through all my assets, just about all of them have the vast majority of their pixels' alpha set to 0 or 255, so it certainly feels like there is a lot of scope for isolating the 1-254 regions, provided it can be done in a way that doesn't generate *too* many vertices!
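
As a quick sanity check of that, something trivial like this could audit the assets (sketch only; assumes tightly packed RGBA8 data):
Code:
#include <cstdint>
#include <cstdio>

// Quick asset audit: what fraction of texels actually needs blending?
// 'rgba' is w*h*4 bytes; alpha is the 4th byte of each texel.
void auditAlpha(const uint8_t* rgba, int w, int h)
{
    int clear = 0, opaque = 0, blended = 0;
    for (int i = 0; i < w * h; ++i)
    {
        const uint8_t a = rgba[i * 4 + 3];
        if (a == 0)        ++clear;
        else if (a == 255) ++opaque;
        else               ++blended; // the 1-254 region we want to isolate
    }
    std::printf("transparent %.1f%%  opaque %.1f%%  blended %.1f%%\n",
                100.0 * clear   / (w * h),
                100.0 * opaque  / (w * h),
                100.0 * blended / (w * h));
}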

Steve
 
darkblu said:
... moreover, that figure should not be affected by bandwidth, as those read-modify-writes into the framebuffer (i.e. blending) are actually carried in the high-speed local tile buffer.

And where do you think the texture comes from?

What color depth is your texture? Have you tried using compressed texture formats (if available on the iPhone)?

If you're drawing all your textures full-screen and higher-res textures cause a slowdown, then that suggests to me you might be hitting a bandwidth issue.

If I'm not talking sense ignore me ;) (sleepy)
 
linuxhacker said:
darkblu said:
... moreover, that figure should not be affected by bandwidth, as those read-modify-writes into the framebuffer (i.e. blending) are actually carried in the high-speed local tile buffer.

And where do you think the texture comes from?

What color depth is your texture? Have you tried using compressed texture formats (if available on the iPhone)?

If you're drawing all your textures full-screen and higher-res textures cause a slowdown, then that suggests to me you might be hitting a bandwidth issue.

If I'm not talking sense ignore me ;) (sleepy)
now that i reread my post i see how you interpreted it the way you did.

of course texture fetches are bandwidth sensitive. but by bandwidth i meant bandwidth available for framebuffer writes, at maximal fragment engine output. i guess my work baggage shows - for the past couple of years i've been working with architectures which have such a narrow framebuffer channel (vis-a-vis their fragment engine output) that it pays to disable depth writes whenever possible, lest there goes your max fillrate.

to answer your question - textures were 32-bit, uncompressed. as i mentioned, it was a quick test - for the purpose i 'adapted' some other code of mine.
 
I'm trying to understand what your test app is doing. Are you rendering these textures full-screen on top of each other (alpha blended)? If so, then the amount of work the renderer has to do is the same regardless of the texture resolution, which makes me think it might be a bandwidth issue in the texture pipeline, as the higher the texture resolution, the more memory accesses the texture unit has to do.

I guess it doesn't really matter, as if that is the issue then what you're doing in trying to minimise the amount of alpha pixels would still help, since for most of the pixels you'll only have to do 1 texture lookup rather than however many textures you have on top of each other.
 
linuxhacker said:
I'm trying to understand what your test app is doing. Are you rendering these textures full-screen on top of each other (alpha blended)?
i alpha-blend N times a position-fixed spinning cube, whose mesh happens to fill up the entire screen. now, due to my texture setup (GL_LINEAR_MIPMAP_NEAREST) and the spinning of the mesh, there's texture LOD logic involved too, but that should not be detrimental to the test, as Rockthesmurf was looking for worst-case performance (and so was i).

If so, then the amount of work the renderer has to do is the same regardless of the texture resolution, which makes me think it might be a bandwidth issue in the texture pipeline, as the higher the texture resolution, the more memory accesses the texture unit has to do.
yes, chances are it is a bandwidth issue at the feeding end of the fragment pipeline. but as long as one uses mipmapping and no scaling, there should be a saturation point to the 'situation worsening' from the POV of the texture caches.

I guess it doesn't really matter, as if that is the issue then what you're doing in trying to minimise the amount of alpha pixels would still help, since for most of the pixels you'll only have to do 1 texture lookup rather than however many textures you have on top of each other.
yes, minimising fillrate scenarios is a primary concern on the mbx. Rockthesmurf still has the option to try carrying the parallax on-board the fragment pipeline - after all, the mbx lite has 2 TUs. i, for one, though, do not expect much of a win from that, as (1) i expect the fragment pipeline's max throughput to scale down proportionally with the tex lookups, and (2) as i already mentioned, the output end of the pipeline should not pose a bottleneck.
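
fwiw, the 2-TU route would look roughly like this under GL ES 1.1 (just a sketch - texFar/texNear are assumed texture ids, and each TU would also need its own texcoord array via glClientActiveTexture, which i've omitted):
Code:
#include <OpenGLES/ES1/gl.h> // assumed GL ES 1.1 target

// Blend two background layers in a single pass on the second TU, using
// the near layer's alpha to pick between them (GL_INTERPOLATE combiner).
void setupParallaxCombine(GLuint texFar, GLuint texNear)
{
    glActiveTexture(GL_TEXTURE0); // TU0: the distant layer, fetched as-is
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texFar);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1); // TU1: near layer, keyed over TU0's result
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texNear);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);       // near colour
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);      // far colour
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);       // near alpha...
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA); // ...as mix factor
}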

heck, i think it's about time i ported my full-scale fillrate test app to this thing : )
 