Direct (close-to-the-metal) open-source SGX driver


blu said:
the number of vertices also affects the number of polygons per scene. also, per-tile binning is a function of the size of the triangles, as their size directly contributes to the number of triangles in the bins. nothing differs in this regard from per-pixel binning (aside from the scale) - even the maximal bin size would be the same.

While that's true, per-tile binning is much, much less fine-grained, and the amount of work done per polygon would be much more stable.

blu said:
anyway, regardless of any binning policies, a TBDR needs to arbitrate pixel ownership within its working tile - whether it's done through per-pixel binning, or through other means, a scan-conversion is inevitable.

Of course it has to be scan converted later, but the point of TBDR is to make the working triangle set per tile much smaller to allow the deferred rendering procedure, and also to be able to throw out triangles that are completely obscured by another one in a tile.

Sure, pixel ownership has to be determined, but if it's done at the tile rendering stage then this is not information that has to be stored intermediately. But, even if it is, it only has to have buffers deep enough for one tile.

As far as binning is concerned, and assuming not too much internal fragmentation, per tile lists should take up less space than a typical screen resolution sized buffer, and for that matter less space than the display list itself. It should still be within hundreds of KB. I just don't really think that there's a ton of extra memory involved in binning and rasterizing simultaneously.
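To put rough numbers on that (everything here is an assumption for illustration: an 800x480 screen, 32x32 tiles, 4-byte triangle references in the bins, and a 20k-triangle scene):

#include <stdio.h>

/* Back-of-envelope check of the 'hundreds of KB' estimate. All inputs
   are illustrative assumptions, not measured figures. */
int main(void)
{
    int tiles = (800 / 32) * (480 / 32);   /* 375 tiles                   */
    int tris = 20000;                      /* assumed scene size          */
    double tiles_per_tri = 1.5;            /* small tris mostly hit 1 bin */
    double list_bytes = tris * tiles_per_tri * 4;

    printf("%d tiles, ~%.0f KB of tile lists vs %d KB for a 32bpp framebuffer\n",
           tiles, list_bytes / 1024, 800 * 480 * 4 / 1024);
    return 0;
}

Under those assumptions the lists come to ~117 KB against a 1500 KB framebuffer, which is right where the "hundreds of KB" figure sits.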
 
Exophase said:
Of course it has to be scan converted later, but the point of TBDR is to make the working triangle set per tile much smaller to allow the deferred rendering procedure, and also to be able to throw out triangles that are completely obscured by another one in a tile.

Sure, pixel ownership has to be determined, but if it's done at the tile rendering stage then this is not information that has to be stored intermediately. But, even if it is, it only has to have buffers deep enough for one tile.

As far as binning is concerned, and assuming not too much internal fragmentation, per tile lists should take up less space than a typical screen resolution sized buffer, and for that matter less space than the display list itself. It should still be within hundreds of KB. I just don't really think that there's a ton of extra memory involved in binning and rasterizing simultaneously.
i believe we may have misunderstood each other somewhere in the previous posts - i never assumed there would not be per-tile binning - everything i've said so far has been in the context of a tile. i was trying to hypothesise on what happens after geometry has been tile-binned, at the stage where pixel ownership resolution needs to occur. and i believe that was maciek's original point with the bitmap - pixel ownership resolution.

so, to recap the binning-ownership link:

  • scene geometry gets distributed to tiles (tile binning)
  • for the working tile (one or several, depending on the GPU configuration)
    • for each pixel in the tile, resolve ownership, resulting in a 'top opaque occluder triangle id' and/or a bunch of 'above-top translucent triangle ids'.
    • for each triangle in the tile (preferably grouped by draw call), render its portion touching the tile (into tile mem), masking it with the pixel-ownership data.
    • tile done - copy over to the corresponding spot in the back buffer

i hope that clears any possible misunderstandings.
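to make the recap concrete, here's a rough C sketch of the ownership-resolution step - all names and the z-convention are made up for illustration; triangle_depth() stands in for whatever the hardware scan-converter actually computes:

#include <float.h>

#define TILE_W 32
#define TILE_H 32
#define NO_TRI (-1)

extern int   tri_count;                              /* triangles binned to this tile */
extern float triangle_depth(int tri, int x, int y);  /* <0 if tri doesn't cover (x,y) */
extern int   is_opaque(int tri);

static int   owner[TILE_H][TILE_W];  /* 'top opaque occluder triangle id' per pixel */
static float depth[TILE_H][TILE_W];

void resolve_ownership(void)
{
    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++) {
            owner[y][x] = NO_TRI;
            depth[y][x] = FLT_MAX;
            for (int t = 0; t < tri_count; t++) {
                float z = triangle_depth(t, x, y);
                if (z >= 0.0f && z < depth[y][x] && is_opaque(t)) {
                    depth[y][x] = z;     /* nearer opaque fragment wins */
                    owner[y][x] = t;
                }
            }
        }
    /* the render step then shades each triangle only where owner[][]
       matches its id, i.e. masked by the pixel-ownership data */
}

(the 'above-top translucent ids' would be collected in the same pass; omitted here for brevity.)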

ed: i suck at formatting with tags.
 
I found a diamond in the rough (or two in fact). :)

Roaming through the world wide web of information I came across two PDF documents:

I leafed through both documents and it seems that blu was right on the money about the 'tiling accelerator'. It does perspective division, and it's tightly coupled with the memory hierarchy... It seems that the 'tiling accelerator' is a critical part of the PowerVR architecture.

I'll post something more after more thorough digging/analysis. But don't wait - feel free to share your thoughts. :wink:
 
maciek_urbanski: Thanks for the links. I haven't gotten around to reading the first yet but the second was very informative. If SGX follows this model then it should be possible to feed the rendering stages without using the tile accelerator (and I would guess that it is necessary to manually drive the latter whether or not you supply it with TA style display lists). Of course this would require knowledge of the tile based display list format instead of the general display list format the TA would receive. I wonder how the two would really differ, if much at all.

I'm also rather puzzled by how "2Dvia3D" operates - since it is done by the TA (which just generates display lists), it would seem to be a preprocessor that turns 2D commands into equivalent 3D ones and pares down the feature set. The name 2Dvia3D would suggest this too. But this is a little troubling, since to perform "depalettization" it would have to cache the results, and that would eat up memory coming from nowhere. I also wouldn't have expected the 3D hardware to be capable of the ROPs provided, and I don't see how translation could perform per-pixel effects.

What also seemed interesting is the inclusion of support for an external Z-buffer. This should make it possible to render a scene in multiple stages, although I'm not sure if this would ever be useful.
 
maciek_urbanski said:
ARM MBX™ HR-S 3D Graphics Core Technical Overview is a very informative (I dare to say 'probably complete') document describing internals of MBX.

Seeing this document made me realize that the old 2700G of the Dell X50V used to be an MBX core (i.e., the SGX530 being used now is the successor to this 2700G). It was stacked with 16 MB of local memory on a 100 MHz, 32-bit bus (maximum 400 MB/s theoretical bandwidth). Last sentence stolen from Wiki. ;) The performance was rated at 1M triangles/sec.

The Pandora's OMAP3530 has 128MB @ 166 MHz * 2 * 4 = 1.3 GB/s. In other words, while bandwidth has increased 3-fold, the GPU's performance has increased to 10M triangles/sec (14M according to some sources) - about a 10 to 14 times increase in performance vs. only a 3 times increase in bandwidth. Is the SGX530 not going to be terribly crippled by the lack of bandwidth? That's of course not taking into account that the 128MB of RAM is now shared, so bandwidth is going to be even more limited...
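To make the comparison explicit, a quick back-of-envelope check (using only the figures quoted above):

#include <stdio.h>

int main(void)
{
    /* 2700G/MBX: 100 MHz, 32-bit SDR local memory */
    double mbx_bw  = 100e6 * 4;          /* 400 MB/s   */
    /* OMAP3530: 166 MHz, 32-bit DDR (two transfers per clock) */
    double omap_bw = 166e6 * 2 * 4;      /* ~1.33 GB/s */

    printf("MBX: %.0f MB/s, OMAP3530: %.2f GB/s, ratio %.1fx\n",
           mbx_bw / 1e6, omap_bw / 1e9, omap_bw / mbx_bw);
    return 0;
}

Which prints a ratio of about 3.3x.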

Btw, here are some more documents. I don't know if they are helpful, but they seem to have been updated until September 2008 ( did not notice these before using standard searches in the past ):
Texas Instruments OMAP35x SOM-LV
OMAP35x SOM-LV Hardware Specification

And for the image junkies amongst us:
[attached image: 1538UTH_1.gif]
 
Exophase said:
maciek_urbanski: Thanks for the links. I haven't gotten around to reading the first yet but the second was very informative. If SGX follows this model then it should be possible to feed the rendering stages without using the tile accelerator (and I would guess that it is necessary to manually drive the latter whether or not you supply it with TA style display lists). Of course this would require knowledge of the tile based display list format instead of the general display list format the TA would receive. I wonder how the two would really differ, if much at all.
What would be the usage scenario that would benefit from emulating TA ?

Exophase said:
But this is a little troubling, since to perform "depalettization" it would have to cache the results, and that would eat up memory coming from nowhere.
There is no benefit from caching it. Depalettization [256->16777216] can be done using one LUT of 768 bytes (or 1024 bytes with padding), and it would fit comfortably into the SGX cache. It takes only one data-dependent lookup to depalettize, so caching is not worth the effort...
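A minimal sketch of what I mean - the names are mine, not from the docs; I assume 8-bit indices expanded through a padded 4-bytes-per-entry table:

#include <stdint.h>
#include <stddef.h>

/* 256 entries padded to 4 bytes each = 1024 bytes total, so every
   lookup is a single aligned 32-bit load and the whole table sits
   comfortably in cache. */
static uint32_t palette_lut[256];

/* Expand 8-bit palette indices to 32-bit pixels: one data-dependent
   lookup per pixel, nothing worth caching beyond the LUT itself. */
void depalettize(const uint8_t *src, uint32_t *dst, size_t count)
{
    for (size_t i = 0; i < count; i++)
        dst[i] = palette_lut[src[i]];
}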

Exophase said:
I also wouldn't have expected the 3D hardware to be capable of the ROPs provided
ROPs are very cheap in silicon (low gate count) - and placing them in the TA means the 3D pipeline isn't clogged by 2D processing. So it seems like a good idea.

Exophase said:
...and I don't see how translation could perform per-pixel effects.
You lost me here... what translation ?
...or maybe you assumed that the TA is doing only 3D work. This was my feeling until I read those MBX docs. Clearly the TA has some very simplistic micro-rasterizer inside. The question is - does the SGX have it too ?

Exophase said:
What also seemed interesting is the inclusion of support for an external Z-buffer. This should make it possible to render a scene in multiple stages, although I'm not sure if this would ever be useful.
Sadly it's a necessity, because some APIs allow binding the Z-buffer as a texture (for effects like 'depth of field' or the recent craze, 'screen space ambient occlusion').

Edited: Some (obviously idiotic) suggestion made by me was removed. :oops:
 
maciek_urbanski said:
Sadly it's a necessity, because some APIs allow binding the Z-buffer as a texture (for effects like 'depth of field' or the recent craze, 'screen space ambient occlusion').
why 'sadly'? some very fundamental algorithms rely on reading from the z-buffer! like depth-buffer shadows ; )

of course, now that the general pixel pipeline (down to the render target) has reached fp32 precision levels, reading from the actual z-buffer (traditionally the source of highest-precision depth info in the system) can step back in favor of general-pixel-attribute interpolants.
 
maciek_urbanski said:
What would be the usage scenario that would benefit from emulating TA ?

In practical terms, nothing I can think of (unless you had something else that was somehow more powerful than it at what it does, but that's doubtful). I just thought it might be useful to circumvent it for the sake of reverse engineering.

maciek_urbanski said:
You lost me here... what translation ?
...or maybe you assumed that the TA is doing only 3D work. This was my feeling until I read those MBX docs. Clearly the TA has some very simplistic micro-rasterizer inside. The question is - does the SGX have it too ?

This basically sums up what I was getting at. The document only describes the Tile Accelerator as being a display list processor - it takes in display lists and outputs display lists, performing clipping/culling, tile binning, and optionally geometry transformation. The name "2Dvia3D" implies that it takes 2D commands and outputs a 3D display list that will then perform the 2D operations via the 3D rasterization units (HSR, texture shader, pixel blender, etc). That the TA not only has access to the real framebuffer, but also has a scaling engine, alpha blending (both of these redundant), and ROPs, is really weird IMO. It seems as if the 2Dvia3D block should be later on down the pipe.
 
benjiro said:
The Pandora's OMAP3530 has 128MB @ 166 MHz * 2 * 4 = 1.3 GB/s. In other words, while bandwidth has increased 3-fold, the GPU's performance has increased to 10M triangles/sec (14M according to some sources) - about a 10 to 14 times increase in performance vs. only a 3 times increase in bandwidth.
SGX does vertex processing using hardware shader engines; MBX did it on the host CPU (or using an additional (read: optional) HW component called the VGP).

benjiro said:
Is the SGX530 not going to be terribly crippled by the lack of bandwidth? That's of course not taking into account that the 128MB of RAM is now shared, so bandwidth is going to be even more limited...
Here (link) at page 146 (table 4-17) we can read that the SGX will be clocked at 110.67MHz max. For a 32-bit connection to the L3 bus (it is not 64-bit, as I assumed before) this gives 442.68 MB/s ≈ 0.43 GB/s.
If vertex processing takes 15 clocks minimum, this gives 7.378 M[tris/sec]. This means that for vertex formats below 50 [bytes/vertex] the SGX should not be bandwidth limited on vertex processing.
Of course those calculations are based on marketing materials, so there may be some 'inconsistencies' (and they assume that no pixel processing is done...).
It means that if vertex shaders occupy 30% of the pipeline (depending on workload, of course), at 30 fps that gives less than 74,000 polygons on screen (assuming long triangle strips). For an 800x480 screen that is 5.2 pixels/triangle, so it's still plenty detailed. But it also means that a level-of-detail system for displayed geometry is a must (in a 3D game at least).
Also - doing vertex processing on DSP seems more and more attractive...
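For reference, the same back-of-envelope math in code form (the inputs are the datasheet/marketing figures above, not measurements):

#include <stdio.h>

int main(void)
{
    const double sgx_clk_hz     = 110.67e6; /* SGX max clock (table 4-17)      */
    const double clocks_per_vtx = 15.0;     /* assumed vertex-processing cost  */
    const double bus_bytes      = 4.0;      /* 32-bit L3 connection            */
    const double shader_share   = 0.30;     /* pipeline share spent on vertices */
    const double fps            = 30.0;

    double tris_per_sec = sgx_clk_hz / clocks_per_vtx;       /* ~7.38 M/s   */
    double bandwidth    = sgx_clk_hz * bus_bytes;            /* ~443 MB/s   */
    double max_vtx_size = bandwidth / tris_per_sec;          /* ~60 B/vertex */
    double tris_per_frm = tris_per_sec * shader_share / fps; /* ~74 k       */

    printf("%.2f Mtris/s, %.0f MB/s, %.0f B/vertex max, %.0f tris/frame\n",
           tris_per_sec / 1e6, bandwidth / 1e6, max_vtx_size, tris_per_frm);
    return 0;
}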

benjiro said:
Btw, here are some more documents. I don't know if they are helpful, but they seem to have been updated until September 2008 ( did not notice these before using standard searches in the past ):
Texas Instruments OMAP35x SOM-LV
OMAP35x SOM-LV Hardware Specification
When I access http://www.logicpd.com it gives me empty HTML (maybe a region lock).
Can you attach this document to your post ?

THX for input benjiro. :)

Edited: SGX has a 32-bit (bidirectional) interface to the internal L3 interconnect, instead of the 64-bit I assumed. Changed calculations.
 
This is how the SGX can clock on Pandora: fixed 90MHz or on a 3, 4, 5, or 6 divider from the CPU clock. People have been using it at 166MHz successfully, and it's assumed that 200MHz will work. It can theoretically be clocked as high as 300MHz, but it's unknown if it'll hold at that speed. This information comes straight from MWeston, but I feel too lazy to dig up the post. Suffice it to say that we'll be able to run it at rates far surpassing 110MHz.

The memory bus on OMAP3530 clocks in at a fixed 166MHz, 32bit DDR SDRAM (DDR333).
 
maciek_urbanski said:
When I access http://www.logicpd.com it gives me empty HTML (maybe a region lock).
Can you attach this document to your post ?

THX for input benjiro. :)

Thanks for the information regarding the bandwidth issue.

The http://www.logicpd.com site also gives me an empty page. That's odd; I still had the original page open, and refreshing it also comes out blank. It showed several documents: one regarding the hardware implementation of the OMAP3530 (OMAP35x_SOM_HW_Spec), which I figured to be a pin layout description etc., a few other PDFs, and another one with a 1MB zip file (unable to get it, it required a login, and I did not bother with it). The zip file's description was about the hardware itself - I'm assuming the board implementation of the OMAP.

Found the OMAP35x_SOM_HW_Spec pdf still in the tmp dir.
Attaching it to the post was a no-go; after some trial & error, it's finally here:
- pdf and 001 extensions are not allowed on the board, grrr.
- And another one: "The file is too big, maximum allowed size is 256 KiB."

Placed it in a split zip. You need to recombine both parts, then just use any zip program to extract it.
 
Exophase said:
This is how the SGX can clock on Pandora: fixed 90MHz or on a 3, 4, 5, or 6 divider from the CPU clock. People have been using it at 166MHz successfully, and it's assumed that 200MHz will work. It can theoretically be clocked as high as 300MHz, but it's unknown if it'll hold at that speed. This information comes straight from MWeston, but I feel too lazy to dig up the post. Suffice it to say that we'll be able to run it at rates far surpassing 110MHz.
Please do post reference information, because sadly, the OMAP does not seem to have any (settable) divider for the SGX (see specs cited in the link above).
It does however have a divider for the DSP, and it can be clocked from 90MHz...
For reference: the SGX clock is a fixed 1/3 divider from CORE_CLK_M2. As the core clock is max. 166MHz, CORE_CLK_M2 is max. 332MHz, and the SGX is max. 110.6(6)MHz.
Overclocking the core clock to 200MHz (if possible) would result in the SGX working with a 133.3(3)MHz clock (a 20.5% overclock).

But, please prove me wrong. It would be great if SGX could be clocked higher.

Exophase said:
The memory bus on OMAP3530 clocks in at a fixed 166MHz, 32bit DDR SDRAM (DDR333).
Yup, you're right - I must change the post above...
 
maciek_urbanski said:
Please do post reference information, because sadly, the OMAP does not seem to have any (settable) divider for the SGX (see specs cited in the link above).
It does however have a divider for the DSP, and it can be clocked from 90MHz...
For reference: the SGX clock is a fixed 1/3 divider from CORE_CLK_M2. As the core clock is max. 166MHz, CORE_CLK_M2 is max. 332MHz, and the SGX is max. 110.6(6)MHz.
Overclocking the core clock to 200MHz (if possible) would result in the SGX working with a 133.3(3)MHz clock (a 20.5% overclock).

http://www.gp32x.de/board/index.php?showtopic=42277&st=95&p=617853&#

He said the same thing to me personally a few days ago.
 
Exophase said:
http://www.gp32x.de/board/index.php?showtopic=42277&st=95&p=617853&#
He said the same thing to me personally a few days ago.
Sorry for not being specific, but I was asking for a technical specification, not a forum post...

I can agree that there might be an alternative way of clocking the SGX (well - I would be very happy), but there is no mention of this in the chip maker's technical specification. At the same time, the clock for the DSP is set exactly as he writes in his post... so I have a strong suspicion of a mix-up.
Please read those specs - they are very precise - they even quote skew tolerances on individual timing pins... omitting something as fundamental as a way of setting the SGX clock (giving it a large performance boost) would be a very serious mistake (or politics :)).
 
Yeah, it's weird that it's not in the main doc. I honestly don't know what the main doc's figure is based on.

From OMAP35x Applications Processor 2D/3D Graphics Accelerator (SGX)
(http://www.ti.com/litv/pdf/spruff6b) page 10:

"SGX_FCLK is the functional clock and is used inside the SGX subsystem to generate SGX and 3D
domain clock signals.
The source of SGX_FCLK is either the PRCM clock (SGX_L3_FCLK, which is derived from
SGX_ICLK) or the peripheral DPLL clock DPLL4_M2X2_CLK as depicted in the SGX Power Domain
Clocking Scheme section of the Power, Reset, and Clock Management chapter. Selection is made at
the PRCM level by setting the PRCM.CM_CLKSEL_SGX[2] CLKSEL_SGX bit field. A divider of 3, 4,
or 6 is applied to the SGX_FCLK frequency with regard to the PRCM SGX_L3_FCLK frequency. By
default the SGX_FCLK clock is SGX_L3_FCLK / 3."

Unfortunately, it's really hard for me to dig up exact information on what the PRCM clock can be clocked to right now; maybe you can look over that more for me (this doc: http://www.ti.com/litv/pdf/sprufa5a), but as far as I'm aware it supplies the CPU clock directly. Either way, you can see that what I said about the dual clocks and divider is correct (although DPLL4_M2X2_CLK seems to be 96MHz, not 90MHz).
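To make the quoted scheme concrete, a small calculation - assuming the 332MHz SGX_L3_FCLK figure derived earlier in the thread; the dividers and the 96MHz alternate source are from the quote:

#include <stdio.h>

int main(void)
{
    /* 332 MHz is the CORE_CLK_M2 / SGX_L3_FCLK figure discussed above;
       the quoted doc lists dividers of 3 (default), 4 and 6. */
    const double sgx_l3_fclk = 332e6;
    const int dividers[] = { 3, 4, 6 };

    for (int i = 0; i < 3; i++)
        printf("SGX_FCLK = SGX_L3_FCLK / %d = %6.2f MHz%s\n",
               dividers[i], sgx_l3_fclk / dividers[i] / 1e6,
               dividers[i] == 3 ? " (default)" : "");
    printf("or SGX_FCLK = DPLL4_M2X2_CLK = 96.00 MHz\n");
    return 0;
}

The /3 default is exactly where the 110.67MHz figure comes from.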
 
on the clocks subject:

i too seem to remember reading in some specs what Exophase has quoted here (MWeston's post), but when i tried to retrace the source, i came across the omap3530 (preliminary) datasheet (i believe already linked in this thread) which is clear on the subject (p.146, table 4-17, DPLL3 Clock Frequency Ranges, VDD2 OPP3):

SGX (running off the DPLL3 loop) is rated at 110.67MHz @ OPP3 (nominal), where 3530's OPPs go up to 5 - 'overdrive' (OPP4 being 'mid-overdrive').

now, re OPP5, the datasheet does not provide numbers for DPLL3, but judging from the OPP5/OPP3 overdrive ratio for DPLL1 & 2, namely 6/5, we could conclude that DPLL3 goes up to 6/5 * 664 = 797MHz, and SGX up to 133MHz, respectively. which agrees with what maciek suggested earlier.
 
Hmm, very interesting thread :)

I can fill in some info about how clx2 works
The clx2 is broken into 2 parts, the TA (tile accelerator) and the CORE (rasteriser).

On Dreamcast, normally a game generates 3D geometry with the CPU (frame n+2), sends the geometry to the TA using DMA (frame n+1) and renders a frame using CORE (frame n). Probably the SGX works in a similar 'pipelined' fashion...

TA takes as input screen-space vertex strips, quads ('sprites') and modifier volumes and converts em to the format CORE understands. That’s all it does really.

TA has inputs for fp32 colours (various modes), 8888 colours, 16 or 32 bit vertex coordinates and other formats. TA can handle strips of any size, single quads (well, it can do quad-lists but not quad-strips), and triangle-lists (for modifier volumes only). TA has some internal memory to store information per tile (iirc maximum render target size is 2048x2048, with 32x32 tiles). TA converts vertexes to the CORE format, splits strips to the CORE format (only up to 5 triangles supported natively), stores vertexes to memory and generates the display lists for the tiles (in a linked-list-like format). TA can also do some basic clipping (by not including the geometry data on tile lists; it only works in 32 pixel units). TA calculates the bounding box of the triangle(s) and uses that to generate the display lists.
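If I read that right, the binning step could be sketched like this - a guess at the mechanism, not the real TA; add_to_tile() stands in for appending to a tile's linked list, and geometry is assumed already clipped to the viewport:

#include <math.h>

#define TILE_SHIFT 5  /* 32x32-pixel tiles, as on CLX2 */

/* Bin one triangle into every tile its screen-space bounding box
   touches. Assumes coordinates are already clipped to the viewport
   (i.e. non-negative and within the render target). */
static void bin_triangle(float x0, float y0, float x1, float y1,
                         float x2, float y2, int tri_id,
                         void (*add_to_tile)(int tx, int ty, int tri_id))
{
    int tx0 = (int)fminf(fminf(x0, x1), x2) >> TILE_SHIFT;
    int tx1 = (int)fmaxf(fmaxf(x0, x1), x2) >> TILE_SHIFT;
    int ty0 = (int)fminf(fminf(y0, y1), y2) >> TILE_SHIFT;
    int ty1 = (int)fmaxf(fmaxf(y0, y1), y2) >> TILE_SHIFT;

    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++)
            add_to_tile(tx, ty, tri_id);  /* append to that tile's display list */
}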

Region array (this must be generated by the CPU; it's in VRAM)
The region array stores information about how CORE should render the tiles. For each tile it includes the number of passes to do, whether the buffers should be cleared, and pointers to the display lists.
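A hypothetical C view of one region array entry - the field names and widths are invented for illustration; only the described contents (pass count, clear flags, list pointers) come from the description above:

#include <stdint.h>

typedef struct {
    uint8_t  tile_x, tile_y;      /* which tile this entry configures        */
    uint8_t  num_passes;          /* render passes to run for this tile      */
    uint8_t  clear_buffers;       /* nonzero: pre-clear colour/Z buffers     */
    uint32_t opaque_list;         /* VRAM pointers to per-tile display lists */
    uint32_t punchthrough_list;   /* alpha-tested ('punch-through') tris     */
    uint32_t translucent_list;    /* alpha-blended tris                      */
} region_entry_t;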

CORE is split in two parts, the ISP (Image Synthesiser Processor) and the TSP (Texture and Shading Processor). ISP does z-buffering to an internal z-buffer, then generates spans from it, RLE-compresses the spans and sends em to the TSP for calculation of the colour/etc. After all opaque triangles are processed, alpha-tested triangles follow and then alpha-blended triangles. For alpha-tested polygons the ISP has to work in parallel with the TSP (to drop the pixels that fail the alpha test). For alpha-blended polygons the ISP does multipass processing (using layer peeling) and sends each layer to be rendered on the TSP in the correct order (always in RLE spans). This is done for all passes as described in the region array. After processing is done the tile colour buffer is written to memory, with possible 2:1 downsampling (for 2x AA in the x direction) and colour conversion (internal buffers are argb8888, output can be 8888/4444/565/1555/0555). The z-buffer is lost after that and processing for the next tile begins.
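The layer-peeling part, very roughly, in C - a simplified software model, not the hardware algorithm; z grows with distance here, and tri_depth()/blend_pixel() are stand-ins for the ISP/TSP interface:

#include <float.h>

#define TILE 32

extern int   num_tris;                            /* blended tris in this tile */
extern float tri_depth(int tri, int x, int y);    /* <0 if tri misses (x,y)    */
extern void  blend_pixel(int tri, int x, int y);  /* TSP stand-in              */

void peel_tile(void)
{
    static float peeled[TILE][TILE];  /* depth of last layer peeled, per pixel */

    for (int y = 0; y < TILE; y++)
        for (int x = 0; x < TILE; x++)
            peeled[y][x] = FLT_MAX;   /* nothing peeled yet */

    for (int found = 1; found; ) {
        found = 0;
        for (int y = 0; y < TILE; y++)
            for (int x = 0; x < TILE; x++) {
                int best = -1; float bestz = -1.0f;
                /* farthest fragment not yet peeled at this pixel */
                for (int t = 0; t < num_tris; t++) {
                    float z = tri_depth(t, x, y);
                    if (z >= 0.0f && z < peeled[y][x] && z > bestz) {
                        bestz = z; best = t;
                    }
                }
                if (best >= 0) {               /* blend back-to-front */
                    blend_pixel(best, x, y);
                    peeled[y][x] = bestz;
                    found = 1;
                }
            }
    }
}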

Tiles on CLX2 are 32x32. The PC version used 16x32 tiles.

The SGX should work more or less similarly. The TA (or w/e it's called now :p) is most likely fed directly from the geometry processing stage (vertex shaders or geometry shaders or whatever...) and stores display lists in RAM. The rendering engine reads from RAM and renders to internal buffers, which are written back to RAM when a tile is done. Z-buffer write-back seems to be possible (it was possible on Kyro2 too) but it's optional (and that's why the docs suggest always clearing the z-buffer). There are probably still some fixed-function blocks for Z processing and such (doesn't really make sense to waste programmable resources on em; that's the 'ISP' part), and the TSP seems to be totally replaced by the USSE units.

Also, about the 14 MTriangles that the SGX is supposed to handle: the CLX2 was marketed as up to 7 MTriangles, and games never used even 1 million. The best games I have seen use around 800K...
 
Interesting info as usual drk.

Only 800K polygons? That's much less than one would expect. I wonder what the limiting factor was. Maybe the SH4 in the Dreamcast is not so tied down doing vector transforms all the time then (and thus less of a burden for emulation).
 
Well, the 800K is just a guess :p. The best games send ~1.1 MVerts, including the overhead of uniting quad-lists (50% iirc) and triangle strips (much lower, but close to 10%). Many games are in the low 200K's. Quite a few games don't use the FPU at all (i.e. the THPS series; they used fixed point ;p)

*back on topic*

From the docs in the SGX SDK it seems that the texture filtering is fixed-function too, and that paletted textures are very limited. CLX2 only had 1024 palette entries (configured either as 64x16 or 4x256). The palette configuration/bank was selected per texture, but the palette colour format was global per render.

Also, another interesting thing on CLX2 is that it keeps the entire render state in 12 bytes (TSP/ISP opcode, TSP opcode, TCW [texture control word]). SGX probably uses a similar structure, also storing the micro-program address (since the state has to be serialized for binning, it has to be compact...).
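As a hypothetical illustration of how compact that is (the comments below are guesses at the contents; only the 12-byte total and the three word names come from CLX2 as described above):

#include <stdint.h>

/* CLX2-style packed render state: three 32-bit words (12 bytes)
   carried with each triangle in the display list. */
typedef struct {
    uint32_t isp_tsp_word;  /* depth-compare mode, culling, Z-write, ...        */
    uint32_t tsp_word;      /* blend factors, fog, filtering, UV size, ...      */
    uint32_t tcw;           /* texture control word: format, address, palette   */
} render_state_t;

_Static_assert(sizeof(render_state_t) == 12, "state must stay 12 bytes for binning");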
It's also quite possible that the swizzling format hasn't changed; swizzled textures are stored like this on CLX2:

0 1 4 5
2 3 6 7
8 9 C D
A B E F

(0 - F are pixel numbers as stored in RAM)
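For what it's worth, that pattern is a plain Morton/Z-order curve; a minimal sketch of the offset computation, assuming square power-of-two textures (morton_offset() is my name for it, not a real API):

#include <stdint.h>

/* x bits go to the even positions of the address, y bits to the odd
   positions, matching the 4x4 pattern above (e.g. (1,0)->1, (0,1)->2). */
static uint32_t morton_offset(uint32_t x, uint32_t y)
{
    uint32_t offset = 0;
    for (int bit = 0; bit < 16; bit++) {
        offset |= ((x >> bit) & 1u) << (2 * bit);      /* x -> even bits */
        offset |= ((y >> bit) & 1u) << (2 * bit + 1);  /* y -> odd bits  */
    }
    return offset;
}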
 
drkIIRaziel said:
For alpha-tested polygons the ISP has to work in parallel with the TSP (to drop the pixels that fail the alpha test).
interesting. that sheds some light on one optimisation remark in the iphone programming guide, where they advise against alpha testing, suggesting alpha blending instead - that one has kept me wondering ever since.
 