Direct (close-to-the-metal) open-source SGX driver


Exophase said:
Unfortunately, it's really hard for me to dig up exact information on what the PRCM clock can be clocked to right now, maybe you can look over that more for me (this doc: http://www.ti.com/litv/pdf/sprufa5a) but as far as I'm aware it supplies the CPU clock directly. Either way, you can see that what I said about the dual clocks and divider is correct (although DPLL4_M2X2_CLK seems to be 96MHz, not 90MHz)
I've read and re-read the document you cited for a while, and this is what I think. Sadly, it's not all that good.
Yes, the SGX functional clock (SGX_FCLK) can be either the 96MHz CM_96M_FCLK or CORE_CLK divided by [3,4,6] (Figure 1-46 from the doc you cited before). So I got my hopes up. :)
But then I found the max frequency of CORE_CLK, and it's sadly 332MHz (this is from Table 4-17 in the doc provided by benjiro). :(
So - yes, the SGX functional clock is settable, but according to the specs, even if we manage to get the max CORE_CLK and set the minimal divider, we still only get 332MHz/3 = 110.6(6)MHz.
Of course this does not mean we cannot overclock it... :wink:
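
For the record, here's a trivial sketch of the divider math above (illustrative only - it doesn't touch any real PRCM registers, it just recomputes the options from the TRM figures):

[code]
/* Recomputes the SGX_FCLK options discussed above: the fixed 96MHz source
 * and CORE_CLK (332MHz spec max) divided by 3, 4 or 6. Sketch only. */
#include <stdio.h>

int main(void)
{
    const double cm_96m_fclk  = 96.0;    /* MHz, fixed functional-clock option     */
    const double core_clk_max = 332.0;   /* MHz, spec maximum for CORE_CLK         */
    const int dividers[] = { 3, 4, 6 };  /* allowed CORE_CLK -> SGX_FCLK dividers  */

    printf("CM_96M_FCLK option: %.2f MHz\n", cm_96m_fclk);
    for (int i = 0; i < 3; i++)
        printf("CORE_CLK/%d option: %.2f MHz\n",
               dividers[i], core_clk_max / dividers[i]);
    return 0;
}
[/code]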
 
maciek_urbanski said:
Of course this does not mean we cannot overclock it... :wink:

Well, it's rather safe to say that the SGX is capable of 166MHz without using the term overclocking. Also, another little detail I noticed: some docs refer to the SGX quoting 10M polygons/s, others 14M, but few give any clock reference for it. If you assume the 10M figure is at 110MHz and recalculate for a 166MHz core, it's almost 15M (14.9xx). That might explain the discrepancy between reports based on this SGX core.

From the look of it, they downclocked it to 110MHz for extra power saving. The same as the Cortex-A8, running at 500/600MHz, while most accounts seem to confirm that 900MHz is no sweat at all.
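
(Just to show where the ~15M comes from - a throwaway check that assumes polygon throughput scales linearly with the SGX clock, which is only a rough model:)

[code]
/* 10M polygons/s quoted at 110MHz, rescaled to a 166MHz core. */
#include <stdio.h>

int main(void)
{
    const double base_rate = 10e6;                      /* polygons/s at 110 MHz    */
    const double scaled    = base_rate * 166.0 / 110.0; /* assumes linear scaling   */
    printf("%.2fM polygons/s at 166 MHz\n", scaled / 1e6); /* ~15.1M, same ballpark */
    return 0;
}
[/code]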

Also, I found this in an old thread from 2003 regarding the DC:

Topic @ http://forum.beyond3d.com/showthread.php?t=6199&page=4

"How Many Polygons Can the Dreamcast Render?

Let's help clear up some of the confusion that centers around the Dreamcast's polygonal rate. When SEGA first introduced the Dreamcast back in November 1998, they indicated that the machine could do 3 million polygons per second, which is a sustainable rate that could be gotten through software running on the machine at that time.

I shall direct your attention to this article at IEEE Micro, of which these quotes come from:

The CPU was clearly an important part of the Dreamcast specification, and selection of the device was a lengthy and carefully considered process. Factors considered included performance, cost, power requirements, and delivery schedule. There wasn't an off-the-shelf processor that could meet all requirements, but Hitachi's SH-4 processor, which was still in development, could adapt to deliver the 3D geometry calculation performance necessary. The final form has an internal floating-point unit of 1.4 Gflops, which can calculate the geometry and lighting of more than 10 million polygons per second. Among the features of the SH-4 CPU is the store queue mechanism that helps send polygon data to the rendering engine at close to maximum bus bandwidth.1 The final device is implemented using a 0.25-micron, five-layer-metal process.
The system ASIC combines a PowerVR rendering core with a system bus controller, implemented using a 0.25-micron, five-layer-metal process. Imagination Technologies (formerly VideoLogic) provided the core logical design and Sega supplied the system bus. NEC provided the ASIC design technologies and chip layout, including qualification for 100-MHz operation. Fill rates are a maximum of 3.2 Gpixels per second for scenes comprising purely opaque polygons, falling to 100 million pixels per second when transparent polygons are used at the maximum hardware sort depth of 60. Overall rendering engine throughput is 7 million polygons per second, but in Dreamcast, geometry data storage becomes the limiting factor before pixel engine throughput.

You're only as fast as your slowest component, so the DC is rated at 7 million polygons per second maximum sustainable rate, and in a game situation, would most likely be rated around 5 to 6 million polygons per second depending on how good a top developer would be at squeezing performance out of the system. I consider a rate lower than 7 mpps, simply because other game code has an effect on the polygon rate. The more complex the game AI is, the lower the polygon rate that the machine can achieve.
Note, the above quote contains some information, which could be easily misunderstood, as the above article states:

Fill rates are a maximum of 3.2 Gpixels per second for scenes comprising purely opaque polygons, falling to 100 million pixels per second when transparent polygons are used at the maximum hardware sort depth of 60.
No 3D game today even comes close to having an opaque overdraw of 60 times! It's more like 2 to 3 times of overdraw, so the comparative pixel rate would be 100 million to 300 million pixels per second maximum. I indicate comparative, as that means how an "infinite plane" architecture would be compared to a traditional architecture that renders every polygon in a scene.
Here is a very interesting comment:

Overall rendering engine throughput is 7 million polygons per second, but in Dreamcast, geometry data storage becomes the limiting factor before pixel engine throughput.
Let's see if the Dreamcast can render more polygons than it can store; I will use 6 mpps as an example:
6,000,000 (polygons) / 60 (frames per second) = 100,000 polygons per scene
100,000 x 40 Bytes (size of polygon) = ~4 MB
Since the Dreamcast only has 8 MB of video memory, that is a lot of memory!
8 MB - 1.2 MB (640x480x16-bits double buffered frame buffer) - 4 MB (polygon data) = 2.8 MB
Only 2.8 MB left for textures, and even with VQ compression that is not very much. At 3 mpps per second, there is 5.8 MB available for textures, and that is much better. Just shows you, that there is not much point in creating a game engine on the DC that does more than 3 million polygons per second. Anyway 90 percent of the developers out there cannot even get over a million polygons per second on the Dreamcast."
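
(Re-running the quote's memory-budget arithmetic just to make the exact numbers visible - the polygon size and buffer sizes are taken straight from the quote, whose own rounding is a bit loose:)

[code]
/* Dreamcast VRAM budget at 6M polygons/s and 60 fps, per the quoted post. */
#include <stdio.h>

int main(void)
{
    const double vram        = 8.0 * 1024 * 1024;   /* 8 MB video RAM             */
    const double frame_buf   = 640.0 * 480 * 2 * 2; /* 16-bit, double-buffered    */
    const double polys_frame = 6e6 / 60.0;          /* 100,000 polygons per frame */
    const double poly_bytes  = 40.0;                /* size per polygon           */

    double geometry = polys_frame * poly_bytes;     /* ~3.8 MB                    */
    double textures = vram - frame_buf - geometry;  /* quote rounds this to 2.8 MB */

    printf("frame buffer : %.2f MB\n", frame_buf / (1024 * 1024));
    printf("geometry     : %.2f MB\n", geometry  / (1024 * 1024));
    printf("textures left: %.2f MB\n", textures  / (1024 * 1024));
    return 0;
}
[/code]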

The last part seems to confirm what drkIIRaziel was talking about.

Update:
maciek_urbanski, the logicpd site is back online: http://www.logicpd.com/products/som/ti/omap35x
 
[a bit more offtopic]
Actually the cost of a typical (textured with 16-bit coords, colored) triangle is around 28 bytes (7 vtx per strip * 20 bytes per vtx / 5 triangles the strip contains), so vram is not that much of a problem. Games use small textures anyway ;p. The DC can be set up to only store half a frame buffer (in interlaced mode), or only a 640x32 buffer in strip buffer mode (tiles are rendered as they are displayed, but this mode is suboptimal for the TA so it wasn't really used...). The 1.4 Gflops figure is wrong -- the CPU can calculate that, but the caches can't keep up to feed it.

As far as I know the CLX2 could sort an unlimited number of layers; it's just that the rendering time was polygon count x layer count. I have seen no evidence that there is a limit to the layer count, and when using layer peeling the size requirements are independent of the layer count.
[/offtopic]

I wonder when the opengl|es driver will be released; the latest IDA had some very nice improvements in the ARM module :p. Does the rest of the Pandora system have open source drivers / documented hardware? If so, hacking an emulator together should be pretty simple (by re-using some ARM core) and could really be helpful when trying to guess how the SGX works....
 
@drkIIRaziel The WiFi/Bluetooth chip is closed... but that shouldn't matter for an emulator afaik (for porting other OSes it would, though...)

also, I think there *may* be some other closed HW, but that's not really confirmed... since they want to deter knock-offs
 
Something about the tiled architecture has been bugging me, and it still eludes me...

But first - some introduction.
I'm a hobby researcher in the computational geometry 'space'. I read Seth Teller's work when it came out, grasped the basics of Plücker space, wrote tetrahedralizers (man, I hate Steiner points) and finally fell in love, married, and made hundreds of kids with voxels. I know my stuff. :)

In my opinion, one of the most elegant innovations in graphics that is nearing its 'prime time' is (displaced) subdivision surfaces. They essentially offer:
  • fantastic geometry compression
  • easy dynamic level of detail
  • bone-system performance increased hundreds of times
  • a GPU-only implementation
  • computational complexity shifted to resource-compilation time
...and to get all those fantastic qualities the GPU has to be able to do one thing: create geometry. It can do so via instancing (DX9, OGL ext.), a geometry shader (DX10, OGL ext.), or in the future via hull & evaluation shaders (DX11).
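
The core trick, stripped of everything else, is just 'evaluate a smooth limit surface from a coarse mesh, then push each generated vertex out along its normal by a value sampled from a displacement map'. A minimal sketch (names and types are mine, not any real API):

[code]
typedef struct { float x, y, z; } vec3;

/* One displaced-subdivision vertex: limit-surface position evaluated from the
 * coarse control mesh, offset along its unit normal by a scalar displacement. */
static vec3 displace(vec3 limit_pos, vec3 unit_normal, float d)
{
    vec3 p = {
        limit_pos.x + unit_normal.x * d,
        limit_pos.y + unit_normal.y * d,
        limit_pos.z + unit_normal.z * d,
    };
    return p;
}
[/code]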

My idea was to implement displaced subdivision on the Pandora's SGX (once the close-to-the-metal API is complete-ish or the required OGL ES extension is implemented).

So - back to the SGX.

Some docs on the web claim that the SGX family is DX10.1 compatible (that means geometry shaders), others that the SGX is 'only' a DX9L part (and that means instancing). But regardless of its real capabilities, it should be able to create geometry.

But in a tiling 3D pipeline, geometry creation must occur (and be finished) before triangles are associated with buckets. I'm assuming that the tile lists contain references to the triangles themselves, and if that is so, the triangles must exist somewhere in memory (regardless of whether they were received from the application or generated by the SGX). So it seems that in order to generate any geometry, the SGX has to push it down the memory subsystem and then read it back for tiling... That seems very bandwidth-inefficient, especially if the bus connecting the SGX to L4 is only 32 bits wide...

Or am I missing something?
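
To make the concern concrete, this is how I picture the binning step (pure guesswork on my part, not from any ImgTec doc): every post-transform triangle lands in a scene buffer in memory, and each screen tile keeps a list of indices into that buffer.

[code]
#include <stdint.h>
#include <stdlib.h>

typedef struct { float x, y; } vec2;                     /* screen-space position */
typedef struct { vec2 v[3]; /* + varyings... */ } xformed_tri;

typedef struct {
    uint32_t *tri_index;   /* references into the scene buffer, not copies */
    uint32_t  count, cap;
} tile_list;

/* Append a triangle reference to every tile its bounding box overlaps.
 * Whether the triangle came from the app or was generated on-chip makes no
 * difference here - it must already sit in memory to be referenced. */
static void bin_triangle(tile_list *tiles, int tiles_x, int tiles_y,
                         int tile_size, const xformed_tri *t, uint32_t tri_idx)
{
    float minx = t->v[0].x, maxx = minx, miny = t->v[0].y, maxy = miny;
    for (int i = 1; i < 3; i++) {
        if (t->v[i].x < minx) minx = t->v[i].x;
        if (t->v[i].x > maxx) maxx = t->v[i].x;
        if (t->v[i].y < miny) miny = t->v[i].y;
        if (t->v[i].y > maxy) maxy = t->v[i].y;
    }
    int tx0 = (int)(minx / tile_size), tx1 = (int)(maxx / tile_size);
    int ty0 = (int)(miny / tile_size), ty1 = (int)(maxy / tile_size);
    if (tx0 < 0) tx0 = 0;
    if (ty0 < 0) ty0 = 0;

    for (int ty = ty0; ty <= ty1 && ty < tiles_y; ty++)
        for (int tx = tx0; tx <= tx1 && tx < tiles_x; tx++) {
            tile_list *tl = &tiles[ty * tiles_x + tx];
            if (tl->count == tl->cap) {          /* grow the per-tile list */
                tl->cap = tl->cap ? tl->cap * 2 : 16;
                tl->tri_index = realloc(tl->tri_index, tl->cap * sizeof *tl->tri_index);
            }
            tl->tri_index[tl->count++] = tri_idx;
        }
}
[/code]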
 
maciek_urbanski said:
So it seems that in order to generate any geometry, the SGX has to push it down the memory subsystem and then read it back for tiling... That seems very bandwidth-inefficient, especially if the bus connecting the SGX to L4 is only 32 bits wide...

Or am I missing something?

If this were true... wouldn't that be an incredibly foolish function to implement? This seems to only hinder the performance of rendering.

To ask the question in a different way: What would be a reason to create such a bottleneck? Is CAD/Scientific simulation much different than rendering for video games?

(trying to follow as best I can)
 
Phawx said:
If this were true... wouldn't that be an incredibly foolish function to implement? This seems to only hinder the performance of rendering.
No - it wouldn't be foolish - because in a tiling architecture there is no other way; at least I don't see any.
But it must be implemented (one way or another) for the hardware to be DX9 compatible (to pass the Microsoft certification tests).

Phawx said:
To ask the question in a different way: What would be a reason to create such a bottleneck?
The reason is that it's a side-effect of the tiling architecture. The problem is that all scene geometry must be created prior to tiling. So if we want to use instancing, for example (multiple vertex shader runs on the same mesh data - it creates multiple instances of the same mesh - think 'multiple blades of grass'), all vertex data must be stored somewhere before the triangle lists for all tiles are created. And because instancing can generate a lot of data, the only place to store it is DRAM.
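
A quick back-of-the-envelope example of why this data cannot stay on-chip (all sizes below are my guesses, not SGX figures):

[code]
#include <stdio.h>

int main(void)
{
    const long instances      = 10000;  /* blades of grass                        */
    const long verts_per_mesh = 12;     /* tiny blade mesh                        */
    const long bytes_per_vert = 32;     /* post-vertex-shader position + varyings */

    long total = instances * verts_per_mesh * bytes_per_vert;
    printf("~%.1f MB of transformed vertex data per frame\n", total / 1e6);
    return 0;
}
[/code]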

Phawx said:
Is CAD/Scientific simulation much different than rendering for video games?
Less and less so. In fact, most of the applications creating lots of geometry on the GPU are games. An example - the first Far Cry (after the official patch) generated trees and foliage using instancing. A more up-to-date example would be Gears of War 2 for Xbox 360 - the hordes of Locust are created using the same technology.
 
PowerVR needs to store screen-space transformed vtx's to RAM (well, the CLX2 did... and probably the SGX does it that way too). So generating geometry on the CPU and sending it is gonna be just as fast (in fact, on-GPU generation may be slower due to USSE limitations).

The geometry b/w is *much* smaller than the bandwidth a frame buffer needs, and PowerVR doesn't need to hit external memory for the frame buffer, so it saves b/w there.
It is possible to do tile-based rendering without having to store the entire scene, but then you can't do hidden surface removal (Intel plans to use this on the Larrabee GPU).

PowerVR manages well with combinations of low geometry, lots of overdraw and slow RAM ;)
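
Rough numbers to show the trade-off (my own ballpark figures, not measurements): an immediate-mode renderer re-touches the color/Z buffers for every overdrawn pixel, while a TBDR streams the binned geometry out and back instead.

[code]
#include <stdio.h>

int main(void)
{
    const double px = 800.0 * 480.0, fps = 60.0, overdraw = 3.0;
    const double bpp_color = 2.0, bpp_z = 2.0;            /* 16-bit color and Z */

    /* immediate-mode: color+Z read/write per overdrawn pixel (very rough) */
    double fb_traffic = px * fps * overdraw * (bpp_color + bpp_z) * 2.0;

    /* TBDR scene (parameter) buffer: written once, read back once per frame */
    const double tris_per_frame = 20000.0, bytes_per_tri = 28.0; /* 28B from the DC post */
    double geo_traffic = tris_per_frame * fps * bytes_per_tri * 2.0;

    printf("frame buffer traffic ~ %.0f MB/s\n", fb_traffic  / 1e6);
    printf("geometry traffic     ~ %.0f MB/s\n", geo_traffic / 1e6);
    return 0;
}
[/code]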
 
Geometry instancing and displaced subdivision surfaces seem to be ways to minimize RAM usage, but all this stuff that needs to be drawn still requires cycles from the SGX, right?

For geometry instancing, you would only have one blade of grass in RAM but draw it multiple times... right? And displaced subdivision surfaces make use of a carefully made low-poly surface that's still faithful to its higher-poly original through the use of scalers?

Sorry to trouble you with silly questions; I am very much interested in following along. I was just trying to wrap my head around:

1.) Whether I understand the functions correctly
2.) What efficiencies these techniques offer when applied

Both of these technologies seem to me like a way to store less in RAM and draw N times the number of objects you want on screen. Your only limiting factor seems to be the rendering processor.

Thank you guys for taking the time.
 
Phawx said:
Geometry instancing and Displaced subdivision surfaces seem to be ways to minimize ram usage..
actually host-to-GPU bandwidth.

..but all this stuff that needs to be drawn still requires cycles from the SGX, right?
right. the GPU (any, not just SGX) will still spend time drawing the instances/displaced surface.

For geometry instancing, you would only have one blade of grass in ram but draw it multiple times...right?
you can do that even without instancing, but you'd need to re-send that same blade of grass to the GPU multiple times. with instancing, the host sends the blade of grass once, and then sends only the transformations for the instances.
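
a quick sketch of that difference, assuming a GL ES 2.0 context with the blade mesh already in a VBO and a program bound ('mvp_loc' and friends are illustrative, not any real API):

[code]
#include <GLES2/gl2.h>

/* Without instancing: one per-instance transform upload + one draw call per
 * blade. With an instancing extension (e.g. something like EXT_draw_instanced,
 * if the driver exposed it), this whole loop would collapse into a single
 * instanced draw call, and the host would send only the transform array. */
void draw_grass_without_instancing(GLint mvp_loc,
                                   const GLfloat *transforms, /* n 4x4 matrices */
                                   GLsizei n, GLsizei verts_per_blade)
{
    for (GLsizei i = 0; i < n; i++) {
        glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, &transforms[i * 16]);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, verts_per_blade);
    }
}
[/code]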

And displaced subdivision surfaces make use of a carefully made low-poly surface that's still faithful to its higher-poly original through the use of scalers?
not sure what you mean by scalers above. generally, it's like the current bumpmapping, but instead of improving just the shading of the surface, as bumpmapping does, displacement mapping actually achieves higher topological definition from a lower physical surface; displacement mapping actually modifies the geometry, both improving the shading and the spatial definition of the surface (think silhouettes and intersections).

ed: actually, the wiki article on the subject gives a fair account, including touching on the origins of displacement mapping.
 
blu said:
Phawx said:
For geometry instancing, you would only have one blade of grass in RAM but draw it multiple times... right?
you can do that even without instancing, but you'd need to re-send that same blade of grass to the GPU multiple times. with instancing, the host sends the blade of grass once, and then sends only the transformations for the instances.

Apparently PowerVR have a patent on something related to compressing the tiles (basically performing a partial Z elimination) before writing them out. Assuming they have fairly efficient local buffers, I assume that they compress the buffers before they have to spill them, so that odds are the many blades of grass are rapidly eliminated before having to be written back to local RAM.

In the bits describing Mali, they commented that this is a problem with TBDR and that they (and only they) could solve it, but I imagine that PowerVR have some neat ways of minimising the cost of this.
 
andys said:
Apparently PowerVR have a patent on something related to compressing the tiles (basically performing a partial Z elimination) before writing them out. Assuming they have fairly efficient local buffers, I assume that they compress the buffers before they have to spill them, so that odds are the many blades of grass are rapidly eliminated before having to be written back to local RAM.
That's a great point, andys - let's analyze the patents issued to Imagination Technologies. :)
The patent you're referring to is here (link). To see the PDF version, free registration is required.

andys said:
In the bits describing Mali, they commented that this is a problem with TBDR and that they (and only they) could solve it, but I imagine that PowerVR have some neat ways of minimising the cost of this.
And in fact they did (sorta).
After (briefly) analyzing the patent cited above, it seems that the SGX has a fair amount of internal storage. If this internal storage gets depleted during generation of the tile lists, one of the partial tile lists gets pre-rasterized. The output of this phase is a shorter list (they can rasterize only the opaque triangles while the list is incomplete) plus the color and stencil buffers of the pre-rasterized tile. Then some buffers get compressed (1D differential coding on spans belonging to the z-buffer) and stored in (main?) memory. This frees up some of the internal storage.
In this context it's possible to design the chip in such a way that generated geometry is written to local storage, and when local storage is depleted, a partial rasterization is issued. I sure hope that's the case... because if it is, I can implement displaced subdivision surfaces. :wink:
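
My reading of the '1D differential coding' bit, purely as a guess at the idea (not the actual hardware scheme): depth along a span of a planar triangle changes by a near-constant step, so a start value plus small per-pixel deltas compresses well. Something like:

[code]
#include <stdint.h>
#include <stddef.h>

/* Encode a span of 32-bit depth values as first value + per-pixel deltas;
 * for planar spans the deltas are tiny and highly compressible. */
static void encode_z_span(const uint32_t *z, size_t n,
                          uint32_t *first, int32_t *delta /* n-1 entries */)
{
    *first = z[0];
    for (size_t i = 1; i < n; i++)
        delta[i - 1] = (int32_t)(z[i] - z[i - 1]);
}
[/code]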
 
maciek_urbanski said:
That's a great point, andys - let's analyze the patents issued to Imagination Technologies. :)
The patent you're referring to is here (link). To see the PDF version, free registration is required.

I found it somewhere on beyond3d.com - they have some interesting forums.

Not much about TBDR, but there is a little, and there are a couple of PowerVR people chatting on that board.

They do some pretty cool stuff, I reckon it would be an amazing company to work for.
 
blu said:
Phawx said:
And displaced subdivision surfaces make use of a carefully made low-poly surface that's still faithful to its higher-poly original through the use of scalers?
not sure what you mean by scalers above. generally, it's like the current bumpmapping, but instead of improving just the shading of the surface, as bumpmapping does, displacement mapping actually achieves higher topological definition from a lower physical surface; displacement mapping actually modifies the geometry, both improving the shading and the spatial definition of the surface (think silhouettes and intersections).

Ah, this makes perfect sense now. I was reading an MS article on DSS and the literature was very terse. After what you said, I included 'bump mapping' in my search alongside DSS and got a perfect understanding of what you were saying.

I think Halo 1 for the original Xbox was the first game I actually recognized bump mapping.

Thank you.
 
maciek_urbanski said:
Some docs on the web claim that the SGX family is DX10.1 compatible (that means geometry shaders), others that the SGX is 'only' a DX9L part (and that means instancing). But regardless of its real capabilities, it should be able to create geometry.

I might be able to shed some light on this....

There are 2 drivers for the SGX: one by ImgTec and one by TG (Tungsten Graphics) - both are closed source.

The ImgTec drivers are the ones used by TI. When Intel chose the SGX for the new Atom, they weren't happy with the ImgTec driver (for whatever reason) and contacted TG to write a completely new driver.

So, given that TG write Gallium3D, of course they based their driver on Gallium3D. The advantage of basing it off Gallium3D is that the architecture splits the driver up into separate parts. The interesting part is the state tracker, which implements the graphics API on top of a hardware-independent interface to the driver. You implement the driver once and you automatically get support for lots of different state trackers/APIs. It also means that to support a new API (like GL3), you only need to implement a new state tracker. TG has various state trackers: OpenGL, OpenGL ES & Direct3D 9.

So, my guess is that the hardware _may_ be capable of Direct3D 10; it's just that TG haven't implemented a D3D10 state tracker yet.
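
To illustrate the split (not real Gallium code - all identifiers below are placeholders, it just shows the shape of the layering):

[code]
struct tgsi_program;     /* hardware-independent shader IR produced by the state tracker */
struct sgx_cmd_buffer;   /* what a kernel DRM module would eventually submit             */

/* The part written once per GPU: turn the hardware-independent form into
 * hardware commands. Every state tracker (GL, GL ES, D3D9, ...) talks only
 * to an interface like this, so adding a new API means adding a new state
 * tracker, not touching the hardware driver. */
struct pipe_driver_ops {
    struct sgx_cmd_buffer *(*compile_and_emit)(const struct tgsi_program *p);
    void (*submit)(struct sgx_cmd_buffer *cb);   /* hands off to the DRM/kernel side */
};
[/code]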


I guess any open source driver is going to be based on Gallium. So, what's needed is:

1) DRM/Kernel module for synchronising command buffer execution
2) Gallium user-space Driver (Which turns TGSI into SGX instructions)

I guess what you're talking about is implementing 1) for the SGX on the OMAP. But, reading what you said about the source you posted half a dozen pages back from the Windows package, most of the DRM part seems to be available already? It "just" needs to be adapted to the way the SGX is connected on the OMAP3 (which is radically different from the Atom). :)

I also recommend keeping up with the TTM/GEM/mm discussions on the dri-devel mailing list.
 
TomCooksey said:
1) DRM/Kernel module for synchronising command buffer execution
2) Gallium user-space Driver (Which turns TGSI into SGX instructions)

I guess what you're talking about is implementing 1) for the SGX on the OMAP. But, reading what you said about the source you posted half a dozen pages back from the Windows package, most of the DRM part seems to be available already?
I'm thinking primarily of describing how to initialize the SGX, prepare data (textures, state, shaders) in memory, and issue execution commands. This means deciphering the MMIO registers, understanding the data/control flow inside the SGX, and deciphering the data structures, formats, the ISA of the shader engines, and the like.
In the process of doing so I will write lots of test applications that (after a while) will program the SGX directly. At the beginning the tests will perform trivial functions (initialization, screen clear, simple shaders, maybe capture & replay), but they will use some shared code base. When I wrote about a paper-thin API, I was thinking about this code base.
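
The sort of throwaway test harness I have in mind, as a sketch: map the SGX register window through /dev/mem and peek at a register (SGX_PHYS_BASE, the window size and the register offset are placeholders to be filled in from the OMAP3 TRM, not verified values):

[code]
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define SGX_PHYS_BASE 0x50000000u   /* placeholder - check against the OMAP3 TRM */
#define SGX_MAP_SIZE  0x4000u       /* placeholder window size                   */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, SGX_MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, SGX_PHYS_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* offset 0 stands in for whatever ID/revision register turns up first;
     * the SGX clocks also need to be enabled for reads not to hang */
    printf("reg[0] = 0x%08x\n", regs[0]);

    munmap((void *)regs, SGX_MAP_SIZE);
    close(fd);
    return 0;
}
[/code]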

Sadly, inside the module you're referring to I found only functions for mapping physical pages into the SGX MMU and code for memory mapping. I have a feeling that the user-space module maps the MMIO registers via the kernel module and programs them directly... But I think I may be missing something, because I don't know much about DRI internals.
So I'm shamelessly asking if you could give me/us some introductory tutorial. :wink:

TomCooksey said:
I also recommend keeping up with the TTM/GEM/mm discussions on the dri-devel mailing list.
That's a great idea. :)
 
I found some reference documents about DRI on Wikipedia here.

A small and very basic introduction to DRI by Adam Jackson is here. Here are the DRI2 design pages.

I think doing your implementation for DRI2 + DRM (Direct Rendering Manager) instead of your own user-space driver or kernel module could benefit everyone. It could be used more easily for a Gallium3D driver, and a standard from the Linux world would be used instead of a new custom API.

What do you think? I have too little basic knowledge about this, so sorry if I'm wrong or wrote some weird nonsense.


The following text is quoted from Wikipedia; I put it here so other people can quickly see what DRI and DRM are.

The Direct Rendering Manager (DRM) is a component of the Direct Rendering Infrastructure, a system to provide efficient video acceleration (especially 3D rendering) on Unix-like operating systems, e.g. Linux, FreeBSD, NetBSD, and OpenBSD.

In computing, the Direct Rendering Infrastructure (DRI) is an interface and a free software implementation used in the X Window System to securely allow user applications to access the video hardware without requiring data to be passed (slowly) through the X server. Its primary application is to provide hardware acceleration of the Mesa implementation of OpenGL. It has also been adapted to provide OpenGL acceleration on a framebuffer console without an X Server running.
 
timofonic said:
I found some reference documents about DRI on Wikipedia here.

A small and very basic introduction to DRI by Adam Jackson is here. Here are the DRI2 design pages.
Thanks for the links. I'm back at work, so Pandora-related work gets done mostly on weekends. But I'm going to start familiarizing myself with DRI. :)

timofonic said:
I think doing your implementation for DRI2 + DRM (Direct Rendering Manager) instead of your own user-space driver or kernel module could benefit everyone. It could be used more easily for a Gallium3D driver, and a standard from the Linux world would be used instead of a new custom API.
Yes, I guess you're right. But I have to write tests in order to decipher the SGX, so some (even if only internal) API will be created.
But most importantly, the real output of this 'deciphering' would be documentation of how the SGX works and how to program it (the API would only be an 'extra').
Based on this work, guys that 'live & breathe' DRI can write an SGX driver. I'm sure that they will be better at this. I don't rule out participating in developing a DRI driver, but I don't feel a strong urge to do so, either.
Besides, I want to write some demos/games for the Pandora showing off the capabilities of the SGX. This would be my next step after/during deciphering the SGX. :)

timofonic said:
What do you think? I have too little basic knowledge about this, so sorry if I'm wrong or wrote some weird nonsense.
I think you're right that somebody should write an SGX driver for a more 'standard' API. And no - you did not write nonsense. :)
 