Direct (close-to-the-metal) open-source SGX driver


I have some new ideas/findings.
  • In the OMAP35x 2D/3D Graphics Accelerator Reference Guide (TRM Ch. 13, Rev. B), section "1.1.2 SGX 3D features", we can read that SGX supports multiple on-chip render targets (MRT). Note: performance is limited when the on-chip memory is not available.
    ...it remains to be seen if the SGX in OMAP3530 will have this on-chip memory and, if not, what the impact on MRT will be.
  • US patent #7324115 called Display list compression for a tiled 3-D rendering system was filed at the beginning of 2004, so it seems to describe the innards of SGX quite accurately. It's a wealth of knowledge:
    • lists are self-sufficient, they do not need to back-reference vertices
    • lists can be compressed using both lossless encoding (for indices) and lossy quantization + delta encoding (for 3D coordinates, normals, colors); compression can be quite heavy (color values in the <0;1> range can be quantized to the discrete 0..255 set - and remember we are talking about color output from the vertex shader); compression can be tuned and even switched off (per attribute) - see the sketch after this list
    • ...all of this means that during geometry processing (think vertex shader, geometry shader and clipping) followed by binning, the geometry is created and stored in the list itself (and it does not reference the input data). This means that during geometry processing we can create new data, which in turn means SGX should be able to do instancing & geometry shaders. :wink:
  • US patent #7362328 called Interface and method of interfacing between a parametric modelling unit and a polygon based rendering system was filed at the end of 2006, so the algorithm described there probably did not make it into SGX530. Anyway, it's a non-programmable HW subdivision unit using B-spline patches (crude & simple). From an algorithmic standpoint it is inferior to the hull & domain shaders (from DX11), and because it is not programmable it will not comply with DX11. But again - I found no explicit statement that it's a HW solution (just an awful lot of hints).
  • other US patents (like #7370158 and #7428628) seem to suggest that newer versions of SGX will have true 128-bit SIMD (read as 4 x 32-bit float). Those patents describe how to hide latencies when the memory interface is 32-bit and the internal bus is 128-bit. They also patented a technique for handling (nested) conditionals (if-else) in SIMD. So it seems that SGX531 (see here(link)) will have 4 shader units, each being 128-bit SIMD. And I wish them well with the drivers. SIMD compilers are a different league than the stuff they did in the past.
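
To make the compression point a bit more concrete, here's a tiny sketch of the general idea (quantize a <0;1> float color to 8 bits, delta-encode a coordinate stream). The formats below are made up for illustration - I'm not claiming this is SGX's actual list layout:

Code:
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of lossy list compression: quantize a
 * vertex-shader color output from float [0,1] to 8 bits, and store a run of
 * fixed-point coordinates as deltas so small values pack well.
 * These are NOT confirmed SGX formats - just a sketch of the idea. */

static uint8_t quantize_unorm8(float c)
{
    if (c < 0.0f) c = 0.0f;                 /* clamp to [0,1]             */
    if (c > 1.0f) c = 1.0f;
    return (uint8_t)(c * 255.0f + 0.5f);    /* lossy: 256 discrete levels */
}

/* Delta-encode 16-bit fixed-point coordinates in place: the first value stays
 * absolute, every following value becomes the difference from its predecessor. */
static void delta_encode(int16_t *x, size_t n)
{
    for (size_t i = n; i-- > 1; )
        x[i] = (int16_t)(x[i] - x[i - 1]);
}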

So - sorry for all the typos, I'll clean this up tomorrow. I have a party to attend. ^_^
 
Very interesting as usual.

Kind of strange that they'd start doing real vector processors after doing scalar when the industry seems to have moved in the opposite direction (although, VGP was 128bit SIMD, but I don't think ImgTech did that anyway).

Doesn't seem like it should be a big deal for compilation though, at least for the usual kinds of graphical things that do naturally involve vectors. That is, I wouldn't expect vectorization to be that necessary.
 
Exophase said:
Kind of strange that they'd start doing real vector processors after doing scalar when the industry seems to have moved in the opposite direction
i'm not so sure about that; a simd unit will always be cheaper than the same number of independent ALUs (possibly doing mimd). if your dataflow is vector-ish in nature, simd will always be a viable option. of course, you may want to make your simd units more flexible re operand feeds (the simd 'scatter-gather' discussion we already had in this thread) and improve register-file access (higher efficiency through eliminating swizzling where possible). btw, psp's VFPU offers an interesting take on the latter.

(although, VGP was 128bit SIMD, but I don't think ImgTech did that anyway).
do you know something here that we don't, or is that just a guess (the imgtech part)?

Doesn't seem like it should be a big deal for compilation though, at least for the usual kinds of graphical things that do naturally involve vectors. That is, I wouldn't expect vectorization to be that necessary.
vectorization may not necessarily be a burden solely on the compiler - a context scheduler would have a lot to do with that too (at runtime, that is).

ps: on a tangential sidenote, intel's GEM got upstreamed to the current linux kernel the other day. yay.
 
blu said:
i'm not so sure about that; a simd unit will always be cheaper than the same number of independent ALUs (possibly doing mimd). if your dataflow is vector-ish in nature, simd will always be a viable option. of course, you may want to make your simd units more flexible re operand feeds (the simd 'scatter-gather' discussion we already had in this thread) and improve register-file access (higher efficiency through eliminating swizzling where possible). btw, psp's VFPU offers an interesting take on the latter.

Of course an N-way SIMD unit will be less expensive than N independent units. It'll also be much less flexible and the overall throughput will be less.. is this really a fair comparison? nVidia and ATi have both moved to non-vector units (although ATi has multi-issue superscalar ones). It's just strange that the trend is going in the opposite direction, but I guess where power consumption is a big factor it makes sense for other reasons too.

PSP isn't the only SIMD unit that has facilities for reordering, although I haven't seen anything that's as general/extensive as it is. NEON can write back in non-contiguous ways too (although to memory, not register file).

blu said:
do you know something here that we don't, or is that just a guess (the imgtech part)?

I'm not making it up, but I don't remember where I got the exact information. Does it really matter though?
 
Exophase said:
Of course an N-way SIMD unit will be less expensive than N independent units. It'll also be much less flexible and the overall throughput will be less..
they surely will be less flexible, but the throughput bit is not clear, as it does not follow from the flexibility part. by simd's virtue of being simpler, you could either:
a) use those units at 'casual' clocks aiming for a power consumption advantage (as you mention later in your post), or
b) push those units to higher clocks (where a similar number of independent units could not go), in which case the matter of throughputs becomes one of statistics, and ultimately, a function of the nature of the workload at hand.

speaking of which, being simple and, at the same time, fit for the task at hand, is a good thing. and simd just happens to be so in the world of vector crunching.

..is this really a fair comparison? nVidia and ATi have both moved to non-vector units (although ATi has multi-issue superscalar ones). It's just strange that the trend is going in the opposite direction..
to what extent the trend is 'in the opposite direction' is subject to debate. allow me to bring to your attention the following original b3d analysis piece on nvidia's g80, where on page 8 one can read the following bit:

b3d article said:
"Inwardly, each 16 SP [ed: scalar processor, or shading processor - your pick] cluster is further organised in two pairs of 8 (let's call that 8x2) and the scheduler will effectively run the same instruction on each half cluster across a number of cycles, depending on thread type. [ed: emphasis by me]"
now, the above is surely speculative, but it's both educated speculation (the b3d folk really take pride in such stuff) and speculation that does not contradict common sense, thus i'm willing to bring it into our otherwise pragmatic discussion.

PSP isn't the only SIMD unit that has facilities for reordering, although I haven't seen anything that's as general/extensive as it is.
neither have i, but then again, i've never seen face-to-face the actual set-in-metal architecture of a contemporary GPU shader unit, which i suspect would share similar levels of register-file addressing advancement, given it existed in the simd domain.

I'm not making it up, but I don't remember where I got the exact information. Does it really matter though?
it clearly does not matter in this thread; i was just really surprised to read that bit, though, thus my asking.
 
Exophase said:
Kind of strange that they'd start doing real vector processors after doing scalar when the industry seems to have moved in the opposite direction
That's not entirely accurate:
  • Nvidia uses 16-way or 32-way SIMD, but they call them 'warps' (see here(link), in 2.2.1. Cooperative Thread Arrays)
  • AMD/ATI uses 5-way VLIW (see here(link), at the bottom of 2.1 The Stream Processor)
  • Intel GMA (probably) uses 8-way SIMD (see here(link)), but other sources seem to indicate that it executes one 128-bit op/clock (see here(link), on slide #7).
  • Intel Larrabee will (probably) use 16-way SIMD (see here(link))
  • As we deciphered in this thread, SGXes (probably except SGX531) are 32-bit (pseudo-SIMD, because they can execute 4 parallel 8-bit ops, but in the floating-point era this should seldom be the case - see the SWAR sketch below)
So - we have all flavors, SIMD being the most dominant one.
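
And for the 'pseudo-SIMD' bit above - here's what 4 parallel 8-bit ops in a single 32-bit ALU can look like (the classic SWAR trick). I'm not claiming this is SGX's actual instruction, it just illustrates the idea:

Code:
#include <stdint.h>

/* SWAR-style add of four unsigned 8-bit lanes packed into one 32-bit word,
 * with carries prevented from crossing lane boundaries.
 * Purely illustrative - not claimed to be SGX's real instruction set. */
static uint32_t add_4x8(uint32_t a, uint32_t b)
{
    uint32_t low7 = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* add low 7 bits per lane    */
    uint32_t msb  = (a ^ b) & 0x80808080u;                  /* fix up the top bit per lane */
    return low7 ^ msb;
}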

Exophase said:
(although, VGP was 128bit SIMD, but I don't think ImgTech did that anyway).
I'm also very interested in where you read such a thing. So please provide a link to documentation, because I've read a lot about MBX and every doc plainly stated that VGP is 128-bit (4 x float) SIMD and that it executes (probably most of) the instructions defined by the DirectX VS spec (for example see here(link)).

Exophase said:
Doesn't seem like it should be a big deal for compilation though, at least for the usual kinds of graphical things that do naturally involve vectors.
Yup, compilation of VS/PS shader code for a 128-bit SIMD machine can be as simple as direct translation of the shader assembly provided by the runtime. But that would be wasteful and inefficient, and nobody really does it that way. You can get up to 4x better performance by packing multiple threads into one SIMD register. But then the task of writing the compiler becomes something entirely different.
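
To illustrate the difference, here's a schematic sketch in plain C (standing in for the SIMD ISA, which we obviously don't know) of the same line of shader code compiled both ways:

Code:
/* Two ways a 4-wide SIMD unit can execute the shader line  out = a * b + c :
 * 1) direct translation - one vec4 per register, one fragment at a time;
 *    a scalar-heavy shader leaves most lanes idle.
 * 2) 'multiple threads per register' (SoA) - lane i holds the same component
 *    of fragment i, so every lane does useful work, but the compiler now has
 *    to schedule four fragments together.
 * Plain C stands in for the unknown SIMD ISA; this is illustrative only. */

typedef struct { float x, y, z, w; } vec4;

/* (1) AoS: one fragment, one component per lane. */
static vec4 madd_aos(vec4 a, vec4 b, vec4 c)
{
    return (vec4){ a.x*b.x + c.x, a.y*b.y + c.y, a.z*b.z + c.z, a.w*b.w + c.w };
}

/* (2) SoA: four fragments at once; each array is one SIMD register holding
 *     the same component of four different fragments. */
static void madd_soa(const float a[4], const float b[4], const float c[4], float out[4])
{
    for (int i = 0; i < 4; ++i)
        out[i] = a[i] * b[i] + c[i];
}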

I don't want to offend anyone, and believe me when I say that I value (almost) every opinion, but if we want to reverse-engineer an unknown architecture, we must reference either some working source code, official docs, or findings based on official docs. Of course docs can be 'not entirely accurate' (some of them are written by marketing departments, after all), but until we prove them wrong - we should assume that they tell the truth.
So - 'word of mouth' or guesswork (at least when it doesn't reference any docs) only introduces more unknowns.

When we get our Pandoras we can possibly prove some docs wrong via experimentation, but until then - let's stick to docs.
 
I found an 'allegedly working' 2D SGX driver for Ubuntu Hardy. :)
Module sources:
FYI: .deb packages can be decompressed using 7-zip (download it here(link)).

If someone wants to help with deciphering the innards of SGX - please drop me a PM.

My initial idea for (OS-independent) development would be (I'm open to discussion):
  • git/msysgit for source-control
  • Doxygen for documentation (with GraphViz extension)
  • MmioTrace for (initial) tracing
  • development code in form of:
    • multiple (hierarchical) headers (registers, masks, defines), heavily commented using Doxygen (see the example header sketch after this list)
    • thin API referencing header (dynamic & static lib)
    • set of replayable tests using the API (to be used with MmioTrace or other tracing tools)
  • release code in form of:
    • single header created directly from development code using an OS-specific preprocessor (I'll do a WScript-based Windows preprocessor)
    • thin API referencing header (dynamic & static lib)
  • documentation created directly from development code
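
To give a feel for what I mean by 'heavily commented headers', here's a sketch - the register name, offset and bits below are placeholders I made up, not real SGX registers:

Code:
/** @file  sgx_regs_example.h
 *  @brief Example of a development header. The register names, offsets and
 *         bit masks are placeholders for illustration, NOT real SGX registers.
 */
#ifndef SGX_REGS_EXAMPLE_H
#define SGX_REGS_EXAMPLE_H

/** Offset of the (hypothetical) event-status register from the SGX base. */
#define SGX_EVENT_STATUS_OFS   0x012Cu

/** @name SGX_EVENT_STATUS bit masks (placeholders)
 *  @{ */
#define SGX_EVENT_TA_FINISHED  (1u << 0)  /**< Tile accelerator finished binning a frame. */
#define SGX_EVENT_3D_FINISHED  (1u << 1)  /**< 3D core finished rendering its tile list.  */
/** @} */

#endif /* SGX_REGS_EXAMPLE_H */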

BTW - Something will be hampering my progress in the near future. It's scary, bloody and impossible to resist. It's called Dead Space. :D
 
Getting basic 2D acceleration (double buffering, alpha blitting) is something I'd like to see, especially since I don't know if the official drivers even support this, and it will make my GUI faster. (OpenGL and OpenVG have some overhead.)
 
maciek_urbanski said:
I found an 'allegedly working' 2D SGX driver for Ubuntu Hardy. :)
Module sources:
FYI: .deb packages can be decompressed using 7-zip (download it here(link)).
Why bother with binary packages when you can get source directly: http://git.moblin.org/cgi-bin/gitweb/gi ... ;a=summary ?

Have fun :)

PS : you can also take a look here : http://git.moblin.org/cgi-bin/gitweb/gi ... ;a=summary
 
Laurent said:
Why bother with binary packages when you can get source directly: http://git.moblin.org/cgi-bin/gitweb/gi ... ;a=summary ?
Those .deb packages contain sources. :D

...but you're right, gitweb is better. And those drivers are a newer version. :D

Thanks Laurent.

PS: The funny part is, I found those repositories just minutes before I read your post...
PPS: The last change in psb-kmd is from Thu, 13 Mar 2008; the version in git is 0.26, but in their repository(link) the version is 0.33... Even funnier - diff shows that 0.26 is newer than 0.33... :p Maybe they're numbering versions in descending order... :lol:
 
behold my psychic powers: i quote a post from the future!

Phawx said:
Also, my last question. Does the SGX have features that compensate for such slow transfer rates?

TBDRs are exemplary bandwidth savers. it stems from their on-chip framebuffer scheme - the tile buffer - which saves them a truckload of bandwidth in any read-modify-write (i.e. framebuffer blending) scenario - TBDRs just don't RMW against the actual render target. additionally, in most scenarios a TBDR does not need to write a depth component out alongside the framebuffer, which saves a lot too. in contrast, on some modern embedded IMR GPUs, where 64bit buses are not uncommon, writing a depth attribute with your fragments can cost you considerable fillrate in light-weight shader scenarios.
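
a quick back-of-the-envelope illustration (the numbers - 800x480, 32bpp, one alpha-blended full-screen layer at 60 fps - are made-up assumptions, not measurements of any actual part):

Code:
#include <stdio.h>

/* Back-of-the-envelope framebuffer traffic for one alpha-blended full-screen
 * layer at 800x480, 32bpp, 60 fps. Illustrative assumptions only. */
int main(void)
{
    const double pixels = 800.0 * 480.0;
    const double bpp    = 4.0;                 /* bytes per pixel */
    const double fps    = 60.0;

    /* IMR blending: read the destination, then write the result,
     * both against external memory. */
    double imr_bytes  = pixels * bpp * 2.0 * fps;
    /* TBDR: blending happens in the on-chip tile buffer; only the final
     * resolved pixels are written out once. */
    double tbdr_bytes = pixels * bpp * 1.0 * fps;

    printf("IMR : %.1f MB/s\n", imr_bytes  / 1e6);   /* ~184.3 MB/s */
    printf("TBDR: %.1f MB/s\n", tbdr_bytes / 1e6);   /* ~92.2 MB/s  */
    return 0;
}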

edited for marginal usefulness.
 
Sorry for being a bit offtopic but...

Does the new 256MB of RAM mean that the PowerVR SGX performance will be improved?
 
timofonic said:
Sorry for being a bit offtopic but...

Does the new 256MB of RAM mean that the PowerVR SGX performance will be improved?

Heuu, no... The 256MB of RAM means that games whose memory limitations were a problem can be played. It also means that there is less work needed (downscaling textures etc.) to shrink/optimize those games.

The default bandwidth, latency, etc. are normally unchanged, unless TI changed something in the process (I don't think so). So far, the information about the 256MB upgrade is limited. Most things, like the extra battery drain, are pure guesswork based on doubling the 128MB version's figures. But, like it was said, nobody knows if TI optimized the memory by going to a smaller (45nm) manufacturing process, or if they made any other changes in the design for the 256MB part... In theory it's possible that they altered a few things, seeing as the 256MB version only recently went into mass production, unlike the 128MB version. But in most cases, no manufacturer is going to risk playing too much with its design in a few months' time.
 
It'd be great, but I don't think so. As the OMAP itself should be capable of addressing even larger amounts of RAM, and the memory is just stacked on top of it, they probably just took a larger part.
 
benjiro said:
The default bandwidth, latency, etc. are normally unchanged, unless TI changed something in the process (I don't think so). So far, the information about the 256MB upgrade is limited. Most things, like the extra battery drain, are pure guesswork based on doubling the 128MB version's figures.
All the information is on the Micron site here(link).
Based on the OMAP3530 specs it seems that the Pandora team switched memory from the 1Gbit (128MB) MT46H32M32LFJG-6(link) to the 2Gbit (256MB) MT46H64M32L2JG-6(link).
Both are x32 LDDR333, so memory bandwidth is still 4*2*166M = 1266.5MB/s.
Sadly, datasheets for both parts are not available (but you can send a request). Looking at similar parts (VFBGA package instead of PoP), power consumption should in fact be greater. Burst read current in the 256MB part is up to 20% greater, but deep power-down current is the same... So it all depends on the usage scenario. It may even conserve power, because the Linux kernel will use more memory for cache, thus limiting re-reads from CF-cards and flash...

And now - let's go back to the topic. :p

Edited: It's official - I can't do math. I've changed the memory sizes to the correct ones. :oops:
 
maciek_urbanski said:
And now - let's go back to the topic. :p

Any news about your RE effort, or nothing until you receive the Pandora or some other OMAP3530-based device?

Have you considered asking someone to donate you a BeagleBoard?
 
timofonic said:
Any news about your RE effort, or nothing until you receive the Pandora or some other OMAP3530-based device?
We're currently in the organizing phase, and 'in the meantime' we're analyzing available code & documents.

timofonic said:
Have you considered asking someone to donate you a BeagleBoard?
No... Frankly, I don't feel comfortable with somebody donating a BB. :oops:
Besides, there's plenty of work to do before getting the Pandora (analyzing, organizing, setting up an OS-independent build environment, etc.).

Anyone interested in joining in - feel free to PM me. There's lots of work, and some is just running lots and lots of tests (that somebody else will write)...
So everybody can participate. :)
 
maciek_urbanski said:
Both are x32 LDDR333, so memory bandwidth is still 4*2*166M = 1266.5MB/s.

Where are you getting 4 and 2? DDR333 is 166 MHz. They are 32 bits wide. 32*166/8 = 664MB/s

If they were running in Dual Channel mode you go to 64 bit wide and get 64*166/8= 1,328MB/s

I looked up the RAM you linked. Is the (2) coming from the prefetch? Which means that the RAM is not running in dual channel mode, but the performance increase is from transferring two data words per clock cycle?

Where are you getting 4 from though? Is that 32 bits / 8 to get 4 bytes?

It's too bad dual channel isn't implemented...though I don't know if you can combine these technologies.

Also, my last question. Does the SGX have features that compensate for such slow transfer rates?
 
TI's OMAP353x Community Forum ( Never noticed it before. Dumb luck using google ;) ): https://community.ti.com/forums/32.aspx

Some links from in the threads:
Another pdf I stumbled upon, 265 pages, have fun (might not be new for most, but better safe than sorry ;) ): http://focus.ti.com/lit/ds/symlink/omap3530.pdf

OMAP35x Home Page (links to various hardware specific OMAP documents) -- http://www.ti.com/omap35x

BeagleBoard.org (low cost OMAP35x based board targeted at the open source community) -- http://beagleboard.org/

OMAP35x EVM from Mistral (documentation and FAQs specific to the EVM board) -- http://www.mistralsolutions.com/busines ... p_3evm.php

OMAP35x EVM Registration Page (register your EVM and access the OMAP35x software updates) -- http://www.ti.com/omapregistration

http://opensource.ti.com and http://linux.omap.com are also valuable resources.
 
Phawx said:
maciek_urbanski said:
Both are x32 LDDR333, so memory bandwidth is still 4*2*166M = 1266.5MB/s.

Where are you getting 4 and 2? DDR333 is 166 MHz. They are 32 bits wide. 32*166/8 = 664MB/s
First, I use MB for megabytes and Mbit for megabits.
DDR memory transfers one bit per data pin on the rising edge and one on the falling edge of the clock, so it transfers 2 bits per pin per clock cycle. The OMAP3530 has a 32-bit external memory interface, and 32 bits are 4 bytes.
So 32 [bits] * 2 [transfers/clock] * 166MHz = 4 [bytes] * 2 [transfers/clock] * 166MHz ≈ 1266.5MB/s (counting MB as 2^20 bytes).
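
Or, written out as a trivial program (the same arithmetic - the only 'assumption' is that I quote the result in MB = 2^20 bytes):

Code:
#include <stdio.h>

/* 32-bit bus, double data rate, 166 MHz clock. The 1266.5 figure comes from
 * quoting the result in MB = 2^20 bytes; in decimal MB it is 1328. */
int main(void)
{
    const double bus_bytes     = 32.0 / 8.0;   /* 4 bytes per transfer  */
    const double transfers_clk = 2.0;          /* DDR: both clock edges */
    const double clock_hz      = 166.0e6;

    double bytes_per_s = bus_bytes * transfers_clk * clock_hz;       /* 1.328e9 */
    printf("%.1f MB/s (decimal)\n", bytes_per_s / 1e6);              /* 1328.0  */
    printf("%.1f MB/s (binary)\n",  bytes_per_s / (1024.0*1024.0));  /* 1266.5  */
    return 0;
}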

Phawx said:
If they were running in Dual Channel mode you go to 64 bit wide and get 64*166/8= 1,328MB/s
When connecting to PoP (stacked) memory the OMAP3530 uses a 32-bit interface, so no dual-channel mode is possible. AFAIK it does not have 2 DRAM interfaces either, so dual channel is out regardless of the memory used.

Phawx said:
Where are you getting 4 from though? Is that 32 bits / 8 to get 4 bytes?
It comes from 32 [bits] / 8 [bits/byte] = 4 [bytes].

Phawx said:
It's too bad dual channel isn't implemented...though I don't know if you can combine these technologies.
Yes, you can. But there are not enough pins in the PoP (stacked) memory interface, and AFAIK the OMAP does not have a second memory interface. So a HW redesign on both the CPU and the memory-packaging side would be in order... not feasible.

Phawx said:
Also, my last question. Does the SGX have features that compensate for such slow transfer rates?
SGX is a tiled architecture, so it's very bandwidth-efficient. Additionally it has a fair amount of cache, and the shader engines use hardware multithreading (which by itself hides a lot of latency).
So yes it has. :)
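
To illustrate the hardware-threading part: when one thread stalls on a texture or memory fetch, the core simply issues instructions from another thread that is ready, so the ALUs rarely sit idle. A conceptual sketch (this is just the general idea, not SGX's actual scheduler):

Code:
#include <stdbool.h>

/* Conceptual latency hiding via hardware threads: pick the next thread that
 * is not waiting on memory, round-robin from the last issued one.
 * Illustrative only - not SGX's real scheduling logic. */

enum { NUM_HW_THREADS = 4 };

struct hw_thread {
    bool waiting_on_memory;   /* stalled until its fetch returns */
};

static int pick_ready_thread(const struct hw_thread t[NUM_HW_THREADS], int last)
{
    for (int i = 1; i <= NUM_HW_THREADS; ++i) {
        int cand = (last + i) % NUM_HW_THREADS;
        if (!t[cand].waiting_on_memory)
            return cand;      /* issue from this thread this cycle */
    }
    return -1;                /* every thread stalled: pipeline bubble */
}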
 