Reverse Engineering PowerVR Is Now a High Priority


CPUnltd

Just thought I'd alert the community here to this article about the very GPU the Pandora is using - and the fact that it's generally becoming the commonplace embedded GPU at the moment (till one of the open hardware groups comes up with a nice embedded GPU to play with :p)...

http://www.phoronix.com/scan.php?page=news_item&px=OTEwMA
 
So the Pandora could also benefit from this? ^^ Sounds nice. Does this mean we could get a driver that really makes use of all the graphics power the PowerVR offers? And would we get a more optimized driver that is better integrated into the Linux OS?
 
The way I see it, if this happens it will most likely be slower than the blob, but it would allow full OpenGL (possibly up to 3). So we gain functionality at the cost of speed.
But I think this is years away from anything. (Reading the mailing list thread didn't give me much of a good feeling.)
 
fusion_power said:
So the Pandora could also benefit from this? ^^ Sounds nice. Does this mean we could get a driver that really makes use of all the graphics power the PowerVR offers? And would we get a more optimized driver that is better integrated into the Linux OS?

Just the opposite. When it comes out (2-3 SGX generations from now), it will be slower and more buggy...
 
So lkcl posted a call on the celinux mailing list to reverse engineer the PowerVR SGX, and someone ran with it and made it a high-priority task for the FSF... but who is actually working on this? Maybe it'll be something to be excited about when we see real engineering.

Speaking of which, the wiki page says the SGX USSE ISA has been reverse engineered... I wonder where information on this can be found. Seems like a useful thing to link to.

I do want to know what all of this "we should use LLVM to leverage multi-core CPU" stuff is about >_>
 
warmi said:
fusion_power said:
So the Pandora could also benefit from this? ^^ Sounds nice. Does this mean we could get a driver that really makes use of all the graphics power the PowerVR offers? And would we get a more optimized driver that is better integrated into the Linux OS?

Just the opposite. When it comes out (2-3 SGX generations from now), it will be slower and more buggy...
Sounds like "open source" and "good drivers" are still two different things? :(
I read somewhere that someone wanted to reverse engineer this stuff for the Pandora years ago; it seems not much has happened since.

I just thought that if we had better drivers, we would get better and faster software with less "coding around crappy drivers". ^^"
 
A movement is a movement, and Nvidia is a testament to how reverse engineering GPUs can go, but I think the mentality that it's always slow-grinding gears is one that hinders progress. This is just my opinion. I'm not expecting excess hype and enthusiasm, because that's like the sugar rush from eating too many sweets at once... it's great while it lasts, but the feeling is short-lived, and after the crash you just don't wanna be bothered anymore.

But I would love to see some real (and realistic) enthusiasm behind this move... because it's likely that this GPU will be in the next iteration of the Pandora, unless they go with Nvidia's Tegra line or whatever AMD ends up offering down the road. Or unless we get an open source GPU somewhere in the mix. But none of that has been put into play yet... the PowerVR project is about the only thing that looks to be in our favor (however far off it may be) in terms of OPT keeping costs down and offering good graphics power while staying as open source as possible.

But developments could come at any time. So this is essentially an "awareness" post more than anything else... but I hope it does better than the nouveau project, though they have been making some good headway as of late.
 
Hmm, OK. ^^
But would it be a good idea to send the PowerVR SGX company a Pandora and ask them if they could do a good driver for it? ^^
I just think that if the official drivers were perfect, we wouldn't even need open source drivers. :)
I really wonder why companies make all these nice chips and chipsets but fail to deliver good drivers with them, so the community has to make its own stuff.
You could almost think they want to sell you their hardware but don't want you to make good use of it. :ph34r:
 
fusion_power said:
Hmm, OK. ^^
But would it be a good idea to send the PowerVR SGX company a Pandora and ask them if they could do a good driver for it? ^^
I just think that if the official drivers were perfect, we wouldn't even need open source drivers. :)
I really wonder why companies make all these nice chips and chipsets but fail to deliver good drivers with them, so the community has to make its own stuff.
You could almost think they want to sell you their hardware but don't want you to make good use of it. :ph34r:

From what I hear, the ImgTech folks have reference drivers, but generally it is up to the licensee to integrate/rewrite them to suit their platform of choice (TI in this case).

For instance, the Apple folks are already on their fifth or sixth major driver version, I think - it took them almost two years to get multisampling going. They actually optimize their drivers all the time, sometimes even coming out with new GLES extensions, etc. ... it is a lot of work.
 
While having a binary blob for a 3D driver may not be ideal - who says it's slow? Are there any figures to back this up?
 
I don't think anyone said it's slow except for uninformed people speculating in this thread :p

Having an open driver could also potentially allow for full OpenGL on the Pandora, but that's a software compatibility thing, not a speed thing. An open driver would also mean we wouldn't be beholden to TI and PowerVR for further improvements, but, from a practical perspective, that's a fairly minor point.

Still, I hope this catches on.
 
I don't get the need for full OpenGL - GLES is a subset and porting stuff isn't that big a deal - there are even compatibility layers!

The only things really "missing" from GLES are calls like glVertex*, glNormal*, etc. - all deprecated even in desktop OpenGL - vertex arrays are much faster than thousands of calls to glVertex* and friends (rough sketch below)...
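
A minimal sketch of that difference, assuming a GL context already exists - the function names and triangle data here are made up for illustration, and the array path is written against the OpenGL 1.x / GLES 1.x fixed-function API:

#include <GL/gl.h>  /* desktop GL header; the array path also exists in GLES 1.x via <GLES/gl.h> */

/* Deprecated immediate mode: one library call per vertex.
 * Desktop GL only - these calls were never in GLES and are gone
 * from the OpenGL 3.x core profile. */
void draw_triangle_immediate(void)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

/* Vertex-array equivalent: the data is handed over in one batch,
 * so drawing is a single call instead of thousands of glVertex* calls. */
void draw_triangle_array(void)
{
    static const GLfloat verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}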

As for being beholden to TI and PowerVR, well, they are kind of beholden to their customers if they want to sell products in the future. All said and done, they have to consider that a lot of people are *very* interested in Tegra, and for all TI knows, Pandora II could use one of that family - not to mention the many other vendors designing new products.

I don't subscribe to casting TI and PowerVR as big evil corporations blackmailing us with their binary blobs; it's just the last strangled shout of an outdated paradigm. I'd have thought there would be very little to be learnt about the hardware implementation from an open source driver - which is what I think they are worried about. Alas, it's not an uncommon way of thinking.
 
Tempel said:
I don't think anyone said it's slow except for uninformed people speculating in this thread :p

Having an open driver could also potentially allow for full OpenGL on the Pandora, but that's a software compatibility thing, not a speed thing. An open driver would also mean we wouldn't be beholden to TI and PowerVR for further improvements, but, from a practical perspective, that's a fairly minor point.

Still, I hope this catches on.

Well, right now it's pretty slow, capable of only 0 anything per second ;p It's a pretty valid expectation that if anything does happen, it'll have difficulty competing in performance with the companies that have huge budgets for this sort of thing and employ people who have been writing GPU drivers for a long time. That's a pretty niche talent that you don't see a ton of in the wild.
 
Alec said:
Exophase said:
I do want to know what all of this "we should use LLVM to leverage multi-core CPU" stuff is about >_>

You should read up on Gallium3D :)

I know what LLVM is, and I know how it could be used to implement a shader compiler for Gallium - what I don't understand is why you'd want to target CPUs (regardless of thread count) when generating shaders for a GPU that supports them. I think lkcl is mixing up technologies.
 
Yeah, Phoronix has a pretty good record of being clueless.

I recall reading that LLVM can be used to parallelize code segments and perform computations through the GPU, at runtime, but I don't think it has much to do with Gallium3D.
 