Pandora 3D Chip Questions When Working?


Yeah, you just get a tainted warning; not a big thing (in fact I guess most people here would see that if they looked at their dmesg output, assuming they're pragmatic about their video card drivers ;))

I wonder how the binary blob is arranged, so that changes in ABI will have no effect on it, or conversely, will changes in the calling ABI cause a binary blob to be useless (or rather useless without an additional translation layer)?
 
Play Payback on the GP2X and realise quite a lot can be done in software ;)

Although that sucks: it's a lot more work to write a software 3D renderer. Come on, ImgTec! >.>
 
mali said:
May I assume that you've received a dev board after all?

No, and I won't get one. But I have a Beagle :)

lardman said:
I wonder how the binary blob is arranged, so that changes in ABI will have no effect on it, or conversely, will changes in the calling ABI cause a binary blob to be useless (or rather useless without an additional translation layer)?

I hope the software toolchain ABI won't change for many years :) The only thing that could potentially change is support for passing FP values in registers, and I don't think any kernel module uses FP (a quick look at the ImgTec kernel module doesn't show any FP instructions, though that doesn't prove much).
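To make the FP point concrete, here's a minimal sketch (my own illustration, not taken from the ImgTec module; the function name and compile commands are just for demonstration) of why the floating-point calling convention is the part of the ABI a binary blob would care about:

Code:
#include <stdio.h>

/* Under the ARM softfp convention this float argument arrives in
 * core register r0; under the hard-float (hardfp) convention it
 * arrives in VFP register s0. A blob built for one convention
 * reads the wrong register under the other, which is exactly the
 * kind of ABI break discussed above. */
static float scale_by_two(float x)
{
    return x * 2.0f;
}

int main(void)
{
    printf("%f\n", scale_by_two(1.5f));
    return 0;
}

/* On an ARM toolchain, compare the generated argument passing:
 *   gcc -S -mfloat-abi=softfp abi_demo.c
 *   gcc -S -mfloat-abi=hard   abi_demo.c
 */

Since kernel code avoids FP entirely, a kernel-side blob largely sidesteps this question.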
 
I'm getting annoyed by all the end-users that expect programmers to optimize stuff that's already fast enough.

Code readability is always more important, especially when you're trying to get a 10ms GUI action down to 1ms or some such nonsense. "Premature optimisation is the root of all evil" is a very wise statement.
 
sindbad said:
I'm getting annoyed by all the end-users that expect programmers to optimize stuff that's already fast enough.

Code readability is always more important, especially when you're trying to get a 10ms GUI action down to 1ms or some such nonsense. "Premature optimisation is the root of all evil" is a very wise statement.

I agree, but what would bother me more are lazy developers who could end up saying it runs fast enough with your chip overclocked to 900MHz :lol:
As has already been said above, finding the right point between full optimization and good enough speed is the way to go, and obviously just recompiling something is often not the right point.
 
Username said:
fusion_power said:
True, but I think the point is: Hardware Acceleration->less MHz needed->less power consumption->longer Battery life->longer fun. ;)
So why use 500MHz in Software Mode if we can have the same speed with 200MHz (or so) with Hardware Acceleration? ^_^
I hope the overkill of specs doesn't make coders lazy and leave low-key stuff not fully optimized.

Sadly, this is the situation on PCs today. The latest "joke" was GTA4 for PC: even quad-core CPUs and a GeForce 260 barely reach 30 FPS :lol:

I really hope coders don't forget that the Pandora is a BATTERY-POWERED MOBILE HANDHELD DEVICE and there's no need to run a Pac-Man game at 500MHz. ^_^
If I were a coder I would say "software optimizing is fun!", but I'm not a coder, so I can't say that.
 
fusion_power said:
Sadly, this is the situation on PCs today. The latest "joke" was GTA4 for PC: even quad-core CPUs and a GeForce 260 barely reach 30 FPS :lol:
That's not the fault of lazy programming per se; their code was optimized, just not for the PC. Take a look at a lot of those PC/console titles and the same effect will be noticeable, even more so with triple cross-development (PC, Xbox 360, PS3). Oh, FYI, you're still better off with a 3GHz dual core than a 2.5GHz quad core for gaming; very few games use more than 2 cores.

What's needed is a proper foundation to build upon. There is always a trade-off between time spent optimizing and time spent adding things. If an optimization takes you 3 hours for only a 5% speed increase that may not even be cumulative, that's not so good. Spend 3 hours on something that takes your rendering from 170ms to 70ms and it's worth it (even more so when those routines end up being used more than once in sequence): 100ms + 100ms + ... is going to be noticeable ;)

Optimizing big problems after the program is finished or largely operational bites people in the ass most of the time; your optimizations can break things (some of which you may not even notice during your tests). When you develop and you see a function using way too many resources or cycles, stop, analyze it, ask whether it can be improved, and if so, do it. Don't add a "TODO: optimize in the future"; there's a 90% chance it's never going to happen.
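A related sketch of my own (render_frame() is a hypothetical stand-in for the routine under suspicion): measuring on the spot is cheap, and it's what tells you whether those 3 hours are worth spending.

Code:
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the routine under suspicion. */
static void render_frame(void)
{
    /* ... work ... */
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    render_frame();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Milliseconds elapsed; good enough to spot a 170ms -> 70ms win. */
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("render_frame: %.2f ms\n", ms);
    return 0;
}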
 
javaJake said:
Yes, it does require a particular flag to be set within the binary (which is illegal to set without certain requirements, etc.). If you don't do exactly what the authors of the kernel want, you're in trouble (sound familiar?*) and your module is "tainted".



No, it doesn't sound "familiar" (i.e. like Microsoft) in the slightest. You seem to be indicating that this kind of concern on the part of the kernel developers is draconian in some way. Instead, it's a requirement that clarifies that support, problems, etc. that arise from using the module aren't the responsibility of the kernel developers. If you want to ship a binary that no one can see the code for, it's your responsibility to fix the problems (even if they aren't really yours), since no one else can see where the issue lies.

It's also something that affects only a small subset of developers, those who insist on writing closed code for a predominantly open environment. This is unlike Microsoft's licensing agreements, which affect every single user out there.

It's the difference between placing restrictions on a small group to ensure the maximum amount of freedom for everyone, and placing restrictions on everyone to ensure a small group is able to make maximum profit.
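For anyone wondering what the "flag" in the quote actually is: it's the module's license string. Here's a minimal sketch of the mechanism (the module itself is a trivial example of mine, but MODULE_LICENSE is the real kernel macro):

Code:
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

/* The string the kernel checks at load time. A value it doesn't
 * recognise as GPL-compatible taints the kernel and hides the
 * GPL-only exported symbols from this module. */
MODULE_LICENSE("GPL");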
 
craigix said:
I've just been told a GPL driver is about to be released (before Christmas).
Cool! Does that mean they're open source now, too?
 
Excellent news! For a moment there I thought it was a GPL OpenGL driver.
 
craigix said:
I've just been told a GPL driver is about to be released (before Christmas).

What GPL driver? Are you sure about that? That could be impressive news!

Do you mean a fully featured driver for both 2D and 3D will appear that is open source? GPL means Free Software, so it's open source too. If not, then you explained it wrong, making some of us happy and then sad.

If it's a GPL driver, I hope they use Gallium3D soon.
 
I believe he means a GPL (open source) driver that relies on a binary blob, i.e. the kernel driver itself is open source, but the confidential/trade-secret parts are closed source. The last bit doesn't matter much, however, as you only need the GPL source to keep the driver up to date with newer kernel releases.

Then of course you have the OGL libraries which interface to the kernel module.
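As a rough picture of that interface (illustrative only; the device node and ioctl number here are hypothetical, not the actual ImgTec ones), a userspace GL library typically opens a device node exposed by the kernel module and drives it with ioctls:

Code:
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define PVR_IOCTL_SUBMIT 0x1234   /* hypothetical command number */

int main(void)
{
    int fd = open("/dev/pvrsrv", O_RDWR);   /* hypothetical node */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* The GL library marshals rendering requests through calls
     * like this; the open kernel shim passes them on. */
    if (ioctl(fd, PVR_IOCTL_SUBMIT, 0) < 0)
        perror("ioctl");
    close(fd);
    return 0;
}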

The 2D driver has always been open-source.
 