Gprof


hdonk

Hi,

I'm trying to get caprice32 up and running on my BeagleBoard under Angstrom, so I can get it onto the Pandora when I get my grubby mitts on mine... However, I'm only up to a 50% emulation rate with unoptimised code (and no audio). So I've tried running the binary after compiling it with -pg, but the resultant gmon.out only has call counts and no timing info. Anybody have any luck with profiling in this environment?

hdonk.
 
I remember when I did it on the GP2X you had to make sure you exited the application normally, otherwise you wouldn't get any of the recorded data. And I think you couldn't let gmenu start again.
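That matches how gprof works: the profiling data is only flushed to gmon.out by an atexit() handler, so the program has to leave through exit() or a normal return from main. If a launcher kills the process outright there is nothing to catch, but for a plain SIGTERM/SIGINT you can at least make sure the main loop winds down cleanly. A minimal sketch of the idea (the frame call is a placeholder; none of this is caprice32 code):

#include <signal.h>

static volatile sig_atomic_t want_quit = 0;

static void request_quit(int sig)
{
    (void)sig;
    want_quit = 1;              /* just ask the main loop to stop */
}

int main(void)
{
    signal(SIGINT,  request_quit);
    signal(SIGTERM, request_quit);

    while (!want_quit) {
        /* ... emulate one frame ... */
    }

    /* normal return from main: atexit handlers run and gmon.out is written */
    return 0;
}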
 
Well, thanks for all that feedback. After using the GCC-based profiling optimisations, I've gained a speedup of about 10%, which isn't going to be enough. However, after using oprofile, it turns out about 60% of the time is spent doing the palettised to 8bpp video buffer conversion. I think the best bet at this point is going to be to rewrite the graphics gate array part of Caprice straight in OpenGL. :blink:
 
I'm not sure OpenGL will be the way to go: does it support paletted textures?
Another way is to update only the things that have changed since the last frame; that can spare a lot of conversions.
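Something along these lines, as a rough sketch only (the dimensions and convert_line() are invented for illustration, not taken from caprice32): keep a copy of the last indexed frame and only run the expensive conversion on scanlines that actually changed.

#include <stdint.h>
#include <string.h>

#define SCREEN_W 384
#define SCREEN_H 272

/* hypothetical: converts one palettised scanline into the output surface */
extern void convert_line(const uint8_t *src, int y);

static uint8_t prev_frame[SCREEN_H][SCREEN_W];

static void blit_changed_lines(uint8_t src[SCREEN_H][SCREEN_W])
{
    for (int y = 0; y < SCREEN_H; y++) {
        /* unchanged since the previous frame: skip the conversion */
        if (memcmp(src[y], prev_frame[y], SCREEN_W) == 0)
            continue;

        convert_line(src[y], y);
        memcpy(prev_frame[y], src[y], SCREEN_W);
    }
}

The memcmp isn't free either, but it's a linear scan rather than a per-pixel lookup and write, so on mostly static screens (the BASIC prompt, menus, a lot of games) it should win comfortably.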
 
Lazy updating is good! Which is why I'll have to dig out my copy of the Amstrad hardware manual, remind myself how the graphics subsystem works, figure out how Caprice is emulating it, then reimplement it using good old shaders. I think it should give a significant speed boost, except it's going to take time. Heyho, it's not like my Pandora is arriving tomorrow :lol:
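For what it's worth, on GLES2-class hardware like the Pandora's SGX the usual trick isn't a paletted texture at all, but doing the lookup in a fragment shader: upload the indexed screen as one texture and the palette as a 256x1 texture, and let the GPU do the per-pixel work. A rough sketch of such a shader (the uniform names are invented; the palette texture wants NEAREST filtering so the lookup lands on the right texel):

static const char *palette_lookup_fs =
    "precision mediump float;\n"
    "uniform sampler2D u_screen;   /* 8-bit indices, e.g. GL_LUMINANCE  */\n"
    "uniform sampler2D u_palette;  /* 256x1 texture holding the colours */\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    float idx = texture2D(u_screen, v_texcoord).r;  /* index / 255.0 */\n"
    "    gl_FragColor = texture2D(u_palette, vec2(idx, 0.5));\n"
    "}\n";

Each frame you'd then only upload the one-byte-per-pixel index buffer and the handful of palette entries, instead of converting the whole buffer on the CPU.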
 
Ah, SOFT968. Full of such joys as KL_WIPE_BOTTOM. ;)

Sorry, I could have told you that the framebuffer transform was the slow part - I profiled Caprice32 on my Zaurus a while ago. No decent video hardware in it to pull me out of that pickle, though :(
 
Yup, that's the one. It's in the loft somewhere... May just find a soft copy for now though :) And you've hit on one of the reasons the Pandora is going to be one heck of a platform to have in your pocket - unplumbed power!
 
Not sure if I understood your problem correctly, but: I guess you will have to compile your sources with -fno-omit-frame-pointer, otherwise gprof cannot backtrace to the specific function to tell you how long it spent in function XY.
Maybe this helps.

BR
paines
 
The problem seems to be related to whatever timer the profiling code is using for its timestamps. The function call counts seem correct, so I don't believe -fno-omit-frame-pointer will help. However, as I said earlier, oprofile gave me enough information to decide that rewriting the 6845 emulation in OpenGL is the way to go.
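In case anyone else hits the same thing: gprof's timings come from SIGPROF-driven sampling (the profil()/setitimer(ITIMER_PROF) mechanism), so if the kernel or libc on the board never delivers that signal you get exactly this pattern of call counts with empty times. A quick stand-alone check, assuming ordinary POSIX setitimer() (nothing here comes from caprice32):

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_prof(int sig)
{
    (void)sig;
    ticks++;
}

int main(void)
{
    struct itimerval it;
    volatile double x = 0.0;
    long i;

    signal(SIGPROF, on_prof);

    /* ask for a 10 ms profiling tick, the same ballpark as profil() uses */
    it.it_interval.tv_sec  = 0;
    it.it_interval.tv_usec = 10000;
    it.it_value = it.it_interval;
    setitimer(ITIMER_PROF, &it, NULL);

    /* burn some CPU so the timer has something to charge against */
    for (i = 0; i < 50000000L; i++)
        x += i * 0.5;

    printf("SIGPROF ticks: %d (zero would explain the empty timings)\n",
           (int)ticks);
    return 0;
}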

hdonk
 