ZXDunny
Joined: Oct 12, 2010 · Messages: 2,585
Long post alert
Ok, as you all probably know by now my pet project is SpecBAS, the BASIC interpreter that exists on the repo as PandaBAS.
Due to the ARM CPU's rather poor floating-point performance, it's quite slow. However, while updating the code I've run into a rather interesting (and previously unconsidered) slowdown - the display rendering.
I recently updated SpecBAS with the aim of adding 32bit graphics handling. To this end, the original 8bit rendering surface (the part the user sees) had to be converted to 32bit, and my compositor was altered to render the current graphics system to that. A little background then:
SpecBAS provides a drawing surface (the screen) upon which the user can render text and graphics. It also provides windows, which are themselves plain graphical surfaces. Whenever a frame is rendered, the screen and all its windows are blitted to the SDL surface that is declared when SpecBAS is run. I've limited drawing to only the rectangles that have actually changed, and obviously only to windows that are visible. Previously I copied the display data to the SDL surface 4 pixels at a time (transferring via longword pointers), but now of course with a 32bit display target I have 4 times the amount of data to move...
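To give an idea of what the compositor now has to do per pixel, here's a rough sketch of that inner loop - illustrative only, not the actual PandaBAS source, and it ignores the dirty-rectangle clipping:

type
  // 256-entry palette of RGBA values, packed to match the 32bpp SDL surface
  TPalette = array[0..255] of LongWord;

// Expand an 8bpp window surface through its palette into the 32bpp
// display surface, one pixel at a time.
procedure Composite8To32(Src: PByte; Dest: PLongWord;
                         const Pal: TPalette; NumPixels: Integer);
var
  i: Integer;
begin
  for i := 1 to NumPixels do
  begin
    Dest^ := Pal[Src^];   // one palette lookup and one 32bit write per pixel
    Inc(Src);
    Inc(Dest);
  end;
end;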
As a test, I ran a small program which plots pixels to the display - a plasma cloud function called recursively in a diamond-square algorithm on a 512x400 image. The results were interesting:
1. Old renderer to an 8bit target, blitting longwords - 102 seconds.
2. New renderer blitting one pixel at a time - 148 seconds.
Clearly unacceptable, and the whole thing feels quite sluggish. I messed around with instruction ordering, unrolled some loops and stored common expressions in variables (a sketch of that sort of change is below the results), and got it down to:
3. New renderer, after those tweaks - 128 seconds.
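The sort of change involved - again an illustration rather than the real code, using the TPalette type from the sketch above - is the same loop unrolled four pixels per iteration, so the loop counter and branch are only paid once per four pixels:

// Same job as before, unrolled 4 pixels per iteration, with a small
// remainder loop for pixel counts that aren't a multiple of 4.
procedure Composite8To32Unrolled(Src: PByte; Dest: PLongWord;
                                 const Pal: TPalette; NumPixels: Integer);
var
  i: Integer;
begin
  for i := 1 to NumPixels div 4 do
  begin
    Dest^ := Pal[Src^]; Inc(Src); Inc(Dest);
    Dest^ := Pal[Src^]; Inc(Src); Inc(Dest);
    Dest^ := Pal[Src^]; Inc(Src); Inc(Dest);
    Dest^ := Pal[Src^]; Inc(Src); Inc(Dest);
  end;
  for i := 1 to NumPixels mod 4 do
  begin
    Dest^ := Pal[Src^];
    Inc(Src);
    Inc(Dest);
  end;
end;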
Not too far behind the old renderer - only about 25% slower. For a laugh I disabled the compositor and ran the test again, to gauge how much time PandaBAS spends in there... and got 57 seconds.
So more than half of the time PandaBAS spends running goes on drawing the display. This isn't a problem on x86 CPUs these days, as the compositor runs in a separate thread and therefore (because I'm careful to set affinity properly) gets a core to itself and doesn't interfere with the interpreter.
The Pandora only has one core - so time spent rendering the display is time that is unavailable to execute BASIC code.
How do I get around this?
Can I use GLES to do the heavy lifting? All my windows are stored as a 256-entry palette of RGBA values followed by a plain memory region that represents their surfaces, 8bpp. Can GLES be used in FPC/Lazarus?
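If it is usable, what I have in mind is roughly the standard trick of keeping each window as an 8bpp texture plus its palette as a 256x1 RGBA texture, and letting a fragment shader do the lookup. A very rough, untested sketch follows - the names are mine, and it assumes a working GLES2 context and whatever GLES2 header unit your FPC install provides:

const
  // GLSL ES fragment shader: one palette lookup per pixel, done on the GPU
  PalettedFS: AnsiString =
    'precision mediump float;'#10 +
    'uniform sampler2D uIndex;'#10 +    // 8bpp window surface (LUMINANCE)
    'uniform sampler2D uPalette;'#10 +  // 256x1 RGBA palette
    'varying vec2 vTex;'#10 +
    'void main(void) {'#10 +
    '  float i = texture2D(uIndex, vTex).r;'#10 +
    '  gl_FragColor = texture2D(uPalette, vec2((i*255.0 + 0.5)/256.0, 0.5));'#10 +
    '}'#10;

// Push a window's index data and palette into two textures; the shader
// above then does the 8bpp -> 32bpp expansion during the draw. Assumes the
// palette bytes are stored in R,G,B,A order.
procedure UploadWindow(IndexTex, PalTex: GLuint; Indices, Palette: Pointer;
                       W, H: GLsizei);
begin
  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     // 8bpp rows aren't 4-byte aligned

  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, IndexTex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, W, H, 0,
               GL_LUMINANCE, GL_UNSIGNED_BYTE, Indices);

  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, PalTex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, Palette);
end;

After the first upload, only the dirty rectangles would need glTexSubImage2D each frame, and the palette texture only needs re-uploading when a palette actually changes - so the CPU side would shrink to a couple of texture uploads and a quad draw per window.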
Can the DSP help? Is it even possible to program the DSP in Lazarus?
Both of the above options would be a fire-and-forget strategy - I tell it to render the display and then go off and continue interpreting; it doesn't matter if the memory regions that hold the display and windows get altered during a render.
Given that I also target the Raspberry Pi, would either of the above be of any use at all?
Or do I need to optimise my compositor further to try and get performance up as far as possible?
Any thoughts, anyone?
Cheers,
D.