[announce] c64_tools (DSP loader and IPC)


Hi _wb_, are the "Actual Power" measurements reliable? I used those as the basis for my test conclusions; I figured that the other numbers are mere guesses/extrapolations. I benchmarked with display + WiFi on because that's a real-world scenario (this is a graphical/visual benchmark, after all), and people probably have their WiFi on most of the time. Last but not least, WiFi+display should just offset the benchmark figures. Your system info utility is pretty cool, btw. I ran the benchmarks with sysinfo showing the "About" page (i.e. no UI updates).

p.s.: the power consumption spikes that can be seen in the diagrams above during idle time are me switching between the sysinfo window and the console (typing some cmdlines).
The "Actual Power" measurements are computed simply by multiplying the reported current by the reported voltage. This should be pretty accurate. The only caveat is that it is taking these measurements at the moment when the line to the log file gets written, so it will use the reported values at the end of each interval, not the average over the entire interval. With a 1 second interval the difference should be negligible.

I don't keep wifi on either - it uses little power when not in use, but I even disable my power LED just to save some extra power :)

If you would use the DSP for a music player, the scenario with display off is not that far-fetched.
 
Is there a way to do power saving when the DSP is not being used? Or at least to turn it back off? It consumes about 200mW on my unit, which is quite a lot - enough to reduce the lid-closed standby time by a factor of 3 or so.

That said, I ran that fractal benchmark on my unit and got similarly impressive results. Faster and less power.
 
That's the usual trouble with such measurements - the instant you start measuring things you've already skewed the results ;)

I don't think that using the DSP for MP3 decoding is worth it. Just did a quick check: underclocked the ARM to ~200 MHz, played an MP3 (~192 kbps VBR) in DeaDBeeF, turned off WIFI and the display, and let sysinfo produce a log. Power consumption was almost exactly 0.5 W, i.e. the Pandora could play MP3s for ~30 hours straight on its ~15 Wh battery. That's good enough for me.

From what I've seen/measured so far, my conclusion is that the DSP is more power-efficient for heavy tasks, and the GPP is better at power-saving / light tasks (like MP3 decoding).

<anecdote>
I tried to convince my boss to permit the release of a binary-only version of our C64+ graphics rasterizer, licensed exclusively for use on the Pandora (not OMAP3 in general). Our sales/marketing guy did not have any objections and thought it would be a cool thing to do. My boss was reluctant, though. He feared that people (outside the Pandora community) would pirate the software / use it w/o licensing it. Well, what can I say, it is a possibility, but then again there are other ways to do that (license it but report a wrong number of units shipped, or claim that it was evaluated but not used in the final product although it actually was, ..).

Anyways, at first he made fun of the Pandora given its price tag and outdated HW specs. When I told him that this essentially is a small notebook that, when attached to a USB keyboard/mouse/HD gfx-card, can be used for the usual office tasks and even runs the same office suite we use on our desktop PCs (LibreOffice), he seemed convinced ;) (At least ATM this does not seem possible on smartphones, which are designed for consumers, not producers, anyway.)
</anecdote>


@_wb_: As I said, I'm aware of that problem (DSP powerdown not working correctly) and I'll try to find a solution for it. btw: how can the LEDs be disabled?
 
Did you do some benchmarks? The SGX GPU vs. the C64+ software rasterizer?
 
I agree that using the DSP for MP3 decoding is not worth it. In Audacious, with the ARM clocked at 300 MHz, it takes about 400 mW. With the DSP, in the best case you might be able to reach 300 mW. The difference is not worth the trouble.

2D blitters on the DSP would be cool though (with alpha blending, maybe rotation/zoom and some effects too). Many games are more or less bottlenecked by blitting. I can also imagine that some emulators could get some extra oomph if they can offload the blitting to the DSP.

There's a menu option somewhere in the settings called LED settings, which allows you to disable the LEDs or adjust their brightness.
 
I'm uploading a video about your test bsp.

http://youtu.be/myyuSbemoyQ

Very good work :)

Tested using my CC Pandora @ 600 MHz
 
I agree that using the DSP for MP3 decoding is not worth it. In Audacious, with the ARM clocked at 300 MHz, it takes about 400 mW. With the DSP, in the best case you might be able to reach 300 mW. The difference is not worth the trouble.
Perhaps not in terms of power consumption, but if you can shave 10-20% CPU usage in a game by moving music playback to the DSP, that's actually a pretty huge saving.
 
So... could it be worth the effort to add DSP support to SDL for drawing? And to SDL_mixer for mixing?
 
Did you do some benchmarks? The SGX GPU vs. the C64+ software rasterizer?
Not yet, but I would be very surprised if the SGX in the Pandora were faster at e.g. alpha-blended blits than the C64 :)

@Farox: very cool! (and heh, I know that track. Isn't that from the C=64 game "The Human Race"? :D :D)

@_wb_: it should be relatively straightforward (at least on the DSP side). Integrating it into a video player is probably the hardest part (?)

@WizardStan: yes, of course. If the GPP is already under heavy load, it will always be advantageous to move MP3 decoding to the DSP.

@crow_riot: yes, I do think so. For performance reasons, blits for example should be grouped (collect blit calls in a displaylist in shared memory, then let the DSP process that at application-defined points in time).
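Roughly what I have in mind - just a sketch, the names (blit_cmd_t, blit_list_t) are made up and not part of c64_tools:

   /* Shared-memory displaylist sketch; all names are illustrative only. */
   #include <stdint.h>

   typedef struct {
      uint32_t src_addr;   /* physical address of source pixels */
      uint32_t dst_addr;   /* physical address of destination   */
      uint16_t w, h;       /* blit size in pixels               */
      uint16_t src_pitch;  /* source pitch in bytes             */
      uint16_t dst_pitch;  /* destination pitch in bytes        */
      uint8_t  alpha;      /* constant alpha (255 = opaque)     */
   } blit_cmd_t;

   typedef struct {
      volatile uint32_t num_cmds;  /* written by the GPP, read by the DSP */
      blit_cmd_t        cmds[1024];
   } blit_list_t;

   /* GPP side: collect blit calls during the frame.. */
   static void blit_queue(blit_list_t *l, const blit_cmd_t *c) {
      l->cmds[l->num_cmds++] = *c;
   }

   /* ..then, at an application-defined point (e.g. end of frame),
      send a single message to the DSP, which walks the whole list. */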

..and I am glad to report that I finally managed to fix the DSP powerdown sequence. (It did not work as described in the TI OMAP3530 reference manual, but I found a CCS GEL file at ti.com that has the proper sequence.)

after the c64 kernel module has been unloaded, power consumption is now back to normal. yay!
 
@Farox: very cool! (and heh, I know that track. Isn't that from the C=64 game "The Human Race"? :D :D)
Thanks, and yes, the music is from this (great :)) man

http://www.youtube.com/watch?v=LZVxBiiKJms

The remix used in my video is available at AmigaRemix.com.
 
Did you do some benchmarks? The SGX GPU vs. the C64+ software rasterizer?
Not yet, but I would be very surprised if the SGX in the Pandora were faster at e.g. alpha-blended blits than the C64 :)
The big performance problem with alpha blending in a traditional software rasterizer is the cost of loading from the framebuffer. Loading a cache line from main RAM takes on the order of 200+ ns (200+ cycles @ 1 GHz) even on a DM3730 unit (on OMAP3530 it's worse). So unless your framebuffer stays resident in L2 cache, which won't be the case if you're rendering 800x480x16bpp double buffered, or your blits naturally have very good locality of reference (which is hard if you don't at least swizzle the framebuffer), you're going to have problems.

The easiest solution is to tile a scene's worth of blit commands. SGX does this in hardware, so it always reads from tile memory and never from the framebuffer (assuming you start with a command to clear the framebuffer). You'd have to do this yourself in a software blitter to get performance that's even close. And to get the same performance the tiles would have to be resident in L1 - L2 alone won't suffice - but the smaller you make the tiles, the more overhead there'll be binning them. SGX alleviates some of this overhead because it does binning in a parallel hardware unit (but it still adds memory overhead). You can at least make the tiles bigger on the DSP than on the CPU, since it has more L1 cache, especially on OMAP3530.

Taking framebuffer access out of the equation and looking at the actual blend computations, the CPU (w/ NEON) and DSP have the advantage of much higher clock speeds (the SGX runs at 110 MHz on OMAP3530, 200 MHz on DM3730), but they have some disadvantages. For example, the SGX can convert pixels from 16bpp to the 32bpp internal tile format in parallel, so if you're using 16bpp there's less overhead, since the CPU/DSP have to do this in software without very specialized instructions for it. It also does dithering back to 16bpp for free. The blend itself is probably handled as one instruction on SGX, operating on one pixel (4x8-bit SIMD) per USSE pipe (of which it has two). On the CPU, once you have the color components + alpha in 8-bit packed form, you have to issue the following separate instructions for each channel: subtract, multiply long, shift narrow, and add. The subtract and add can be done on 16 pixels per cycle while the multiply long and shift narrow can be done on 8 pixels per cycle, so let's say it's 9 cycles for 8 pixels. There's also loading and storing, which can to some extent be dual-issued with the computations.
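For illustration, here is how one channel of that blend could look with NEON intrinsics. This sketch uses the src*a + dst*(255-a) formulation (two multiply-longs plus a rounding narrow, approximating /255 by >>8) instead of the exact subtract/multiply/shift/add sequence above:

   /* Constant-alpha blend of one 8-bit channel for 8 pixels (NEON). */
   #include <arm_neon.h>

   static inline uint8x8_t blend8(uint8x8_t src, uint8x8_t dst, uint8_t alpha)
   {
      uint8x8_t  a   = vdup_n_u8(alpha);
      uint8x8_t  na  = vdup_n_u8(255 - alpha);
      uint16x8_t acc = vmull_u8(src, a);   /* src * a, widened to 16 bit */
      acc = vmlal_u8(acc, dst, na);        /* + dst * (255 - a)          */
      return vrshrn_n_u16(acc, 8);         /* round and narrow to 8 bit  */
   }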

For the DSP it looks like to blend one pixel channel you have to do SUB4, MPYU4, PACKH4 (I think?) and ADD4. Three of those instructions need .L (and the other .M) so there's a lot of contention, requiring 3 cycles, so also 9 cycles for 8 pixels (both halves of the DSP used). You could do the loads and stores in the other cycles, and maybe use it for the packing/unpacking needed to get things in the right format, I don't really know. Or there could be a better way to do it altogether.
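And a matching sketch for the C64x using TI compiler intrinsics. This variant uses two MPYU4s plus ADD2/PACKH4 rather than the SUB4-based sequence above, and approximates /255 the same way by keeping only the high bytes:

   /* Blend one channel of 4 pixels on the C64x; needs TI's cl6x (c6x.h). */
   #include <c6x.h>

   static inline unsigned int blend4(unsigned int src4, unsigned int dst4,
                                     unsigned int a4)  /* 4 packed alphas */
   {
      unsigned int na4 = ~a4;            /* 255 - a for each byte        */
      double ps = _mpyu4(src4, a4);      /* four 16-bit src*a products   */
      double pd = _mpyu4(dst4, na4);     /* four 16-bit dst*(255-a)      */
      unsigned int lo = _add2(_lo(ps), _lo(pd));
      unsigned int hi = _add2(_hi(ps), _hi(pd));
      return _packh4(hi, lo);            /* keep high bytes, i.e. >>8    */
   }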

It'd be an interesting comparison to find out which is really the best, I guess.
 
@Farox: Yes, he's one of my all-time heroes :) (do you know this site http://www.6581-8580.com/ ? It has MP3 encodings for _a lot_ of SID tunes, captured from the original machines. A similar one also exists for Amiga music)

@Exophase: To get good speed / hide memory latencies, there's a programming trick on the C64 (actually, that's standard procedure on TI DSPs, not really a secret; it has already been described on e.g. StackOverflow): For fast graphics blitters, you set up three scanline buffers in L1DSRAM and two simultaneous DMA transfers. One DMA transfer streams the next source texture scanline into buffer A, the other DMA writes the last scanline buffer B processed by the DSP to the framebuffer. The third scanline buffer C is processed by the DSP (in parallel to the DMA transfers, of course). For triangle texture mapping, swizzling is mandatory or performance will drop by a factor of 10-12. The SGX will probably be faster at that task anyway. The instructions you mentioned are correct, i.e. you have to process multiple pixels with one assembly instruction (..SIMD..). It sure will be interesting to compare the DSP blit performance with the GPU. On the Pandora it should be possible to run an RGB565->RGB565 constant-alpha blit loop at ~87 Mpixel/sec (at least; we achieved ~80 Mpixel/sec on a system with a slower DSP and slower RAM).
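In (rough) code the pipeline looks like this - dma_start()/dma_wait()/process_scanline() are placeholder names for EDMA wrappers and the inner blend loop, not existing c64_tools functions:

   /* Triple-buffered scanline pipeline: while the DSP blends buffer C in
      L1DSRAM, one DMA prefetches the next source line into A and another
      writes the previously finished line from B to the framebuffer. */
   #include <stdint.h>

   extern void dma_start(int ch, const void *src, void *dst, uint32_t bytes);
   extern void dma_wait(int ch);
   extern void process_scanline(uint16_t *line, int w);

   enum { CH_IN = 0, CH_OUT = 1 };
   static uint16_t buf[3][800];  /* in L1DSRAM via the linker command file */

   /* pitch is in pixels */
   void blit_rows(const uint16_t *src, uint16_t *fb, int w, int h, int pitch)
   {
      int a = 0, b = 1, c = 2, t, y;
      dma_start(CH_IN, src, buf[c], w * 2);   /* prime the first line */
      dma_wait(CH_IN);
      for (y = 0; y < h; y++) {
         if (y + 1 < h)  /* stream the next source line into A */
            dma_start(CH_IN, src + (y + 1) * pitch, buf[a], w * 2);
         if (y > 0)      /* write the previously processed line from B */
            dma_start(CH_OUT, buf[b], fb + (y - 1) * pitch, w * 2);

         process_scanline(buf[c], w);         /* overlaps with both DMAs */

         if (y + 1 < h) dma_wait(CH_IN);
         if (y > 0)     dma_wait(CH_OUT);
         t = b; b = c; c = a; a = t;          /* processed->B, fetched->C */
      }
      dma_start(CH_OUT, buf[b], fb + (h - 1) * pitch, w * 2);  /* flush */
      dma_wait(CH_OUT);
   }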
 
@Exophase: To get good speed / hide memory latencies, there's a programming trick on the C64 [...] you set up three scanline buffers in L1DSRAM and two simultaneous DMA transfers. [...]
What you describe is clearly tiling :p At a single-scanline level no less, which is kind of intense (I'm sure there's enough room to do more scanlines?). It's cool that you can totally hide the framebuffer write-back like the SGX would, though. You may be able to do something similar on the CPU using the preload engine with L2, but it'd be difficult to get it to operate reliably.
 
Yep, it's really fast. And yes, you can say that the SGX's tile-based rendering approach is similar in that regard. There's plenty of room for more scanlines, but in practice 6 KB (of the 48 KB available) is enough (i.e. using more does not increase performance any further).

TI's MPEG4 decoder, for example, (probably) works in a similar way (I haven't seen its source, but it needs L1DSRAM and DMA channels/TCCs, too).

IIRC the ARM has SRAM, too (and DMA), but I have never used that. I am not even sure the OS allows you to. In any case, it's not worth trying since the DSP will be faster anyway.

I'll add DMA utility functions to the DSP "firmware" soon, so if you can spare some time, go ahead and write some fast blit loops! (As I said, I cannot do that myself because it would interfere with my job.) The DMA functions will be useful for e.g. mixing audio streams, too (same principle).
 
BAM! Sent a message "add these two numbers", got a response with the right sum. Easy as pie. Now just to figure out something useful to do.
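For the curious, the GPP side looks roughly like this - the function names below are made up for illustration, they are not the actual c64_tools API:

   /* Send two numbers to the DSP component, read back the sum.
      dsp_send()/dsp_recv() are hypothetical wrappers, not c64_tools calls. */
   #include <stdio.h>
   #include <stdint.h>

   extern int dsp_send(uint32_t cmd, uint32_t arg1, uint32_t arg2);
   extern int dsp_recv(uint32_t *result);

   int main(void) {
      uint32_t sum = 0;
      dsp_send(1 /* CMD_ADD */, 2, 3);   /* ask the DSP to add 2 + 3    */
      dsp_recv(&sum);                    /* block until the DSP replies */
      printf("DSP says: %u\n", (unsigned)sum);  /* expected: 5 */
      return 0;
   }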
 
A multiple-format music player that uses L and R to skip forwards and backwards through a playlist.

Even if it doesn't seem that useful, it was the very first idea for the DSP, a while ago.
 
@WizardStan: nice, sounds like the toolchain is working!

@Linux-SWAT: a whole player would not be a suitable task for the DSP (SD card access, ..), just the codecs are.

[attachment: plot-20130929-005049_dspidle.png - gnuplot chart of power consumption with the DSP idle]


I just did some finetuning regarding the DSP power consumption.

The system now consumes, according to sysinfo, ~218 mA (~873 mW) when the DSP is booted and idling.

I guess that's quite ok.

With the lid closed and WIFI disabled (and the DSP running), the system consumes ~0.35 W (see the gnuplot chart above).

And heh, when/if you write C64+ inline assembly, let me tell you that it is a bad idea to write

   asm("idle")

instead of

   asm(" idle")

since the former version simply defines a label :D - the TI assembler treats anything starting in column 1 as a label, and the asm() string is emitted into the assembly file verbatim, so the leading space matters (but that was not the only issue)
 
Last edited by a moderator:
80 Mpixel/sec is 6 layers of 800*480 pixels at over 34 frames per second (800*480*6 ≈ 2.3 Mpixel per frame). This is very nice.
 