Did you do any benchmarks? The SGX GPU vs. the C64x+ software rasterizer?
Not yet, but I would be very surprised if the SGX in the Pandora is faster at e.g. alpha-blended blits than the C64x+.
The big performance problem with alpha blending in a traditional software rasterizer is the cost of loading from the framebuffer. Loading a cache line from main RAM takes on the order of 200+ns (200+ cycles @ 1GHz) even on a DM3730 unit (on OMAP3530 it's worse). So unless your framebuffer stays resident in L2 cache, which won't be the case if you're rendering 800x480x16bpp double buffered, or your blits naturally have very good locality of reference (which is hard if you don't at least swizzle the framebuffer), you're going to have problems.

The easiest solution is to tile a scene's worth of blit commands (rough sketch below). SGX does this in hardware, so it always reads from tile memory and never from the framebuffer (assuming you start with a command to clear the framebuffer). You'd have to do this yourself in a software blitter to get performance that's even close. And to get the same performance the tiles will have to be resident in L1 - L2 alone won't suffice - but the smaller you make the tiles, the more overhead there'll be binning blits into them. SGX alleviates some of this overhead because it does the binning in a parallel hardware unit (but it still adds memory overhead). You can at least make the tiles bigger on the DSP than on the CPU, since it has more L1 cache, especially on OMAP3530.
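To make the binning idea concrete, here's a rough sketch of what a tiled software blitter could look like. The tile size, the fixed-size per-tile command lists, and all the names are made up for illustration - this isn't how SGX or any existing blitter actually structures it:

```c
#include <stdint.h>

/* Illustrative only: 32x32 tiles, a hard cap on commands per tile, and
 * the blit_cmd layout are assumptions, not a real API. */
#define SCREEN_W  800
#define SCREEN_H  480
#define TILE_W    32
#define TILE_H    32
#define TILES_X   ((SCREEN_W + TILE_W - 1) / TILE_W)
#define TILES_Y   ((SCREEN_H + TILE_H - 1) / TILE_H)
#define MAX_CMDS  256

typedef struct { int x, y, w, h; const uint16_t *src; } blit_cmd;
typedef struct { const blit_cmd *cmds[MAX_CMDS]; int count; } tile_bin;

static tile_bin bins[TILES_Y][TILES_X];

/* Binning pass: record each blit in every tile its rectangle overlaps.
 * This is the part SGX does in a parallel hardware unit. */
static void bin_blit(const blit_cmd *c)
{
    int tx0 = c->x / TILE_W;
    int ty0 = c->y / TILE_H;
    int tx1 = (c->x + c->w - 1) / TILE_W;
    int ty1 = (c->y + c->h - 1) / TILE_H;

    if (tx0 < 0) tx0 = 0;
    if (ty0 < 0) ty0 = 0;
    if (tx1 >= TILES_X) tx1 = TILES_X - 1;
    if (ty1 >= TILES_Y) ty1 = TILES_Y - 1;

    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++) {
            tile_bin *b = &bins[ty][tx];
            if (b->count < MAX_CMDS)
                b->cmds[b->count++] = c;
        }
}

/* Rendering pass: composite one tile at a time into a small buffer that
 * stays cache-resident, then write it to the framebuffer exactly once,
 * so the blends never have to read pixels back from main RAM. */
static void render_tiles(uint16_t *framebuffer)
{
    static uint16_t tile[TILE_H][TILE_W];   /* small enough to sit in L1 */
    (void)framebuffer;
    for (int ty = 0; ty < TILES_Y; ty++)
        for (int tx = 0; tx < TILES_X; tx++) {
            /* 1. clear 'tile' (the equivalent of the clear command above)
             * 2. blend each bins[ty][tx].cmds[i] into 'tile'
             * 3. copy 'tile' to the framebuffer - the only main-RAM write */
        }
}
```

The point of the structure is just that step 2 only ever touches the small tile buffer; how you clip each command to the tile and how big the tiles can be before they fall out of L1 is where the real tuning would go.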
Taking framebuffer access out of the equation and looking at the actual blend computations, the CPU (w/NEON) and DSP have the advantage of much higher clock speeds than the SGX (110MHz on OMAP3530, 200MHz on DM3730), but they have some disadvantages. For example, the SGX can convert pixels from 16bpp to its 32bpp internal tile format in parallel, so if you're using 16bpp it has less overhead; the CPU/DSP have to do that conversion in software without very specialized instructions for it. It also does dithering back to 16bpp for free. The blend itself is probably handled as one instruction on SGX, operating on one pixel (4x8-bit SIMD) per USSE pipe (of which it has two). On the CPU, once you have the color components + alpha in 8-bit packed form, you have to do the following separate instructions for each channel: subtract, multiply long, shift narrow, and add. The subtract and add can be done on 16 pixels per cycle while the multiply long and shift narrow can be done on 8 pixels per cycle, so call it 3 cycles per channel, or 9 cycles for 8 pixels across the three color channels. There's also loading and storing, which can to some extent be dual-issued with the computations.
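As a rough illustration of the NEON side, here's one 8-bit channel of 8 pixels done with intrinsics. It uses a slightly different but equivalent formulation (multiply, multiply-accumulate, shift-narrow) rather than the exact subtract/mull/shrn/add sequence above, so treat it as a sketch of the cost, not tuned code:

```c
#include <arm_neon.h>
#include <stdint.h>

/* Blend one 8-bit channel of 8 pixels:
 * result = (src*alpha + dst*(255-alpha)) >> 8
 * (the >>8 is the usual cheap approximation of dividing by 255). */
static inline uint8x8_t blend_channel_u8(uint8x8_t src, uint8x8_t dst,
                                         uint8x8_t alpha)
{
    uint8x8_t  inv = vsub_u8(vdup_n_u8(255), alpha); /* 255 - alpha         */
    uint16x8_t acc = vmull_u8(src, alpha);           /* widen: src * alpha  */
    acc = vmlal_u8(acc, dst, inv);                   /* += dst * (255 - a)  */
    return vshrn_n_u16(acc, 8);                      /* narrow back to 8bpc */
}
```

In a real blitter you'd also have the de-interleave/re-interleave (e.g. vld4/vst4 for 32bpp sources) or the 16bpp unpack/repack around this, which is the format-conversion overhead mentioned above.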
For the DSP, it looks like blending one channel takes SUB4, MPYU4, PACKH4 (I think?), and ADD4 - those are packed 4x8-bit ops, so each sequence covers 4 pixels' worth of one channel. Three of those instructions need .L (and the other .M), so there's a lot of contention, requiring 3 cycles per channel - which also works out to 9 cycles for 8 pixels with both halves of the DSP used. You could do the loads and stores in the other cycles, and maybe use them for the packing/unpacking needed to get things into the right format; I don't really know. Or there could be a better way to do it altogether.
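Written out as plain scalar C, this is my reading of what that packed sequence computes per channel - the instruction mapping in the comments is a guess from the mnemonics, not tested DSP code:

```c
#include <stdint.h>

/* The packed sequence above would compute, per byte:
 *   SUB4:   diff = src - dst          (note: the packed subtract wraps mod
 *                                      256, so src < dst would need handling)
 *   MPYU4:  prod = diff * alpha       (unsigned 8x8 -> 16-bit multiplies)
 *   PACKH4: hi   = prod >> 8          (keep the high byte of each product)
 *   ADD4:   out  = dst + hi
 * The scalar version below is the same arithmetic, kept unsigned so the
 * shift is well-defined; >>8 is the usual cheap stand-in for /255. */
static inline uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    unsigned mixed = (unsigned)src * alpha + (unsigned)dst * (256u - alpha);
    return (uint8_t)(mixed >> 8);   /* == dst + (((src - dst) * alpha) >> 8) */
}
```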
It'd be an interesting comparison to find out which is really the best, I guess.