Running scalers on DSP


By CPU you mean DSP, right?
Yes.
At least in the last version ("scaling from L1DSRAM to L1DSRAM"), the DMA priority shouldn't matter since the RAM is only accessed by the DMA, not the DSP.

If you want to try different DMA priorities, take a look at libc64_dsp/src/dma.c, around line 133. Changing the read rate (a few lines down) might also be interesting in case you worry about bus cycles.
I'll try changing the priority and the read rate to see if it makes any difference.
Earlier this evening I had a quick look at (one of the) 2xSaI implementations. It really contains a lot of branches, so memory throughput is most probably not the bottleneck of that algorithm. The memory bandwidth used by the L1DSRAM-to-L1DSRAM version of your scaler is ~29.7 MB/sec for the reads plus ~118.8 MB/sec for the writes (which works out to a 320×240 16-bit source and a 640×480 16-bit destination every ~5.17 ms). The highest write throughput for the DMA I measured so far is ~993 MB/sec (SRAM to memory copy), resp. ~530 MB/sec (copy from RAM to SRAM via DMA, process in SRAM via DSP, then copy from SRAM back to RAM via DMA), but that was with a very DSP-friendly, SP-looped routine.
Memory throughput may not be the bottleneck, but L1DSRAM accesses are faster than shared memory accesses. In Scale2x the difference is 3 ms, so I think the difference in 2xSaI should be even bigger.
In post #19 you said that the ARM version, which interestingly was faster than the NEON version, takes 6.6 ms, so 5.17 ms for the DSP version is still quite respectable, especially because of all the conditionals / branches.
The algorithm's speed depends on the source image (due to the branches), so the times may differ for other source images. In the DSP version I tried removing the branches (only one if remained in the innermost loop), which is one of the reasons for its speed.

Here's the innermost loop, if you're interested:

Code:
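/* Two pixels are processed per iteration: each 32-bit variable holds a
   pair of 16-bit pixels.  The window follows Kreed's 2xSaI naming,

       I E F J    <- src1
       G A B K    <- src2
       H C D L    <- src3
       M N O .    <- src4 (P isn't needed here)

   The loads below shift the window right by two pixels, reusing the
   previous iteration's values; _packlh2() builds the pairs that straddle
   two loads.  dst1 receives A interleaved with the pixel computed to its
   right, dst2 the pixel below A interleaved with the diagonal one. */
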
colorI = colorF;
colorE = colorJ;
colorJ = _amem4_const(src1);
src1+=2;
colorF = _packlh2(colorJ, colorE);

colorG = colorB;
colorA = colorK;
colorK = _amem4_const(src2);
src2+=2;
colorB = _packlh2(colorK, colorA);

colorH = colorD;
colorC = colorL;
colorL = _amem4_const(src3);
src3+=2;
colorD = _packlh2(colorL, colorC);

colorM = colorO;
colorO = _mem4_const(src4);
src4+=2;
colorN = _packlh2(colorO, colorM);


Mask1 = _xpnd2( ( _cmpeq2(colorA, colorD) & (~_cmpeq2(colorB, colorC)) &   _cmpeq2(colorA, colorE)  & _cmpeq2(colorB, colorL) ) |
                ( _cmpeq2(colorA, colorC) &   _cmpeq2(colorA, colorF)  & (~_cmpeq2(colorB, colorE)) & _cmpeq2(colorB, colorJ) ) );

Mask2 = _xpnd2( ( _cmpeq2(colorB, colorC) & (~_cmpeq2(colorA, colorD)) &   _cmpeq2(colorB, colorF)  & _cmpeq2(colorA, colorH) ) |
                ( _cmpeq2(colorB, colorE) &   _cmpeq2(colorB, colorD)  & (~_cmpeq2(colorA, colorF)) & _cmpeq2(colorA, colorI) ) );

product = (colorA & Mask1) | (colorB & Mask2) | (INTERPOLATE(colorA, colorB) & ~(Mask1 | Mask2));

_amem8(dst1) = _dpack2(product, colorA);
dst1+=4;


Mask1 = _xpnd2( ( _cmpeq2(colorA, colorD) & (~_cmpeq2(colorB, colorC)) &   _cmpeq2(colorA, colorG)  & _cmpeq2(colorC, colorO) ) |
                ( _cmpeq2(colorA, colorB) &   _cmpeq2(colorA, colorH)  & (~_cmpeq2(colorG, colorC)) & _cmpeq2(colorC, colorM) ) );

Mask2 = _xpnd2( ( _cmpeq2(colorB, colorC) & (~_cmpeq2(colorA, colorD)) &   _cmpeq2(colorC, colorH)  & _cmpeq2(colorA, colorF) ) |
                ( _cmpeq2(colorC, colorG) &   _cmpeq2(colorC, colorD)  & (~_cmpeq2(colorA, colorH)) & _cmpeq2(colorA, colorI) ) );

product1 = (colorA & Mask1) | (colorC & Mask2) | (INTERPOLATE(colorA, colorC) & ~(Mask1 | Mask2));


Mask1 = _xpnd2( ( _cmpeq2(colorA, colorD) & (~_cmpeq2(colorB, colorC)) ) |
                ( _cmpeq2(colorA, colorD) &   _cmpeq2(colorB, colorC)  & _cmpeq2(colorA, colorB) ) );

Mask2 = _xpnd2( ( _cmpeq2(colorB, colorC) & (~_cmpeq2(colorA, colorD)) ) |
                ( _cmpeq2(colorB, colorC) &   _cmpeq2(colorA, colorD)  & _cmpeq2(colorA, colorB) ) );


Mask3 = _xpnd2( _cmpeq2(colorA, colorD) & _cmpeq2(colorB, colorC) & (~_cmpeq2(colorA, colorB)) );
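/* Mask3 marks the ambiguous diagonal (A==D and B==C but A!=B); it is
   resolved below by tallying votes from the surrounding pixels, the
   sign of each halfword of r picking A or B */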

if ( Mask3 )
{
    r =          _xpnd2( _cmpeq2(colorA, colorG) & _cmpeq2(colorA, colorE) );
    r = _sub2(r, _xpnd2( _cmpeq2(colorB, colorG) & _cmpeq2(colorB, colorE) & (~_cmpeq2(colorA, colorG)) & (~_cmpeq2(colorA, colorE)) ) );

    r = _sub2(r, _xpnd2( _cmpeq2(colorB, colorK) & _cmpeq2(colorB, colorF) ) );
    r = _add2(r, _xpnd2( _cmpeq2(colorA, colorK) & _cmpeq2(colorA, colorF) & (~_cmpeq2(colorB, colorK)) & (~_cmpeq2(colorB, colorF)) ) );

    r = _sub2(r, _xpnd2( _cmpeq2(colorB, colorH) & _cmpeq2(colorB, colorN) ) );
    r = _add2(r, _xpnd2( _cmpeq2(colorA, colorH) & _cmpeq2(colorA, colorN) & (~_cmpeq2(colorB, colorH)) & (~_cmpeq2(colorB, colorN)) ) );

    r = _add2(r, _xpnd2( _cmpeq2(colorA, colorL) & _cmpeq2(colorA, colorO) ) );
    r = _sub2(r, _xpnd2( _cmpeq2(colorB, colorL) & _cmpeq2(colorB, colorO) & (~_cmpeq2(colorA, colorL)) & (~_cmpeq2(colorA, colorO)) ) );

    Mask1 |= _xpnd2( _cmpgt2(r, 0) ) & Mask3;
    Mask2 |= _xpnd2( _cmpgt2(0, r) ) & Mask3;
}

product2 = (colorA & Mask1) | (colorB & Mask2) | (Q_INTERPOLATE(colorA, colorB, colorC, colorD) & ~(Mask1 | Mask2));

_amem8(dst2) = _dpack2(product2, product1);
dst2+=4;
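
The INTERPOLATE / Q_INTERPOLATE macros aren't shown here; the actual definitions are in the source, but Kreed's usual RGB565 versions (which work on two packed pixels at once unchanged) look like this:

Code:
#define colorMask      0xF7DEF7DEu  /* drops the lsb of each 5/6/5 channel */
#define lowPixelMask   0x08210821u  /* the dropped lsbs */
#define qcolorMask     0xE79CE79Cu  /* drops the two lsbs of each channel */
#define qlowpixelMask  0x18631863u

/* average of two pixel pairs */
#define INTERPOLATE(A, B) \
    ((((A) & colorMask) >> 1) + (((B) & colorMask) >> 1) + \
     ((A) & (B) & lowPixelMask))

/* average of four pixel pairs */
#define Q_INTERPOLATE(A, B, C, D) \
    ((((A) & qcolorMask) >> 2) + (((B) & qcolorMask) >> 2) + \
     (((C) & qcolorMask) >> 2) + (((D) & qcolorMask) >> 2) + \
     (((((A) & qlowpixelMask) + ((B) & qlowpixelMask) + \
        ((C) & qlowpixelMask) + ((D) & qlowpixelMask)) >> 2) & qlowpixelMask))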
 
I thought of another thing I want to test.

When I'm using both DMA transfers (from the source buffer and to the destination buffer), I'm running both transfers in parallel. But if I understand things correctly, it should be possible to link the transfers, so that I can start the first transfer and when it finishes it automatically starts the second transfer.
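
In EDMA3 terms that would be PaRAM linking - roughly the following, with field names from TI's EDMA3 documentation (c64_tools wraps this in its own structures, so treat it as a conceptual sketch):

Code:
#include <stdint.h>

/* one EDMA3 PaRAM set -- 8 words of 32 bits */
typedef struct {
    uint32_t opt;          /* transfer options */
    uint32_t src;          /* source address */
    uint32_t a_b_cnt;      /* ACNT (low 16) | BCNT << 16 */
    uint32_t dst;          /* destination address */
    uint32_t src_dst_bidx; /* SRCBIDX (low 16) | DSTBIDX << 16 */
    uint32_t link_bcntrld; /* LINK (low 16): PaRAM offset of the next set,
                              0xFFFF = no link */
    uint32_t src_dst_cidx;
    uint32_t ccnt;         /* CCNT -- by default also the QDMA trigger word */
} edma3_param;

/* Linking: transfer 1's LINK field points at a spare PaRAM set that
   describes transfer 2.  On completion the controller reloads the
   channel's PaRAM from the linked set, and on a QDMA channel the reload
   rewrites the trigger word, which starts transfer 2 without any DSP
   involvement. */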

Have you tried this, or should I read the sources to see how to do it?
 
I found the examples of linking DMA transfers in your DMA tests.

You have a bug there - you're setting up the DMA transfers on QDMA_1, but you're waiting for the DMA transfers on QDMA_0.

___

The results with linked DMA transfers:

Scale2x:

1.10 ms (using standard call)

2xSaI:

6.05 ms (using standard call)

I also tested Scale2x with the source having 256 colors (8-bit palette).

scaling from the source buffer to the destination buffer (no DMA transfers):

4.25 ms (using standard call)

using DMA to transfer data from the source buffer to L1DSRAM and scaling from L1DSRAM to the destination buffer (the DMA transfer runs in parallel with the scaling):

3.98 ms (using standard call)

scaling from the source buffer to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfer runs in parallel with the scaling):

1.60 ms (using standard call)

using DMA to transfer data from the source buffer to L1DSRAM, scaling from L1DSRAM to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfers run in parallel with the scaling) - using parallel DMA transfers:

1.92 ms (using standard call)

using DMA to transfer data from the source buffer to L1DSRAM, scaling from L1DSRAM to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfers run in parallel with the scaling) - using linked DMA transfers:

1.20 ms (1.05 ms using fastcall)

and for comparison my arm/neon scaler:

2.01 ms
 
This is at what clock speed, the CC's default 430 MHz? And what was the ARM clock speed? Also, do you remember the ARM's clock for the 6.6 ms 2xSaI figure from 2 pages back?
 
Nice work!


Is there any risk, in the case where you're parallelising your DMA transfers - say you're doing a scale from something small to something large at a non-integer ratio - that the DMA back to the buffer could start catching up with the scaling code and end up overtaking it?
 
Is it possible to use this scaler while the CPU is busy with something else? Like using these scalers for DraStic? When the image is rendered, give the address to the DSP, let it handle the scaling and start working on the next frame. As far as I can guess, the next frame wouldn't be ready within the 6 ms the DSP needs anyway.
 
@M-HT: Thanks for reporting the typo. I didn't notice since the test only transfers a few bytes so there's probably not much to wait for.

Your test results regarding linked transfers are quite interesting.

Originally I added them since I had a use case where I needed to gather more memory fragments into SRAM than there are QDMA channels available.

According to your tests, linked transfers are a lot faster than using multiple QDMA channels. I will certainly try this with my sprite engine.

@rohezal: Yes, that should be possible.
 
This is at what clock speed, the CC's default 430 MHz? And what was the ARM clock speed? Also, do you remember the ARM's clock for the 6.6 ms 2xSaI figure from 2 pages back?
I'm doing all my tests at default speeds. That should be 430 MHz for the DSP and 600 MHz for the ARM.
Nice work!

Is there any risk, in the case where you're parallelising your DMA transfers - say you're doing a scale from something small to something large at a non-integer ratio - that the DMA back to the buffer could start catching up with the scaling code and end up overtaking it?
No. Multiple independent buffers are used. For instance, you set the DMA to transfer data from the first buffer (in the background) while you are writing data to the second buffer. Then you wait for the DMA transfer to finish (if it hasn't already) and switch the buffers - meaning that you set the DMA to transfer data from the second buffer while you are writing data to the first buffer. And so on.

As for parallelizing the DMA transfers, I'm just setting up multiple DMA transfers at one time - one to transfer data from the source buffer to the temporary working source buffer, and one to transfer data from the temporary working destination buffer to the destination buffer.
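
Schematically, the input side looks like this (a sketch - dma_copy_start / dma_wait / process_row are invented stand-ins for the real QDMA calls and for the scaler's inner loop, which really keeps a few neighbouring lines around):

Code:
#include <stdint.h>

void dma_copy_start(void *dst, const void *src, uint32_t bytes); /* invented */
void dma_wait(void);                                             /* invented */
void process_row(uint16_t *dst, const uint16_t *src, uint32_t width);

/* ping-pong between two L1DSRAM line buffers: the DSP processes the line
   in `work` while the DMA fetches the next line into `fill` */
void scale_frame(uint16_t *dst, uint32_t dst_stride,
                 const uint8_t *src, uint32_t src_stride,
                 uint32_t width, uint32_t height,
                 uint16_t *buf0, uint16_t *buf1)  /* both in L1DSRAM */
{
    uint16_t *work = buf0, *fill = buf1, *tmp;
    uint32_t y;

    dma_copy_start(work, src, width * 2);            /* prefetch line 0 */
    for (y = 0; y < height; y++) {
        dma_wait();                                  /* line y is in work */
        if (y + 1 < height)                          /* fetch line y+1 ... */
            dma_copy_start(fill, src + (y + 1) * src_stride, width * 2);
        process_row(dst, work, width);               /* ... while scaling */
        dst = (uint16_t *)((uint8_t *)dst + dst_stride);
        tmp = work; work = fill; fill = tmp;         /* swap the buffers */
    }
}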

Your test results regarding linked transfers are quite interesting.

Originally I added them since I had a use case where I needed to gather more memory fragments into SRAM than there are QDMA channels available.

According to your tests, linked transfers are a lot faster than using multiple QDMA channels. I will certainly try this with my sprite engine.
The linked transfers are faster when using Scale2x on an 8-bit source. When using Scale2x on a 16-bit source, they are a bit slower than parallel transfers. And with 2xSaI they were a lot slower, but with 2xSaI it's probably better not to use the DMA transfers at all - it's less hassle and only slightly slower.
That means that every use case should be tested to see which is faster - parallel transfers or linked transfers.
 
Multiple independent buffers are used. For instance, you set the DMA to transfer data from the first buffer (in the background) while you are writing data to the second buffer. Then you wait for the DMA transfer to finish (if it hasn't already) and switch the buffers - meaning that you set the DMA to transfer data from the second buffer while you are writing data to the first buffer. And so on.


As for parallelizing the DMA transfers, I'm just setting up multiple DMA transfers at one time - one to transfer data from the source buffer to the temporary working source buffer, and one to transfer data from the temporary working destination buffer to the destination buffer.
So the DMA transfer back can't start until both the scaling and the initial DMA transfer in have completed? That's a little wasteful I guess, but a fair tradeoff in avoiding nasty defects in corner cases.
 
Forgive my ignorance - what fraction of CPU time does a typical game/emulator use for the scaler?
 
The Pandora screen uses 16 bits of color per pixel, right?
 
The default config for X on the default firmware uses 16-bit color; I don't really know why (to save on memory maybe?). The screen itself is perfectly capable of displaying 24-bit color. The easiest way to get access to full color is by using notaz' SDL to use the framebuffer directly.
 
Hi,

For those (like myself) who are not very knowledgeable about scalers, here is a link to a Wikipedia article which explains the subject quite well, IMHO.

Bye, Magic Sam
 
The default config for X on the default firmware uses 16-bit color; I don't really know why
When you scale with more bits, you have to do more work. With 16-bit you have to scale 2 bytes per pixel, with 24-bit you have to scale 3 bytes per pixel. I'm not sure what the DSP hardware is capable of (hardware mult16/mult24/mult32?), but at least the microcontrollers I know often just have hardware 16-bit multipliers, so doing 24 or 32 bit takes way longer.
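
To put numbers on it: at the Pandora's 800×480 resolution one frame is 800 × 480 × 2 = 768,000 bytes at 16 bpp versus 800 × 480 × 3 = 1,152,000 bytes at 24 bpp, so every pass moves 50% more data - and 3-byte pixels no longer fit the paired 16-bit operations (_cmpeq2 and friends) that the scalers above rely on.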
 
@bsp:

Here is the current state of my scalers - Scale2x, 2xSaI, Eagle2x.

Some notes on parameters:

width and height are in pixels

source and destination strides are in bytes

palette should be in L1DSRAM

source buffer, destination buffer and palette are aligned to 8 bytes

width >= 32

height >= 5

width, source stride and destination stride are divisible by 8

no checking is performed

no cache invalidation/writeback is done

DMA is not initialized (it should be initialized before calling the functions)
scalers_dsp.zip
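
For illustration, a call site that respects these constraints might look like this (the function name and prototype are invented just to show the parameters - the real ones are in the zip):

Code:
#include <stdint.h>

/* invented prototype -- see the headers in the zip for the real ones */
extern void dsp_scale2x_8to16(const uint8_t *src, uint32_t src_stride,
                              uint16_t *dst, uint32_t dst_stride,
                              uint32_t width, uint32_t height,
                              const uint16_t *palette);

void example(const uint8_t *src, uint16_t *dst, const uint16_t *palette)
{
    /* 320x240 8-bit source -> 640x480 16-bit destination:
       width 320 >= 32 and divisible by 8, height 240 >= 5, strides of
       320 / 1280 bytes divisible by 8; src, dst and palette 8-byte
       aligned, palette in L1DSRAM; cache writeback/invalidation and
       DMA initialization are the caller's job -- nothing is checked. */
    dsp_scale2x_8to16(src, 320, dst, 640 * 2, 320, 240, palette);
}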
 

After reworking the hq2x algorithm I have the following results (on the original Pandora):

16-bit source to 16-bit destination:

25.1 ms

8-bit source to 16-bit destination:

22.8 ms

On a GHz Pandora it might be enough for 60 fps, but on the original Pandora it's only good for 30 fps.

The source is in the attachment.

It's partially based on byuu's implementation of hq2x from here. You might want to read that first if you want to understand how my version works.
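
The core idea, roughly (diff() stands in for byuu's YUV-threshold comparison, and the exact bit order may differ from the source):

Code:
#include <stdint.h>

/* non-zero if two RGB565 pixels differ by more than the Y/U/V thresholds */
extern int diff(uint16_t a, uint16_t b);

/* w[0..8] is the 3x3 neighbourhood, w[4] the centre pixel; each set bit
   marks a neighbour that differs from the centre.  The resulting byte
   selects one of 256 precomputed blending rules for the four output
   pixels. */
static uint8_t hq2x_pattern(const uint16_t w[9])
{
    uint8_t p = 0;
    if (diff(w[4], w[0])) p |= 0x01;
    if (diff(w[4], w[1])) p |= 0x02;
    if (diff(w[4], w[2])) p |= 0x04;
    if (diff(w[4], w[3])) p |= 0x08;
    if (diff(w[4], w[5])) p |= 0x10;
    if (diff(w[4], w[6])) p |= 0x20;
    if (diff(w[4], w[7])) p |= 0x40;
    if (diff(w[4], w[8])) p |= 0x80;
    return p;
}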

hq2x.zip
 

These scalers come almost "free", since they only use bandwidth but almost no CPU time, right? If this is true, can you include them in DraStic, Exophase?

Edit: Great work M-HT.
 