Running scalers on DSP


If you are scaling at 30 fps, scaling on the DSP would give you a 100 * 6.6/33.33 percent performance boost for the ARM core, ignoring overhead. That's roughly 20 percent of real time per frame freed up for the core.


So, are there any emulators running 25-30 fps that currently require more than 80 percent CPU and would benefit from 2xSAI?
 
That's quite an improvement.


Did you try to compare the scalers when the system is under load?


I mean, would it be faster to do the scaling on the dsp while the arm runs an emulator or is it still slower than doing it both on the arm?
I tried the DSP scaler while running the ARM/NEON scaler in another process. When running the ARM scaler, the speed was the same; when running the NEON scaler, the DSP scaler took 0.1 ms longer. It seems that the DSP is mostly unaffected by the CPU.

If you are scaling at 30 fps, scaling on the DSP would give you a 100 * 6.6/33.33 percent performance boost for the ARM core, ignoring overhead. That's roughly 20 percent of real time per frame freed up for the core.


So, are there any emulators running 25-30 fps that currently require more than 80 percent CPU and would benefit from 2xSAI?
Since you can't ignore the overhead, that makes it only about 12 percent more time for the emulator when using the DSP scaler instead of using the ARM scaler.
 
Yes, but it's still 5 percent less time for the emulator when using the DSP 2xSaI scaler instead of NEON scale2x scaler (at 30fps).
 
Just for the record, I am certainly following this thread with interest. I used to do a lot of PS2 vector unit programming (the vector units being capable of executing their own code in parallel with the main core) and I found it to be almost addictive. I went through a number of variations while experimenting with how to get data into/out of the vector units; certainly one of the popular (decent performance) options was using a quad buffer, where the idea was (assuming buffers #0, #1, #2, #3 are each equal chunks of memory accessible to the vector unit, and below, for a single step number, the letters execute in parallel, i.e. 3a, 3b and 3c run at once):


Start up


1. Upload data to be processed to buffer #0


2a. Process data in buffer #0, putting results into buffer #1


2b. Upload data to be processed into buffer #2


Main loop


3a. Download processed data from buffer #1 into main memory


3b. Upload new data into buffer #0


3c. Process data in buffer #2, putting results into buffer #3


4a. Download processed data from buffer #3 into main memory


4b. Upload new data into buffer #2


4c. Process data in buffer #0, putting results into buffer #1


On the PS2, you could transfer data into/out of the units via the DMA controller, which didn't tie up the CPU (except for memory contention), and with the above setup the vector unit would constantly be processing (you could transfer data quicker than it could be processed). There is also a variation for the second vector unit (VU1), as that has a direct path to the graphics interface, so instead of downloading processed data to main memory, you could send processed data directly to be drawn.
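
To make the schedule easier to follow, here is a minimal C sketch of that quad-buffer pipeline. The upload_async/download_async/process/wait_transfers helpers are placeholders made up for illustration (on the PS2 they would map to DMA transfers and VU kick-offs), not real API calls:

```c
/* Minimal sketch of the quad-buffer pipeline described above.
 * The helpers below are hypothetical placeholders, not a real PS2 (or DSP) API;
 * only the ordering of the steps matters here. */

enum { BUF0, BUF1, BUF2, BUF3 };

void upload_async(int buf, const void *src, int chunk);   /* main memory -> local buffer */
void download_async(void *dst, int buf, int chunk);       /* local buffer -> main memory */
void process(int in_buf, int out_buf);                    /* crunch one chunk            */
void wait_transfers(void);                                /* wait for pending transfers  */

void pipeline(const void *src, void *dst, int num_chunks)
{
    /* start up */
    upload_async(BUF0, src, 0);              /* 1.  data for chunk 0 into #0 */
    wait_transfers();
    process(BUF0, BUF1);                     /* 2a. #0 -> #1                 */
    upload_async(BUF2, src, 1);              /* 2b. data for chunk 1 into #2 */

    /* main loop: download, upload and processing run in parallel */
    for (int i = 2; i < num_chunks; i++) {
        wait_transfers();
        if ((i & 1) == 0) {
            download_async(dst, BUF1, i - 2);    /* 3a. results from #1 to main memory */
            upload_async(BUF0, src, i);          /* 3b. new data into #0               */
            process(BUF2, BUF3);                 /* 3c. #2 -> #3                       */
        } else {
            download_async(dst, BUF3, i - 2);    /* 4a. results from #3 to main memory */
            upload_async(BUF2, src, i);          /* 4b. new data into #2               */
            process(BUF0, BUF1);                 /* 4c. #0 -> #1                       */
        }
    }
    /* (the last two processed chunks still have to be downloaded here) */
}
```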


It sounds like this may all be mostly uninteresting for the DSP, as the DSP has access to shared memory, but I thought I'd just put it out there, in case it is either of help, or opens up different ideas.
 
Wow, very cool research! Hope we can eventually pick some fruits from this :D
 
Some news:


I reduced the overhead by about 1.1 ms. The framework was invalidating the cache on the output buffer both when sending and when receiving messages to/from the DSP. It's not necessary when sending the message, so I commented it out.


I tried to implement and optimize the SuperEagle and Super2xSaI scalers, but they were too slow for 30 fps.


I implemented the Eagle2x scaler, changed the Scale2x scaler and put it together with 2xSaI scaler in case someone wants to test it: dsp-scalers.tar.bz2


Current times of the scalers are:


Scale2x: 10.8 ms


Eagle2x: 10.7 ms


2xSaI: 22.4 ms (24.7 ms on the worst possible source image)
 
I reduced the overhead by about 1.1 ms. The framework was invalidating the cache on the output buffer both when sending and when receiving messages to/from the DSP. It's not necessary when sending the message, so I commented it out.
Interesting. Could it be that the remaining time is also dominated by invalidation? Perhaps it's worth checking how much time the invalidation takes?


At least for ARM9, the kernel's invalidation function is (was?) rather poor: it walks the whole virtual address range of the supplied buffer in cacheline-sized steps and does an mcr on every step, and each mcr takes hundreds of cycles (if not more). I was hit by this while doing the Wiz/Caanoo gpSP port; the solution was simply to invalidate everything instead (the old GP2X had it like that too). This has the downside of throwing good data out of the cache, but it was still way faster than walking all the buffers. (I also remember reading that invalidate-all is dangerous, but I can't remember the details now.)
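
For reference, a rough sketch of the two approaches on an ARM926 (privileged code only; the 32-byte line size and the loop itself are assumptions for illustration, not the actual kernel routine):

```c
/* Sketch only -- not the actual kernel code. ARM926EJ-S CP15 cache ops,
 * usable from privileged (kernel) mode only; 32-byte cache lines assumed. */

#define CACHE_LINE_SIZE 32

/* per-line invalidation: one mcr per cache line of the buffer */
static void dcache_inv_range(unsigned long start, unsigned long end)
{
    unsigned long addr;
    for (addr = start & ~(CACHE_LINE_SIZE - 1UL); addr < end; addr += CACHE_LINE_SIZE)
        __asm__ volatile("mcr p15, 0, %0, c7, c6, 1" : : "r"(addr)); /* invalidate D-cache line by MVA */
}

/* invalidate-all: a single operation, but any dirty (not yet written back) line is lost */
static void dcache_inv_all(void)
{
    __asm__ volatile("mcr p15, 0, %0, c7, c6, 0" : : "r"(0));        /* invalidate entire D-cache */
}
```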
 
I tried commenting out all cache invalidating/flushing and the resulting overhead was about 0.7 ms - that's just message send and receive.
 
At least for ARM9, kernel's invalidation function is (was?) rather poor, it's walking all virtual address range of supplied buffer in cacheline-sized steps and doing mcr on every step, and each mcr takes hundreds of cycles (if not more).
I think this is the only way to do it, because there aren't any instructions to invalidate a range of memory (just a cache line).

I was hit by this while doing Wiz/Caanoo gpSP port; the solution was simply to invalidate everything instead (the old GP2X had it like that too). This has the downside of throwing good data out of cache, but was still way faster than walking all buffers. (I also remember reading that invalidate-all is dangerous, but can't remember the details now).
It's probably dangerous because when you invalidate everything, you lose any data that's written in the cache but not yet in memory.
 
I did some new tests using the c64 library by bsp.

Test details:

scaled images: 320x240 16-bit colors

source buffer allocated using dsp_shm_alloc with parameter DSP_CACHE_W

destination buffer allocated using dsp_shm_alloc with parameter DSP_CACHE_NONE

Results for scale2x:

using the algorithm from the previous tests:

4.42 ms (4.37 ms using fastcall)

using DMA to transfer data from the source buffer to L1DSRAM and scaling from L1DSRAM to the destination buffer (the DMA transfer runs in parallel with the scaling):

3.82 ms (3.69 ms using fastcall)

scaling from the source buffer to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfer runs in parallel with the scaling):

1.67 ms (1.61 ms using fastcall)

using DMA to transfer data from the source buffer to L1DSRAM, scaling from L1DSRAM to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfers run in parallel with the scaling):

1.02 ms (0.88 ms using fastcall)

Interesting results, since my NEON scaler takes 1.74 ms (source and destination buffers were allocated using malloc).

I also tested the 2xSaI scaler, with the following results:

using the algorithm from the previous tests:

5.33 ms (using standard call)

using DMA to transfer data from the source buffer to L1DSRAM and scaling from L1DSRAM to the destination buffer (the DMA transfer runs in parallel with the scaling):

6.16 ms (using standard call)

scaling from the source buffer to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfer runs in parallel with the scaling):

5.86 ms (using standard call)

using DMA to transfer data from the source buffer to L1DSRAM, scaling from L1DSRAM to L1DSRAM and using DMA to transfer data to the destination buffer (the DMA transfers run in parallel with the scaling):

5.17 ms (using standard call)

Much faster than in the previous tests and slightly faster than the ARM version, but it would still be better to do some processing on the ARM in parallel.
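
For anyone curious what the fastest variant (DMA in, scale in L1DSRAM, DMA out) looks like structurally, here is a rough DSP-side sketch. The dma_copy_async()/dma_wait() helpers, the strip size and scale2x_strip() are made-up placeholders for illustration, not the actual libc64_dsp API, and border handling between strips is omitted:

```c
/* Sketch of the "DMA in -> scale in L1DSRAM -> DMA out" pipeline.
 * dma_copy_async(), dma_wait() and scale2x_strip() are placeholders,
 * NOT the real libc64_dsp functions. 320x240 16-bit source assumed. */

typedef unsigned short u16;

#define SRC_W   320
#define STRIP_H 4                        /* source lines per chunk (arbitrary) */

void dma_copy_async(void *dst, const void *src, unsigned int bytes); /* queue an EDMA copy     */
void dma_wait(void);                                                 /* wait for queued copies */
void scale2x_strip(u16 *dst, const u16 *src, int w, int h);          /* scale one strip 2x     */

/* ping-pong strips kept in L1DSRAM (placement via linker section in reality) */
static u16 in_strip [2][STRIP_H * SRC_W];
static u16 out_strip[2][STRIP_H * SRC_W * 4];    /* 2x in both dimensions */

void scale_frame(const u16 *src, u16 *dst, int src_h)
{
    int num_strips = src_h / STRIP_H;
    int cur = 0;

    dma_copy_async(in_strip[0], src, sizeof(in_strip[0]));          /* prefetch strip 0 */

    for (int i = 0; i < num_strips; i++) {
        int nxt = cur ^ 1;

        dma_wait();                                    /* strip i has arrived, previous output is out */
        if (i + 1 < num_strips)                        /* start fetching strip i+1 while we scale */
            dma_copy_async(in_strip[nxt], src + (i + 1) * STRIP_H * SRC_W,
                           sizeof(in_strip[nxt]));

        scale2x_strip(out_strip[cur], in_strip[cur], SRC_W, STRIP_H);   /* SRAM -> SRAM */

        dma_copy_async(dst + i * STRIP_H * SRC_W * 4, out_strip[cur],   /* results back to RAM */
                       sizeof(out_strip[cur]));
        cur = nxt;
    }
    dma_wait();                                        /* let the last write-back finish */
}
```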
 
heh, I totally missed your post. Very nice work!

So in the best case this means 0.88ms via DSP vs. 1.74ms via ARM/NEON, right?

If you now consider that the DSP runs at a (much) lower clock rate, and also take the power consumption into account, which is a lot less than ARM/NEON, possibly ~1W DSP vs. ~2W ARM, at least judging from my own tests, these results are even more impressive ;)
 
The results are impressive, although depending on how you use the DSP, you may need to copy data to/from shared memory or invalidate/writeback caches, which would increase the time.

But I'm a bit baffled by the 2xSaI results. First there's the large difference in time between these tests and the previous tests (using dspbridge). And then the tests with DMA transfers are slower than without them. I'll have to look at it sometime.

___

I want to test scale2x when the source has 256 colors (8-bit palette) to see how it performs compared to arm/neon. I also want to test hq2x, but I don't think I can get it to run at usable speeds.
 
What I find a bit surprising about the 2xSaI results is that the version that scales from memory to memory is only a tiny bit slower (5.33ms) than the version that does the scaling in L1DSRAM and uses DMAs to stream the data from/to RAM (5.17ms).

Are you sure that the code is 100% correct? (e.g. accidentally using L3 view SRAM addresses will slow things down a lot)

The performance difference to the DspBridge version (23.53ms) is really quite huge. Even after subtracting the overhead (3.43ms), your new version (exactly the same code?) is almost 4 times faster. That surely cannot be explained by DSP clock rate differences.

Maybe you used different compiler optimizations in your previous test?

For complex algorithms (in terms of register usage), even small code changes can make huge differences. I noticed that when working on a DSP planar-to-chunky routine -- one tiny, inconspicuous change changed the throughput by almost factor 2.5.
 
I checked my tests with dspbridge and I found out why that version of 2xSaI was so slow. It was a really stupid error.

When I was optimizing the algorithms I was compiling individual source files to generate .asm files so I could see the generated assembler and software pipelining information. Here I was using parameter -O3 when compiling.

In earlier benchmarking (Scale2x) I was also compiling with parameter -O3.

But in later benchmarks (2xSaI, ...) I was compiling without the parameter, which means the compiler wasn't using any optimizations. Really stupid of me.

Regarding the small time difference in 2xSaI between using DMA and not:

The 2xSaI inner loop is much longer than the Scale2x inner loop, and it contains branches, so it's not software pipelined. The Scale2x inner loop is software pipelined and uses the hardware SPLOOP.

Also, the DMA transfers have the highest bus priority, while CPU bus accesses have the lowest.

I don't know how this affects the result. Maybe it would help if the DMA transfers had lower priority than the CPU.
 
By CPU you mean DSP, right? At least in the last version ("scaling from L1DSRAM to L1DSRAM"), the DMA priority shouldn't matter since the RAM is only accessed by the DMA, not the DSP.

If you want to try different DMA priorities, take a look at libc64_dsp/src/dma.c, around line 133. Changing the read rate (a few lines down) might also be interesting in case you worry about bus cycles.

Earlier this evening I had a quick look at (one of the) 2xSAI implementations. It really contains a lot of branches, so memory throughput is most probably not the bottleneck of that algorithm. The memory bandwidth used by the L1DSRAM-to-L1DSRAM version of your scaler is ~29.7 MB/sec for the reads plus ~118.8 MB/sec for the writes. (The highest write throughput for the DMA I measured so far is ~993 MB/sec (SRAM to mem. copy), resp. ~530 MB/sec (copy from RAM to SRAM via DMA, process in SRAM via DSP, then copy from SRAM back to RAM via DMA) -- but this was with a very DSP-friendly, SP-looped routine.)
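
Those two figures follow directly from the frame sizes and the 5.17 ms per frame; a quick sanity check (assuming a 320x240 16-bit source and a 640x480 16-bit destination):

```c
/* Sanity check of the bandwidth figures quoted above. */
#include <stdio.h>

int main(void)
{
    const double frame_time  = 5.17e-3;          /* seconds per scaled frame        */
    const double read_bytes  = 320.0 * 240 * 2;  /* 153,600 bytes read per frame    */
    const double write_bytes = 640.0 * 480 * 2;  /* 614,400 bytes written per frame */

    printf("reads : %.1f MB/sec\n", read_bytes  / frame_time / 1e6);  /* ~29.7  */
    printf("writes: %.1f MB/sec\n", write_bytes / frame_time / 1e6);  /* ~118.8 */
    return 0;
}
```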

In post #19 you said that the ARM version, which interestingly was faster than the NEON version, takes 6.6ms, so 5.17ms for the DSP version is still quite respectable, especially because of all the conditionals / branches.
 