@Letalis Sonus: Sounds like every major player has its own API then: Intel=VA-API, AMD=XvBA, NVidia=VDPAU, Imagination=also VA-API?, TI/OMAP=? Anyway, I don't have much hope for proper video acceleration on OMAP3. It hasn't happened in 5 years, so why should it now? Hopefully this will be different with OMAP5, although from a HW point of view the video accelerator looks similar to the one in OMAP3. It also contains the iLF and iME modules and adds iPE (intra prediction), MC3 (motion compensation), CALC3 ((I)DFT?), and ECD3 (whatever that is). Maybe there will be software available for that next time..
@eumnehS: ekianjo is probably right, there really is just a handful of people who bother to optimize for this particular platform, and a DSP compo would not make much sense, especially if this handful of devs were to compete on one particular subject, like "fastest scaler" or "fastest blitter". What does make sense, IMHO, is to use these limited dev resources to create different kinds of DSP 'libs', accompanied by small and easy-to-use GPP wrapper libraries so other devs can easily use them. Ideally these libs should also be available as portable C versions so development can (for the most part) still be done on a PC (this was already suggested much earlier in this thread, but back then there were other issues that had to be dealt with first).
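To make that a bit more concrete, here's a minimal sketch of what such a GPP wrapper could look like. All names (fxlib_*, TARGET_DSP) are made up for illustration; the point is simply that the same prototype is used on the Pandora and on a PC, with a plain C fallback on the latter:

```c
#include <stdint.h>

/* hypothetical DSP call helper, provided by the DSP side of the lib */
#ifdef TARGET_DSP
extern int fxlib_dsp_scale2x(const uint16_t *src, uint16_t *dst, int w, int h);
#endif

/* public GPP-side entry point -- same prototype on Pandora and PC */
int fxlib_scale2x(const uint16_t *src, uint16_t *dst, int w, int h)
{
#ifdef TARGET_DSP
    /* forward the call to the DSP component and wait for completion */
    return fxlib_dsp_scale2x(src, dst, w, h);
#else
    /* portable C reference version so the lib can be developed/tested
       on a PC (nearest-neighbour placeholder, not the real algorithm) */
    for(int y = 0; y < h; y++)
        for(int x = 0; x < w; x++)
        {
            uint16_t p = src[y * w + x];
            dst[(2*y    ) * (2*w) + (2*x    )] = p;
            dst[(2*y    ) * (2*w) + (2*x + 1)] = p;
            dst[(2*y + 1) * (2*w) + (2*x    )] = p;
            dst[(2*y + 1) * (2*w) + (2*x + 1)] = p;
        }
    return 0;
#endif
}
```

Apps would only ever link against the wrapper, so whether the work actually runs on the DSP stays an implementation detail of the lib.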
Like M-HT already indicated, there are different kinds of devs. Some like to spend hours optimizing tiny fragments of code to make them as efficient as possible, some like to create libraries and 'frameworks' just because that can be fun, too. Some are interested in looking at and porting existing code (much to learn, instant user gratification in the case of apps), some like to write new software by using building blocks created by other devs, and so on.
As I mentioned in the first paragraph, the people who bother to write new DSP code (or port/optimize existing code to/for the DSP) should invest some time to make these 'building blocks' easily accessible to other devs so the code eventually gets used in actual 'end-user' software.
For my part, I'll package my sprite engine, i.e. create a GPP wrapper library for it. Maybe it would make sense to merge this with M-HT's scaler efforts? Some of the code in the GPP wrapper will be common to both 'engines', so that would avoid duplicate effort.
@M-HT: That sprite engine uses a fairly generic command-list interface. From a design point of view, I intended it to serve as a simple interface to the DSP for a collection of 'various' functions, rather than a graphics-framework-specific interface with lots of options that, if combined, would require a lot of different render loops, one for each possible feature combination. It should be fairly easy to integrate your scaler functions. If you don't object, could you please point me to the latest version of your scalers so I can give that a try?
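Just to illustrate the command-list idea (the names below are made up, not the actual interface): the GPP fills a flat buffer with (opcode, args) records and the DSP walks it in one dispatch loop, so adding e.g. a scaler just means adding another opcode:

```c
#include <stdint.h>

/* illustrative opcodes -- not the real sprite engine interface */
enum {
    CMD_END = 0,       /* end of command list                         */
    CMD_FILL_RECT,     /* args: x, y, w, h, colour                    */
    CMD_BLIT_SPRITE,   /* args: sprite offset, x, y                   */
    CMD_SCALE2X        /* args: src offset, dst offset, w, h (scaler) */
};

typedef struct {
    uint32_t op;
    uint32_t arg[5];
} cmd_t;

/* DSP-side dispatch loop (plain C) */
void run_cmdlist(const cmd_t *c)
{
    for(;;)
    {
        switch(c->op)
        {
            default:
            case CMD_END:
                return;
            case CMD_FILL_RECT:
                /* fill_rect(c->arg[0], c->arg[1], c->arg[2], c->arg[3], c->arg[4]); */
                break;
            case CMD_BLIT_SPRITE:
                /* blit_sprite(c->arg[0], c->arg[1], c->arg[2]); */
                break;
            case CMD_SCALE2X:
                /* scale2x(c->arg[0], c->arg[1], c->arg[2], c->arg[3]); */
                break;
        }
        c++;
    }
}
```

Integrating your scalers would then mostly be a matter of adding an opcode plus the corresponding case, and a small GPP-side helper that appends the record to the list.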
Regarding OPL emulation: Are you really sure that your performance measurements are correct? Even when overclocked to 800 MHz, 1% CPU usage would mean 8,000,000 cycles per second. OPL3 has up to 18 channels, and even when used in 3*2-op + 6*4-op + 5 drum channels (6 ops) mode, that's still 14 channels / 36 FM operators that need to be emulated. At a sample rate of 48 kHz (OPL3 actually uses ~49.7 kHz), that only leaves 8,000,000 / (48,000 * 36) = 4.63 cycles per operator per sample. That doesn't sound right to me (and that's still without envelopes/LFOs).
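A quick sanity check of that number (assuming 48 kHz output and all 36 operators active):

```c
#include <stdio.h>

int main(void)
{
    const double cycles_per_sec  = 800e6 * 0.01;     /* 1% of 800 MHz          */
    const double updates_per_sec = 36.0 * 48000.0;   /* operators * sample rate */

    /* prints ~4.63 cycles per operator update */
    printf("%.2f\n", cycles_per_sec / updates_per_sec);
    return 0;
}
```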
I still think that OPL emulation would be a nice task for the DSP and since in that DOS-emu scenario the DSP wouldn't be used for anything else, free DSP cycles could be used to improve audio quality and/or add some ear candy (e.g. reverb, eqs, ..).
@ekianjo: besides my comments regarding the DSP compo (see above), I just wanted to say that you really shouldn't compare programming the DSP to rocket science.
It's far easier/less work than GLES2 / shader programming, for example. As far as my big plans for the DSP are concerned: I don't really have any, either. However, I will use the DSP, but I'll treat it as a special-purpose coprocessor and will mostly run plain C code on it. E.g. for 2D graphics it's faster than the GPU (or the GPP) and also more flexible.
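To give an idea of the "plain C on the DSP" approach (a generic example, not my actual code): even something as simple as a 16bpp rectangle fill is already useful there, and the C64x+ compiler software-pipelines this kind of inner loop quite well without any hand-written assembly:

```c
#include <stdint.h>

/* simple 16bpp rectangle fill; 'pitch' is in pixels, not bytes */
void fill_rect_16bpp(uint16_t *fb, int pitch,
                     int x, int y, int w, int h, uint16_t col)
{
    uint16_t *row = fb + y * pitch + x;
    for(int j = 0; j < h; j++)
    {
        for(int i = 0; i < w; i++)
            row[i] = col;
        row += pitch;
    }
}
```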
@slaeshjag: Then you have little imagination
But really, I can think of tons of software that could be written for the DSP but which I won't write (not for any other processor, either) since I'd rather spend my time doing something else. Spare time is limited, after all. Besides, if I had known beforehand how much time this whole DSP driver task was going to take in the end, I might not have started it at all. I didn't want to leave it unfinished, though.
@rohezal: Yep, that's what we need. Except that the code will look a bit different (you can't simply pass pointers to paged memory to the DSP).
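Roughly what that means in practice (all function names below are made up for illustration, not the actual driver API): buffers handed to the DSP have to come from physically contiguous shared memory that both sides can address, and the DSP gets physical addresses, not GPP virtual pointers:

```c
#include <stdint.h>
#include <stddef.h>

/* hypothetical shared-memory allocator / RPC, provided by the DSP driver;
   dsp_shared_free() is assumed to accept NULL */
extern void *dsp_shared_alloc(size_t num_bytes, uint32_t *ret_phys_addr);
extern void  dsp_shared_free (void *virt_addr);
extern int   dsp_call        (uint32_t func_id, uint32_t phys_src, uint32_t phys_dst);

int scale_on_dsp(const uint16_t *src, int w, int h)
{
    uint32_t  phys_src, phys_dst;
    uint16_t *shm_src = dsp_shared_alloc((size_t)w * h * 2,     &phys_src);
    uint16_t *shm_dst = dsp_shared_alloc((size_t)w * h * 2 * 4, &phys_dst);
    int ret = -1;

    if(shm_src != NULL && shm_dst != NULL)
    {
        /* copy the source image into the shared buffer -- the DSP cannot
           follow GPP page tables, so it only ever sees phys_src/phys_dst */
        for(int i = 0; i < w * h; i++)
            shm_src[i] = src[i];

        ret = dsp_call(1u /* hypothetical scaler function id */, phys_src, phys_dst);

        /* ...read back or display shm_dst here... */
    }

    dsp_shared_free(shm_src);
    dsp_shared_free(shm_dst);
    return ret;
}
```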
@crow_riot: The hardware scaler is not a replacement for Scale2x / 2xSaI. The two can be combined, though, i.e. the DSP scalers could be used to e.g. scale 320x240 to 640x480, then the HW scaler could expand that image to fill the screen, if you don't mind the wrong aspect ratio or non-integer scaling artefacts.
As far as DSP stability / NAND issues are concerned: Notaz seemed quite confident that these issues are now resolved. The stress test on his unit ran fine for 24h. Before the fix, things went haywire rather quickly.
It would be nice if the people who previously reported these issues on their CC/Rebirth units (magic_sam, M-HT) would update their firmware and run the DSP stress test to confirm that the fix works on their Pandoras as well.
@Levi: exactly -- but thanks to the ppl who did, it eventually became stable.