DSP Corner


Would any of these be good uses for the DSP:

  • OpenAL-Soft (mixing all sources into a single PCM stream of data ready for output)
  • Decode OGG files (both decoding an entire file at once for sound effects, and streaming a file a little per frame for music)
  • Decode a PNG file
  • Decode a JPEG file
I appreciate the solution isn't as simple as porting a library and running it on the DSP, but I am wondering whether, with a bit of work, things like the above could be 'good' uses for the DSP. They certainly strike me as the sort of things lots of applications would use, and potentially some of them could be transparent to the developer.

If these ideas are all rubbish, it would be interesting to hear some better ones (ideally ones that either integrate transparently into developers' workflows, or go in with minimal effort). I love the idea of some real custom stuff running on the DSP, doing application-specific cool stuff, but realistically it will be a lot more useful if we end up with some easy-to-use, generic stuff that lots of developers can benefit from.
 
I think the key to success is easy-to-use predefined functions.
They certainly strike me as the sort of things lots of applications would use, and potentially some of them could be transparent to the developer.
On the one hand this is true ("easy-to-use predefined functions"). On the other hand, the goal here should not be to totally hide the fact that the DSP is used by some library. E.g. replacing libpng with a version that uses the DSP would be a very _bad_ idea.

Writing a graphics blitter library and using that in e.g. SDL would be a good idea. The same is probably true for a DSP accelerated audio mixer lib used in OpenAL or SDL_mixer.

I have not used OpenAL or SDL_mixer before, so I do not know what kinds of sample formats they support. E.g. it would not make sense to have float buffers on the GPP side that constantly have to be converted to and from fixed point on the DSP side.
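To illustrate the overhead: a minimal sketch (purely illustrative, not from any existing lib) of the float <-> Q15 conversions that would then have to run over every mixed buffer, every frame:

Code:
  #include <stdint.h>

  /* float [-1.0, 1.0] -> Q15 (16-bit) for the DSP, and back.
     If the GPP-side API only hands out float samples, these loops
     run for every mixed buffer and eat into whatever the DSP saves. */
  void float_to_q15(const float *src, int16_t *dst, int n)
  {
      int k;
      for (k = 0; k < n; k++) {
          float s = src[k] * 32767.0f;
          if (s >  32767.0f) s =  32767.0f;   /* clamp, avoid wrap-around */
          if (s < -32768.0f) s = -32768.0f;
          dst[k] = (int16_t)s;
      }
  }

  void q15_to_float(const int16_t *src, float *dst, int n)
  {
      int k;
      for (k = 0; k < n; k++)
          dst[k] = src[k] * (1.0f / 32768.0f);
  }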

Filtering an image (even 128x128 pixels) with an averaging kernel would be something nice to do.

Block matching (4x4 pixel blocks) would probably be nice, too.

Is median filtering possible on a DSP (it being a non-linear operation)?
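(For what it's worth: yes -- median filters are usually implemented branch-free as a small sorting network of min/max exchanges, which is exactly the kind of code a DSP likes. A sketch of the classic 19-exchange 3x3 median in plain C:)

Code:
  /* 3x3 median via the well-known 19-exchange sorting network.
     Each PIX_SORT compiles down to a compare/select (min/max),
     i.e. no data-dependent branches. */
  #define PIX_SWAP(a,b) { int t = (a); (a) = (b); (b) = t; }
  #define PIX_SORT(a,b) { if ((a) > (b)) PIX_SWAP((a), (b)) }

  int median9(int p[9])
  {
      PIX_SORT(p[1], p[2]); PIX_SORT(p[4], p[5]); PIX_SORT(p[7], p[8]);
      PIX_SORT(p[0], p[1]); PIX_SORT(p[3], p[4]); PIX_SORT(p[6], p[7]);
      PIX_SORT(p[1], p[2]); PIX_SORT(p[4], p[5]); PIX_SORT(p[7], p[8]);
      PIX_SORT(p[0], p[3]); PIX_SORT(p[5], p[8]); PIX_SORT(p[4], p[7]);
      PIX_SORT(p[3], p[6]); PIX_SORT(p[1], p[4]); PIX_SORT(p[2], p[5]);
      PIX_SORT(p[4], p[7]); PIX_SORT(p[4], p[2]); PIX_SORT(p[6], p[4]);
      PIX_SORT(p[4], p[2]);
      return p[4];
  }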
  • OpenAL-Soft (mixing all sources into a single PCM stream of data ready for output)
  • Decode OGG files (both decoding an entire file at once for sound effects, and streaming a file a little per frame for music)
  • Decode a PNG file
  • Decode a JPEG file
If the block sizes are reasonably large and the call frequency is reasonably high, these would be candidates.

Last but not least, the DSP is Turing complete (..) and can execute any ANSI-C or C++ code :)

Whether it makes sense to use the DSP, or not, highly depends on the application.

If the GPP load is already high, using the DSP can make sense even if it has to do a task that the GPP is better at.

If there's a task that the DSP is better at than the GPP, including the "DSP overhead", the DSP should be used.

Preliminary benchmarks indicate that the DSP consumes less power than the GPP when it is under load.

When the GPP is under load, additional use of the DSP seems to have little impact on the overall power consumption (e.g. in the fractal benchmark test scenario: ~1.5 W w/ DSP, ~1.25 W w/o DSP).

There have been a lot of ideas about how to use the DSP:

  • Accelerate video decoding (TI has some free-as-in-beer codecs available; not sure how easy it is to integrate them into e.g. VLC and whether they can replay real-world videos)
  • Accelerate graphics blitters
  • Accelerate audio mixers
  • Accelerate audio decoding (mp3/ogg)
  • Accelerate fixed point vector math
  • Accelerate OPL / FM sound in a DOS or Megadrive emulator (or MAME arcade platform)
  • Accelerate physics engines (i.e. run the entire physics engine on the DSP) (again, this probably only makes sense if the ARM is already busy w/ other things)

There are many variables regarding those ideas, though (i.e. which video codecs exactly, what kinds of sample and pixel formats exactly, ..)

Realistically speaking, it is more likely that some application-specific DSP code will be written that may later be refactored into a library (if the code is open or the original dev does this).
 
The main reason I wondered about OpenAL-Soft, PNG decode, JPEG decode and OGG decode is that a good number of intensive applications/games will perform these tasks. The OpenAL-Soft mixer is particularly interesting as it already runs on a separate thread and is completely hidden away from the user; at the moment the main CPU bears the cost of this mixing every frame, and if this could be moved to the DSP then all applications using OpenAL-Soft would get a bit of extra CPU time per frame. Could be a smallish, easy win for most developers.

I accept that blanket optimisations like these won't lead to the ultimate DSP usage, but they could mean that 25% of applications end up using the DSP rather than <1%.

Whatever happens, it will be very interesting! There are certainly more developers interested in this than ever.
 
In OpenAL you can enumerate sound devices and select the one you want to use. So the default sound device could be left as it is now and another device could be added that would use the DSP. It would be up to the developer to select the DSP device when needed.
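For illustration, roughly how that would look from the application side (the device name "Pandora DSP" is made up here; it would be whatever the hypothetical backend registers itself as):

Code:
  #include <stdio.h>
  #include <string.h>
  #include <AL/al.h>
  #include <AL/alc.h>

  int main(void)
  {
      /* Enumerate devices: a list of '\0'-separated names,
         terminated by a double '\0'. */
      const ALCchar *name = alcGetString(NULL, ALC_DEVICE_SPECIFIER);
      while (name && *name) {
          printf("device: %s\n", name);
          name += strlen(name) + 1;
      }

      /* "Pandora DSP" is a made-up name for the hypothetical backend. */
      ALCdevice *dev = alcOpenDevice("Pandora DSP");
      if (!dev)
          dev = alcOpenDevice(NULL);   /* fall back to the default device */

      /* ... alcCreateContext(), play sounds, ... */
      if (dev)
          alcCloseDevice(dev);
      return 0;
  }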
 
First of all, big thanks to bsp for getting the awesome work done on c64_tools, and of course to everyone who has helped him get things working and maintain the enthusiasm!  I've been a bit ignorant of the Pandora lately so coming back to the community and discovering you guys are in full swing on getting the DSP operating is really a nice treat!

I just wanna pitch in my 2c, fwiw, about the name: It's not too late to rename c64_tools to something else.  I urge you to consider doing this, because c64 and tools .. well .. isn't this a bit clashy with the good ol' C64 of POKE ilk?  Just sayin' .. it would be nice if we could easily search for this package with a unique name.

Anyway, it looks like things are going to be stable enough, soon enough, to support a bit of DSP development .. well, I've got the sources for Tony Hardie-Bick's unique DFM-1 filter, which emulates real analog circuitry, and I'm thinking it's almost the perfect thing to port to the Pandora DSP environment, for first kicks.. http://entitysynth.net/  (And hear a demo here:  http://jeremah.co.uk/dfm1ugen.html)

Is there going to be any effort to share a VM with the tools already set up among us, or is it really a matter of having to go ahead and get the tools installed ourselves? I frankly dread having to set up all the account business with TI.  If there is any way I can convince you guys with a working VM to put it into a Vagrant package, this would be the ultimate direction for things to go.  Maybe I can do this work?  I've moved a lot of my systems-development work tools into Vagrant, and find it a really superlative way to get stuff written .. so in case anyone else is interested in building a Vagrantfile we can share among us to have a common DSP toolchain configuration, I'm eager to explore this ..

I really appreciate the summary, bsp .. that really helps set the scene for a return to the Pandora DSP.  I'm *very* happy that you've given us c64_tools - this looks like almost exactly what we need to get things sorted and exploit the remaining power in the Pandora itself .. 
 
I just wanna pitch in my 2c, fwiw, about the name: It's not too late to rename c64_tools to something else.  I urge you to consider doing this, because c64 and tools .. well .. isn't this a bit clashy with the good ol' C64 of POKE ilk?  Just sayin' .. it would be nice if we could easily search for this package with a unique name.
Wasn't this discussed already in the announce thread? My stance on this is that it was TI's decision to name their DSP "C64". Yes, I could have chosen a different name and called the package e.g. "snørkelzørg", but I wanted the name to have a reference to the DSP's name. Besides, the name is quite irrelevant since only developers will see it, just like normal users will probably not notice that there's e.g. a piece of software called "dbus" running in the background.

Anyway, it looks like things are going to be stable enough, soon enough, to support a bit of DSP development .. well, I've got the sources for Tony Hardie-Bick's unique DFM-1 filter, which emulates real analog circuitry, and I'm thinking it's almost the perfect thing to port to the Pandora DSP environment, for first kicks.. http://entitysynth.net/  (And hear a demo here:  http://jeremah.co.uk/dfm1ugen.html)

I think I listened to demos of that synth before. It sounds good, although it's hard to judge without trying it out in practice.


The nice thing about synth engines is that they usually do not require a lot of code. A port of this would be a nice addition to the "core" DSP image, or it could be implemented as an overlay in case the "core" does not remain FOSS because of some video codec or whatever (the DFM-1 is GPL and would therefore "taint" the core image / not be compatible with the license of a closed-source video codec, for example).


For performance's sake, the code should be converted to fixed point (via iqmath), if that is possible (it uses double precision floats ATM).
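To give an idea of what such a conversion involves, here is a plain Q16.16 sketch (manual fixed point as a stand-in; the iqmath library provides ready-made macros for this style of arithmetic):

Code:
  #include <stdint.h>

  typedef int32_t q16;                    /* Q16.16 fixed point         */
  #define Q16(f) ((q16)((f) * 65536.0))   /* float constant -> Q16.16,
                                             e.g. Q16(0.1) for a gentle
                                             lowpass coefficient        */

  /* Multiply two Q16.16 values: widen to 64 bit, then shift back. */
  static inline q16 q16_mul(q16 a, q16 b)
  {
      return (q16)(((int64_t)a * b) >> 16);
  }

  /* One-pole lowpass. Float version:  y += c * (x - y);
     The same filter in Q16.16: */
  q16 lowpass_q16(q16 x, q16 *y, q16 c)
  {
      *y += q16_mul(c, x - *y);
      return *y;
  }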

On the topic of synths in general: Once we have a synth engine (or multiple ones) on the DSP, we should think about defining a standard interface to access these things. I mentioned that I have been developing a music editor (since 2007) which supports not only VST but also a custom synth plugin interface that is a lot simpler than VST (but has fewer of the bells and whistles that are required for editors). The reason for that was that I needed a portable plugin interface for a standalone replayer, so that music made in the editor can be replayed in games/demos/.. in realtime, i.e. no lossy mp3 exports necessary.
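Purely as a strawman (the names are made up; this is not the actual interface of that editor), such a standard interface would not need to be much more than a struct of callbacks:

Code:
  /* Hypothetical minimal synth plugin interface -- a sketch of the
     shape such a standard could take, not an existing API. */
  typedef struct synth_plugin_s {
      const char *name;

      void (*open)     (struct synth_plugin_s *self, int sample_rate);
      void (*close)    (struct synth_plugin_s *self);

      void (*note_on)  (struct synth_plugin_s *self, int note, int velocity);
      void (*note_off) (struct synth_plugin_s *self, int note);
      void (*set_param)(struct synth_plugin_s *self, int id, int value);

      /* render the next block of interleaved 16-bit stereo samples */
      void (*process)  (struct synth_plugin_s *self,
                        short *out, int num_frames);

      void *user_data;  /* per-instance state */
  } synth_plugin_t;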


The editor, and especially the replayer, have not been publicly released yet. Some previews are available but a lot has changed since then, so I will not link to them.


Just wanted to mention this since you might want to keep that in mind.

Is there going to be any effort to share a VM with the tools already set up among us, or is it really a matter of having to go ahead and get the tools installed ourselves? I frankly dread having to set up all the account business with TI
I might be wrong but usually this kind of software is prohibited from being redistributed.

(..TI probably would not care much if someone sent you a VM image through some back channels (not me, I'm not using a VM anyway)..)

Besides, setting up an account at ti.com is no biggie. Maybe you can even use http://mailinator.com/ or similar to register / get to the cgtools download link.


As a reminder, the following packages should be downloaded to compile the DSP side sources in the c64_tools package:

     CGTOOLS: https://www-a.ti.com/downloads/sds_support/TICodegenerationTools/download.htm (v7.4.5)

     DSPBIOS: http://software-dl-1.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/bios/dspbios/5_42_01_09/index_FDS.html

      IQMATH: http://software-dl.ti.com/dsps/dsps_public_sw/c6000/web/c64p_iqmath/latest/index_FDS.html

     FASTRTS: http://software-dl.ti.com/dsps/dsps_public_sw/c6000/web/c62c64_fastrts/latest/index_FDS.html


In the announce thread, crow_riot said:

What comes to my mind is: would it somehow be possible to get the hardware mixer's address into the DSP API, like we have the framebuffer address, so that we could directly output sound from the DSP?
The answer is: yes, it is. The question is: is it really worth the effort?


It's not just about getting the hw addresses, you would also have to know which sound buffer is currently the back buffer and which one is the front buffer (the one currently being replayed).


There's no technical reason why it should not be possible to add support for this feature to e.g. ALSA, but since audio uses very little bandwidth (~0.183 MBytes/sec for 16-bit @ 48 kHz stereo) the required kernel brain surgery might not be worth the gain.


When I got my Pandora I took a look at the Linux audio driver (and its dependencies) to see if the driver could be ported to the DSP with little effort. My conclusion was that this a) would not require "just some little effort", b) would interfere with the ARM-side driver anyway, and again c) the performance gain would not justify the effort.

I really appreciate the summary, bsp .. that really helps set the scene for a return to the Pandora DSP.  I'm *very* happy that you've given us c64_tools - this looks like almost exactly what we need to get things sorted and exploit the remaining power in the Pandora itself ..
Thanks :) It might be a while until there's a Pandora successor, if ever, so we really should make the most of the hardware we already have.
 
I wonder if Pure Data is DSP-friendly.
I took a quick look -- its audio engine does not seem to support multiple processor cores, i.e. when run on a quad-core system it only utilizes one of the four cores.

The PD forums mention a workaround for that which they call "the poor man's multithreading": Run the PD UI in one process and the audio-engine in another process.

I do not know about the PD internals and its implementation / dependencies but it should theoretically be possible to run the UI on the GPP and the audio-engine on the DSP.
 
I'm still not quite sure how the DSP interacts with the system: do you have to send it data and get a response back asynchronously, or can you point it to a block of memory and it performs updates on that?

A couple of ideas I thought might be useful, if possible:

1. Streaming/capturing video of the Pandora's screen. Could you set it to the framebuffer or something and stream it out to a video file or out to some internet location?

2. Particle system. I was thinking you point it to a particle array, and it just constantly loops, applying movement and decay. (Could even be used against normal game objects as well?)

The first one I wouldn't have a clue how to implement anyway, but was wondering. The second, I'm kind of interested in, with a project idea I have.
 
I'm still not quite sure how the DSP interacts with the system: do you have to send it data and get a response back asynchronously, or can you point it to a block of memory and it performs updates on that?


A couple of ideas I thought might be useful, if possible:


1. Streaming/capturing video of the Pandora's screen. Could you set it to the framebuffer or something and stream it out to a video file or out to some internet location?


2. Particle system. I was thinking you point it to a particle array, and it just constantly loops, applying movement and decay. (Could even be used against normal game objects as well?)


The first one I wouldn't have a clue how to implement anyway, but was wondering. The second, I'm kind of interested in, with a project idea I have.
The 2nd point is more along the lines of what I was thinking could be done via DSP. If I remade CromoZome I could have the AI running on the DSP and the GPP would just handle the redrawing of objects. I figure it doesn't care what current state the DSP is in and just dumbly redraws the current positions of all the snakey segments.

This would be similar in drawing the positions of particles while the DSP continuously calculates their positions.
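A rough sketch of what such a DSP-side particle loop could look like (plain C; the particle layout and function names are made up for illustration, and fixed point is used so the DSP never has to touch floats):

Code:
  #include <stdint.h>

  /* Hypothetical particle record in a buffer shared with the GPP;
     the GPP only reads x/y out of the same buffer when drawing. */
  typedef struct {
      int32_t x, y;     /* position (Q16.16)               */
      int32_t vx, vy;   /* velocity (Q16.16)               */
      int32_t life;     /* remaining life, <= 0 means dead */
  } particle_t;

  /* One simulation step: movement plus decay, run per frame (or
     continuously) on the DSP. */
  void particles_step(particle_t *p, int num,
                      int32_t gravity, int32_t decay)
  {
      int i;
      for (i = 0; i < num; i++) {
          if (p[i].life <= 0)
              continue;              /* dead particle, skip */
          p[i].vy   += gravity;      /* apply movement ...  */
          p[i].x    += p[i].vx;
          p[i].y    += p[i].vy;
          p[i].life -= decay;        /* ... and decay       */
      }
  }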
 
Actually, with the point of using the DSP for blitting, does that mean that Notaz's SDL would be modified to use the DSP internally? So our regular SDL_BlitSurface is automatically passed to the DSP?

This could/would provide some bonus to games without needing a rebuild.

If I remade CromoZome I could have the AI running on the DSP and the GPP would just handle the redrawing of objects
From my understanding, you'd be doing that the other way around - DSP for drawing, and GPP for AI and game logic. I'm sure somebody will correct me if I'm wrong though
 
1. Streaming/capturing video of the Pandora's screen. Could you set it to the framebuffer or something and stream it out to a video file or out to some internet location?
You can't access files or Internet from DSP, as all drivers run on ARM.
Actually, with the point of using the DSP for blitting, does that mean that Notaz's SDL would be modified to use the DSP internally? So our regular SDL_BlitSurface is automatically passed to the DSP?
It could be done but would not be faster than doing it with NEON+preloads, which is what's mostly done now. Of course there would be the benefit of having the ARM free while the DSP does stuff; do we actually have any SDL 1.2 games that are too slow?
 
You can't access files or Internet from DSP, as all drivers run on ARM.
I assume you could take the framebuffer, encode it on the DSP, and then send it to an ARM application to construct the video or stream it out, though, right?
do we actually have any SDL 1.2 games that are too slow?
Not that I'm aware of. So, yeah, pointless exercise I guess (well, maybe not pointless, but not really worth it)

I've got so many questions, but it feels rather pointless asking them. I've not done any coding in over a month. I have a game that's probably a couple of hours work to finish up and release, just not had the time.
 
You can't access files or Internet from DSP, as all drivers run on ARM.
I assume you could take the framebuffer, encode it on the DSP, and then send it to an ARM application to construct the video or stream it out, though, right?
Yeah, but we'd need a DSP video encoder of some sort, and the ARM would need some CPU time to read out the buffers and send them to SD regularly. If such an application is made, people would most likely want sound too..
 
Would there be enough time to capture both?

Is it possible to capture the video regardless of what context is used for drawing? I.e. does drawing via EGL or GLES still create the rendered screen in the framebuffer, or would it require a different method of capture?
 
If I remade CromoZome I could have the AI running on the DSP and the GPP would just handle the redrawing of objects. I figure it doesn't care what current state the DSP is in and just dumbly redraws the current positions of all the snakey segments.
This means some kind of syncing is needed. First calculate the AI for 10 objects on the GPP. Then let the DSP draw those objects while the AI handles the next 10 objects. For the first 10 objects no speed improvement is gained, but all the remaining objects (except the last 10) are calculated and drawn at the same time. So it could be up to twice as fast (depending on a lot of stuff).
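In rough pseudo-C, per frame (dsp_draw_async() / dsp_wait() are made-up stand-ins for whatever message passing c64_tools actually provides):

Code:
  enum { BATCH = 10 };

  typedef struct { int x, y; /* ... */ } object_t;

  void run_ai(object_t *o, int n);          /* GPP-side game logic       */
  void dsp_draw_async(object_t *o, int n);  /* hypothetical DSP call     */
  void dsp_wait(void);                      /* wait for the DSP to idle  */

  /* Pipelined frame: while the DSP draws batch N, the GPP already
     runs the AI for batch N+1. */
  void frame(object_t *objs, int num)
  {
      int i;
      for (i = 0; i < num; i += BATCH) {
          int n = (num - i < BATCH) ? (num - i) : BATCH;
          run_ai(&objs[i], n);          /* GPP: update this batch       */
          dsp_wait();                   /* previous batch done drawing  */
          dsp_draw_async(&objs[i], n);  /* DSP draws, GPP moves on      */
      }
      dsp_wait();                       /* drain the pipeline           */
  }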
 
The 2nd point is more along the lines of what I was thinking could be done via DSP. If I remade CromoZome I could have the AI running on the DSP and the GPP would just handle the redrawing of objects. I figure it doesn't care what current state the DSP is in and just dumbly redraws the current positions of all the snakey segments.

This would be similar in drawing the positions of particles while the DSP continuously calculates their positions.

Sorry for the late reply, but I read your post a while ago and just for fun did the opposite thing and implemented a particle ("sprite") renderer on the DSP -- the GPP just updates the sprite positions.

From my understanding, you'd be doing that the other way around - DSP for drawing, and GPP for AI and game logic. I'm sure somebody will correct me if I'm wrong though

yep, sounds reasonable, at least from what my tests so far have shown.

Actually, with the point of using the DSP for blitting, does that mean that Notaz's SDL would be modified to use the DSP internally? So our regular SDL_BlitSurface is automatically passed to the DSP?
It could be done but would not be faster than doing it with NEON+preloads, which is what's mostly done now. Of course there would be the benefit of having the ARM free while the DSP does stuff; do we actually have any SDL 1.2 games that are too slow?

(@notaz) Granted, you posted this a while back, but what do you think now: is this still true?


The "dsprite" DSP example renders its (alpha-tested / alpha-blended) sprites at approximately 113 Mpix/sec (ARGB32) resp. 226 Mpix/sec (RGB565; the latter still to be implemented, though, along with other more obscure formats like 8-bit indexed sources). The important thing here is the memory throughput (read src/dst + write dst).


Disregarding that the ARM would be completely busy with rasterizing, is the NEON+preloads approach really faster than that?
 
Actually, with the point of using the DSP for blitting, does that mean that Notaz's SDL would be modified to use the DSP internally? So our regular SDL_BlitSurface is automatically passed to the DSP?
It could be done but would not be faster than doing it with NEON+preloads, which is what's mostly done now. Of course there would be the benefit of having the ARM free while the DSP does stuff; do we actually have any SDL 1.2 games that are too slow?
(@notaz) Granted, you posted this a while back, but what do you think now: is this still true?


The "dsprite" DSP example renders its (alpha-tested / alpha-blended) sprites at approximately 113 Mpix/sec (ARGB32) resp. 226 Mpix/sec (RGB565; the latter still to be implemented, though, along with other more obscure formats like 8-bit indexed sources). The important thing here is the memory throughput (read src/dst + write dst).


Disregarding that the ARM would be completely busy with rasterizing, is the NEON+preloads approach really faster than that?
SDL games? Not that I am aware of.

But Allegro 4, yes. If you want to concentrate on DSP-izing a library, it had better be Allegro 4.4 and/or OpenAL...
 
This is a very simple audio synthesizer that could be put to use with our DSP:

 


Code:
  #include <math.h>
  #include <stdio.h>   /* for putchar() */
  int main()
  {
      int v, i, z = 0, n, u = 0, t = 0;
      for (v = -1;;) {
          for (n =
               pow(1.05946309,
                   "CWG[Cgcg[eYcb^bV^eW^be^bVecb^"[++v & 31] +
                   (v & 64) / 21), i = 999; i;
               putchar(128 + ((8191 & u) > i ? 0 : i / 8) -
                       ((8191 & (z += n)) * i-- >> 16))) {
              u += v & 1 ? t / 2 : (t = v & 6 ? t : n / 4);
          }
      }
      return 0;
  }

 

.. put the above function on the DSP side, write a buffer, read the buffer in the host program, spit it to audio .. (a buffer-fill version is sketched below)

 

(If I had the DSP tools set up, I'd do it - but since I don't right now, anyone of you guys want to try a quick synth DSP app?)
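For reference, here is the same generator reworked into the buffer-fill shape described above (a sketch only: synth_fill() is a made-up name, the state lives in statics so it can be called once per audio block, and the output is 8-bit unsigned samples, traditionally played back at 8 kHz):

Code:
  #include <math.h>

  /* Same math as the program above, but filling a caller-supplied
     buffer instead of writing to stdout. */
  static int v = -1, i = 0, z = 0, n = 0, u = 0, t = 0;

  void synth_fill(unsigned char *buf, int len)
  {
      int k;
      for (k = 0; k < len; k++) {
          if (i == 0) {               /* advance to the next note */
              n = (int)pow(1.05946309,
                           "CWG[Cgcg[eYcb^bV^eW^be^bVecb^"[++v & 31] +
                           (v & 64) / 21);
              i = 999;
          }
          u += v & 1 ? t / 2 : (t = v & 6 ? t : n / 4);
          buf[k] = 128 + ((8191 & u) > i ? 0 : i / 8) -
                   ((8191 & (z += n)) * i-- >> 16);
      }
  }

Note that pow() only runs once per note, so even in floating point it would be cheap (FASTRTS would cover it on the DSP).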
 
@torpor:

harhar.. that really puts a Korg Radias (which uses a much slower version of the TI DSP) to shame :p

For those who don't know about these kinds of hacks, google for "Experimental music from very short C programs". It's just 8-bit-style random blips and bleeps..

Didn't you want to port the DFM-1?
 
 