[announce] c64_tools (DSP loader and IPC)


@WizardStan / all :

I have just written a set of makefiles to build the DSP source code. Please find attached an update for / snapshot of the "c64_ccs_projects/" directory.

I have tested the build system on (x86) Linux (it should work on Windows with minor pathname modifications, I will try that tomorrow; not that important to me right now, though).

You have to download the following packages first:

   CGTOOLS: https://www-a.ti.com/downloads/sds_support/TICodegenerationTools/download.htm
   DSPBIOS: http://software-dl-1.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/bios/dspbios/5_42_01_09/index_FDS.html
  XDCTOOLS: http://software-dl-1.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/rtsc/3_25_03_72/index_FDS.html
    IQMATH: http://software-dl.ti.com/dsps/dsps_public_sw/c6000/web/c64p_iqmath/latest/index_FDS.html
   FASTRTS: http://software-dl.ti.com/dsps/dsps_public_sw/c6000/web/c62c64_fastrts/latest/index_FDS.html
 

I installed all packages to "$HOME/ti/" (warning: some package installation paths default to just "$HOME", you'll have to fix the pathname in the installer dialog).

Once set up, it should look like this (the name of your home directory will differ, of course):


bsp@nova:/bsp/pandora/c64_tools$ ls -1d /home/bsp/ti/*
/home/bsp/ti/bios_5_42_01_09
/home/bsp/ti/c64xplus-iqmath_2_01_04_00
/home/bsp/ti/fastRTS_c62xc64x_1_42
/home/bsp/ti/TI_CGT_C6000_7.4.5
/home/bsp/ti/xdctools_3_25_03_72

Now change to the "c64_ccs_projects/" directory (it will have to be renamed at some point since CCS is not needed anymore; you have moved it to where you untar'd the last "c64_tools" release, haven't you?) and run ". setenv.sh".

You can then build the libraries and the DSP image by issuing "m bin".

(the setenv script sets up "m" as an alias for calling make)
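
Putting the steps together (adjust the path to wherever you untar'd the last "c64_tools" release, of course):

  cd /path/to/c64_tools/c64_ccs_projects    # the directory from this attachment
  . setenv.sh                               # sets up the TI tool paths and the "m" alias
  m bin                                     # builds the libraries and the DSP image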

c64_ccs_projects-26Sep2013.tar.gz
 

There is a restriction on exporting the binaries to "evil" countries, but I saw nothing about non-redistribution.
 
edit: boards are being all double-posty today.
 
The openpandora server is under heavy load and not responding here; I have been trying for an hour or so.

EDIT: WTF was that? I could hardly read 100 bytes/sec from this server. Then I reset my router => back in business (coincidence, probably; I tried accessing it from other servers outside my IP network, timeouts all the way).

@WizardStan: see attachment to previous post (or my own site, in case that happens again)

p.s.: still a bit laggy but it seems to work OK again. Just wanted to mention that you currently do not need to download/install the XDCTOOLS package. It seems to be part of DSP/BIOS, so you can save yourself a ~200 MB download and a ~740 MB install.

@Linux-SWAT: yep, I also agreed to "I will not transfer or release products, technology, software or software source code of TI or its affiliates to, or for use by, military end users or for use in military, missile, nuclear, biological or chemical weapons end uses". I can live with that *G*. Although: TI were the first to demo / implement laser-guided missile systems back in 1965 ([src]: http://www.ti.com/corp/docs/company/history/timeline/defense/1960/docs/65-ti_demos_laser_guidance.htm).
 
@WizardStan: see attachment to previous post (or my own site, in case that happens again)
Excellent. Examples built. Everything seems to be in working order. Now to see what can be done.

This has all been extremely helpful, thank you.
 
I remember when there was a VU demo coding competition run by Sony, with categories for both professional entries and entries from the PS2 Linux community. For those that don't know, the PlayStation 2 had two vector units (processors): one of them can be used as a coprocessor, while the other must be treated as a separate processor (you write programs for it, upload them on demand at run time, feed data into the VU memory and then do work on that data). The competition provided a 'harness' application that would essentially keep a game loop running and upload a few bits of data into the start of VU1 memory (controller input, elapsed time, that sort of stuff), and the competition entrants would then write programs that ran solely on the VU1 vector unit.

Some more information: http://ps2linux.no-ip.info/playstation2-linux.com/projects/vudemocontest.html

YouTube of the winning (professional) entry: http://www.youtube.com/watch?v=ZwD7-bs9hss

It is worth noting VU1 has 16k of program memory and 16k of data memory (and also has direct access to the graphics interface).

Why am I mentioning any of this? Well, I wondered if there would be any interest in something like this being run for the DSP. A harness application would be created, entrants would write DSP programs, the agreed data would be put at the start of the DSP-accessible memory (controller input, elapsed time, whatever), and there could be a prize or two.
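
Purely as an illustration (nothing here is part of c64_tools, and the names/fields are just made up), the agreed data at the start of the DSP-accessible memory could look something like this:

#include <stdint.h>

/* hypothetical layout written by the harness at the start of the shared area */
typedef struct {
   uint32_t frame_counter;     /* incremented by the harness every frame      */
   uint32_t elapsed_ms;        /* time since the harness was started          */
   uint32_t button_state;      /* Pandora controls, one bit per button        */
   int16_t  nub_x[2];          /* left/right nub positions                    */
   int16_t  nub_y[2];
   uint32_t framebuffer_phys;  /* physical address of the output buffer       */
   uint32_t fb_width;          /* output buffer dimensions in pixels          */
   uint32_t fb_height;
} harness_shared_t;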

I am not necessarily suggesting the competition will or should generate any DSP programs that are particularly useful in their own right; it would be more about generating some interest, getting people writing DSP code, and so on.

Just a random thought...
 
I'd compete. bsp's most recent update makes it almost idiot-proof. Almost; I'm still having some problems. ;)
 
@Steven Craft: Yes, why not. In order for the compo to produce code that will actually be useful, it would make sense to pick a real-world use case (e.g. a graphics blitter; the DSP really excels at that, I know that for a fact. Most likely, no NEON code will beat it, except maybe for mere copy blits).
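
Just to illustrate the kind of kernel I mean (this is not code from c64_tools, the names are made up; "restrict" and MUST_ITERATE are TI C6000 compiler hints that let it software-pipeline the inner loop):

#include <stdint.h>

/* illustrative 50:50 blend blit for RGB565 surfaces */
void blend50_rgb565(uint16_t *restrict dst, const uint16_t *restrict src,
                    int w, int h, int dst_stride, int src_stride)
{
   int x, y;
   for(y = 0; y < h; y++)
   {
#pragma MUST_ITERATE(16, , 2)  /* promise to the compiler: w >= 16 and even */
      for(x = 0; x < w; x++)
      {
         /* average the two pixels (drops the LSB of each colour channel) */
         dst[x] = (uint16_t)(((src[x] >> 1) & 0x7BEF) + ((dst[x] >> 1) & 0x7BEF));
      }
      dst += dst_stride;   /* strides are in pixels */
      src += src_stride;
   }
}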

@WizardStan: You can PM me, and if the problem is not too complex, I can maybe help. Maybe this board should have a small "DSP corner" sub-forum? People could post their ideas / findings / code / questions there. (Admins, what do you think?)

Now what I was originally wanting to post:

Hi all,

 today I spent a little time with some more performance analysis.

 I had already posted some benchmark results, plus the binaries and source-code so you can verify the numbers yourself.

 So, this time I wanted to know what the power consumption looks like.

 First of all I have to thank __wb__ for writing the "System Info" tool, which I used to log the system performance (at 1s intervals).

 I ran three tests:

   1) GPP only fractal benchmark (DSP not running, kernel module not loaded)

   2) DSP only fractal benchmark

   3) Booting the DSP after letting the OS idle for about a minute, then unloading the kernel module and shutting off the IVA2 (DSP) subsystem.

 During all of the tests the screen and backlight were active, and WIFI was enabled.

 Before each test I rebooted the Pandora, just to be sure.


 Test 1: GPP only fractal benchmark

 
[plot: plot-power-breakdown_fractal_gpponly.png (see attachment)]



 The benchmark finished in ~59.5 seconds. (note: if I had used M-HT's optimized fixed-point version it would have been ~20 seconds.)

 As you can see, the ARM Cortex-A8 (GPP) is quite power hungry (well, relatively speaking) and the system consumes an extrapolated ~2.7 W. Running this continuously would therefore drain the battery (~15 Wh) in ~5.5 hours.

(EDIT: not sure how the system info tool determines the GPP wattage. The "actual power" should probably be used for comparisons. This would be ~1.8 W, i.e. the battery would be drained after ~8.3 hours.)

 Before the benchmark starts at ~26:00, you can see that the system consumes ~0.8 W in idle mode (=> battery life ~18.7 hours).
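
 For reference, all battery-life estimates here are simply capacity divided by average draw (assuming the ~15 Wh battery):

   15 Wh / 2.7 W ≈ 5.5 hours     (GPP benchmark, extrapolated power)
   15 Wh / 1.8 W ≈ 8.3 hours     (GPP benchmark, "actual power")
   15 Wh / 0.8 W ≈ 18.7 hours    (idle)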



 Test 2: DSP only fractal benchmark

 
[plot: plot-power-breakdown_fractal_dsponly.png (see attachment)]



 I have to say, the result surprises me (then again, I have never analyzed the power consumption of the DSP before).

 The benchmark finished in ~16 seconds.

 Once again, I let the system idle around for roughly a minute before booting the DSP at 30:30.

 At ~31:40 I started the (DSP only) fractal benchmark -- a small/short power consumption spike can be seen in the graph.

 Other than that, the power consumption of the DSP did not change much compared to when it's idle (and yes, I added an "IDLE" instruction call to the DSP image today so that it goes to sleep until the next interrupt (message) arrives).
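
 The idea is basically this (a sketch, not the actual c64_tools DSP-side source; the flag name is made up, and real code has to be careful about the race between the check and the IDLE):

extern volatile int msg_pending;   /* set by the mailbox interrupt handler */

void wait_loop(void)
{
   for(;;)
   {
      while(!msg_pending)
      {
         asm(" IDLE");   /* halt the DSP core until an enabled interrupt fires */
      }
      msg_pending = 0;
      /* ...dispatch the received message / RPC here... */
   }
}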

 During the benchmark, the system consumed ~1.1 W.

 This is pretty impressive! Not only is the DSP ~50% faster than the GPP although it is running at only 80% of the GPP clockrate (71.6% on CC/Rebirth Pandoras), but it consumes far less power than the GPP!

 Running this test continuously would drain the battery in approximately 13.6 hours (i.e. using the DSP for tasks like this extends the battery life by a factor of ~2.48).

(EDIT: assuming 1.8 W actual power consumption for the GPP-only test, the factor is ~1.63. Still good, >5 hours more!)
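
(These factors are simply the ratios of the battery-life estimates: ~13.6 h / ~5.5 h ≈ 2.5 and ~13.6 h / ~8.3 h ≈ 1.6.)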

 Yay!


 Test 3: DSP idle power consumption

 
[plot: plot-power-breakdown_go64.png (see attachment)]


 This test also produced a surprising result.

 First, I let the system idle (w/o DSP) for ~1.5 minutes.

 At 52:30 I booted the DSP and let the system idle some more until ~54:00.

 The power consumption went up from ~0.8 W to ~0.95 W.

 I.e., having the DSP running but doing nothing except waiting for a message interrupt from the GPP costs ~0.15 W and therefore reduces the battery life from ~18.7 hours to ~15.7 hours.

 At ~54:00 I unloaded the kernel module.
 I was expecting the power consumption to drop back to the initial ~0.8 W, but instead it increased(!) to ~1 W.

 Seems like the IVA2 shutdown does not work as expected, yet.

 A few pages back I said that the DSP idling in the background did not have a noticeable effect on battery life.
 These results now provide some more detailed figures. I would still say that, from a user point of view, it is barely noticeable.

 With the current version of "c64_tools" it is not recommended to let the DSP run all the time, though. Maybe there are some more "hardware tricks" to further reduce the power consumption in idle mode; I will take another look at the manuals. Fixing the powerdown sequence has a higher priority, though.
 

p.s.: I attached the benchmark binaries (for sources see previous posts), .csv logs and .png plots, in case you want to analyze this yourself

c64_perf_analysis-27Sep2013.tar.gz
 

I like Steven's idea a lot. This thread has awakened my interest in coding something for the Pandora's DSP, and it at least looks foolproof thanks to bsp's great work. Though I have no idea what could be done with it :D
 
In those plots, only the red data points are actual measurements. The "breakdown" is just an estimate; for the display it is quite a good estimate, but for the CPU it is a very rough one: it basically only looks at CPU usage and clock speed to estimate the power consumption, while in reality NEON code will consume more power than regular code, and the power-save states also matter. It ignores the DSP, the GPU, the USB host, audio out, etc.
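
Very roughly, the kind of estimate meant here is something like the sketch below; the coefficients are completely made up and this is not the actual System Info code:

/* illustrative CPU power estimate based only on usage and clock speed */
static float estimate_cpu_power(float cpu_usage,  /* 0.0 .. 1.0        */
                                float clock_mhz)  /* current CPU clock */
{
   const float idle_watts    = 0.05f;             /* assumed baseline        */
   const float watts_per_mhz = 0.5f / 600.0f;     /* assumed full-load W/MHz */
   /* no NEON vs. regular code distinction, no power-save states, no DSP/GPU */
   return idle_watts + cpu_usage * clock_mhz * watts_per_mhz;
}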

By the way: you may want to do your tests with the display off (lid closed) and wifi disabled.
 
Hi _wb_, are the "Actual Power" measurements reliable? I used those as the basis for my test conclusions; I figured that the other numbers are mere guesses/extrapolations. I benchmarked with display + WIFI on because that's a real-world scenario (this is a graphical/visual benchmark, after all), and people probably have their WIFI on most of the time. Last but not least, WIFI+display should just offset the benchmark figures. Your system info utility is pretty cool, btw. I ran the benchmarks with sysinfo showing the "About" page (i.e. no UI updates).

p.s.: the power consumption spikes that can be seen in the diagrams above during idle time are caused by me switching between the sysinfo window and the console (typing some command lines).
 
Heh, I am a dev, so I have WIFI on a lot. I did the benchmarks solely on the Pandora console, though, not via SSH.
 
My wifi toggle button vanished, but I've found that even while connected, unless I'm transferring something, the power consumption is pretty minimal.
 
I don't like having WIFI on because of the extra 1-2% or so of unpredictable performance degradation it adds. It drives me crazy when testing how many cycles a loop takes ;p

I do all my dev stuff from USB ethernet which is much faster/more stable than the wifi anyway.
 
That's quite understandable, although in this case the few cycles spent in the (idle) WIFI driver do not really make a difference.


Exophase, you should really get into C64x+ coding; the TI tools report exact cycle counts and can output commented .asm sources interleaved with the original C/asm code, if you want. This thing is a *BEAST*.
 