[announce] c64_tools (DSP loader and IPC)


Binaries for games most likely won't have big arrays of vector multiplies to do the 3D work - they'll use the graphics card to do that work via a DirectX or OpenGL interface, I think.
Old games did the vertex transformation on the CPU, not the GPU: transform the geometry on the CPU, then do lighting and pixel filling on the GPU. My laptop (mobile Sempron with an AMD onboard GPU) only has a hardware pixel shader, too.
 
Binaries for games most likely won't have big arrays of vector multiplies to do the 3D work - they'll use the graphics card to do that work via a DirectX or OpenGL interface, I think.

I too would like to understand better the difference between the NEON lump of silicon and this DSP. In my mind, a DSP is something that implements a SIMD methodology for working on streams of data, and that's about as far as I understand them - and that's exactly how NEON is described. Like you say, NEON can handle floating point numbers (albeit at only 32-bit resolution, apparently) while the DSP is integer only, but other than that, what's the difference?
The DSP is a VLIW processor. Its instructions contain up to 8 separate operations, literally 8 separate 32-bit instructions. That makes it more like MIMD than SIMD (although it has some SIMD thrown on top of that). You can basically think of it as a processor with 4 execution units, where each unit can handle different sorts of things and you tell each unit what you want it to do each cycle.. only that there are two of these blocks of 4 units, so 8 total. There's no interlocking, no branch prediction, and everything is pipelined with many operations having tons of "delay slots" - so you have to be very mindful of what operations are currently in flight when you go to issue new ones. It's not easy to write efficient assembly for.
 
@rohezal Ah okay. I thought 3D libraries and graphics cards were advanced enough by then for games to have given up on software rendering, but I'm probably wrong. Either way, pick another game and your points still stand.


@Exophase Thanks for the explanation. So the processor fetches 32x8=256 bits each tick, and each 32-bit instruction is fed into something like a simple processor which doesn't manage any of the dependencies on the other processing units, whereas NEON just takes a single instruction each tick and runs it on two vectors of four 32-bit numbers to produce four 32-bit results (or two 64-bit integers). Interesting, but I'll not ask any of the questions running around my head without doing a bit more research first.
 
I started working on a graphical demo / DSP benchmark today (I used an old demo effect from the '90s, very FPU intensive).

Sorry, no binaries yet. But here are some preliminary benchmark results: I wanted to see how the DSP performs at single precision IEEE float ops, which have to be emulated since the DSP in the Pandora has no FPU (I want a C67!).

I have to say, I'm quite impressed. The float emulation in TI's compiler / libs is pretty good.

My 'benchmark' renders a Julia attractor fractal (thanks to Jet for the original code, btw).

(this includes GFX output via SDL or /dev/fb0, btw)

This is when the whole frame is rendered entirely by the Cortex A8:


[...] 300 iterations in 4839 millisecs.

This is when the C64+ DSP renders half of the frame in parallel to the GPP:


[...] 300 iterations in 7521 millisecs.

i.e. the ARM needs ~2419.5 millisecs for half the frame, the DSP needs ~5101.5 millisecs for the other half.

At first I thought -- boy, that sucks, that's slow. But on second thought I find it quite impressive that the DSP software float emulation is half as fast as the hardware FPU of the GPP (to be fair: the DSP runs @800MHz, the GPP @1000MHz, so the DSP software float emulation is actually performing at ~63.1% of the GPP hardware FPU speed if both were running at the same clock rate).

p.s.: I used TI's "mthlib/include/fastrts62x64x.h" routines (see http://processors.wiki.ti.com/index.php?title=Software_libraries#FastRTS). When the code is compiled with the regular float emulation, it takes 18935 milliseconds for one frame (i.e. 16515 ms for the DSP's second half of the framebuffer). That's slow, so use that library if you want performance (mpysp(), addsp(), subsp(), divsp(), ... calls). I also tried the "inlined" version of the library ("C_fastRTS"); it did not produce correct results, so you may want to avoid it.
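To make that concrete, here is a minimal sketch (julia_step() and its signature are made up for illustration, not the demo's actual code) of what replacing the float operators in an inner loop with those FastRTS calls looks like:

Code:
#include "fastrts62x64x.h"  /* TI FastRTS soft-float routines */

/* one Julia iteration z' = z^2 + c, written with explicit FastRTS calls
   so the compiler does not emit the slow generic float-emulation helpers */
static inline void julia_step(float *zr, float *zi, float cr, float ci)
{
    float zr2 = mpysp(*zr, *zr);
    float zi2 = mpysp(*zi, *zi);
    float zri = mpysp(*zr, *zi);

    *zr = addsp(subsp(zr2, zi2), cr);  /* zr' = zr^2 - zi^2 + cr */
    *zi = addsp(addsp(zri, zri), ci);  /* zi' = 2*zr*zi + ci     */
}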

p.p.s.: In case you have CCS installed: make sure to update to the latest cgtools (23-Aug-2013, version 7.4.5).

p.p.p.s.: What I said about having to reboot the Pandora each time the DSP image changes: that's not quite true. I have not really figured out when the reboot is necessary, but usually unloading the kernel module ("rmmod c64"), waiting a couple of secs, then re-running ./go64.sh after updating the DSP image seems to work most of the time.

p.p.p.p.s: Needless to say, this is a worst-case scenario of using the DSP, but I figured that many devs will just want to port existing code w/o worrying much about DSP-specifics. My verdict: TI (..and in particular: fastRTS) rocks.
 
Don't forget that VFP on the Cortex-A8 is also slow. Figure about 7 cycles per instruction for single precision. Much worse if you're doing a bunch of divisions. Also huge penalties if you're converting from floats to ints, or if you're doing flow control based on floating point comparisons. Not as bad as full on emulated but still bad. The story would be very different if the ARM side was optimized to use NEON. Yes, TI's soft-float is good, but still something I'd generally avoid at all costs.
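For comparison, a NEON version of such an inner-loop step might look like this (a sketch processing four points per iteration; julia_step4() is illustrative, not from the demo):

Code:
#include <arm_neon.h>

/* one Julia iteration for four points at once (4 x single precision) */
static inline void julia_step4(float32x4_t *zr, float32x4_t *zi,
                               float32x4_t cr, float32x4_t ci)
{
    float32x4_t zr2 = vmulq_f32(*zr, *zr);
    float32x4_t zi2 = vmulq_f32(*zi, *zi);
    float32x4_t zri = vmulq_f32(*zr, *zi);

    *zr = vaddq_f32(vsubq_f32(zr2, zi2), cr);  /* zr' = zr^2 - zi^2 + cr */
    *zi = vaddq_f32(vaddq_f32(zri, zri), ci);  /* zi' = 2*zr*zi + ci     */
}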
 
what's much worse ATM is that I installed the latest cgtools.. I have no clue what that Windows setup magic did, but my regression tests show a factor 8+ performance degradation regarding TC_03 (IRQ benchmark, now 36+ seconds instead of 4). Reverting to yesterday's source snapshot did not change anything (and yes, I did re-select the older compiler toolchain in the IDE, and according to the build logs it was used). Re-installing CCS right now. Damn, I hate computers sometimes.

(and now that the CCS setup is finally complete: nope, still damn slow. The setup was downloading tons of (updated) stuff from TI servers, though).

the precompiled binaries from my last release still run fast, so it's no HW issue. W.T.F. It's almost 6 am here on the other side of the ocean, so I'll have to sleep now.

EDIT: my bad, my bad. I compared the binaries, they are exactly identical except for the timestamp. The slowness was due to a recent change in libc64. Yep, that's what happens if you stay up too long. Earlier this evening I was having quite some stability issues while running a stresstest (the fractal demo) that renders into a CMA-allocated block. The system froze randomly, even without any DSP interaction. Have to look into that later today.

EDIT#2 [7:39 am]: OK, I think I pinpointed the problem. It was a harmless-looking volatile pointer into the shared memory (went back to the mem. gap config, which did not change a thing, so CMA seems to work fine). Removing the volatile attribute fixed it. Unbelievable, what is it with those compilers and volatile pointers?! How can this freeze the entire OS?! (I also ran into the next issue -- the dsp_cache_inv() function causes instabilities / occasional freezes, it seems (enabled that one once the last problem was solved). Well, you can still see the ARM cachelines refresh ATM, so this is not ready for release yet. I am happy for now, though; at least the graphics test has been running w/o crashes (and constant app restarts every 300 frames) for ~15 minutes. Time to hit the sack. *phew*
 
So I guess I'll wade through the hassle of getting Code Composer running in my environment (no Windows box available at the moment, so I'll have to set one up first..) because this thread is getting *VERY* exciting, and I'd like to see what sort of basic synthesizer could be built for the Pandora DSP, at last .. ;)
 
Please do not use floating point math in your synth, if that's possible.

Yesterday's conclusion regarding the FPU emulation speed was flawed, by the way: in the source code, it looks like this

  process_dsp();

  process_gpp();

but that does not mean that this is serial code (of course it is not!). process_dsp() just sends a message to the DSP, so the pixel processing is done in parallel. This means that the DSP takes 7521 millisecs for half a frame while the ARM does its half in ~2419 millisecs. I.e. single precision IEEE software FPU emulation on this DSP is ~3.1 times slower than the ARM's hardware FPU (2.48 times if they were running at the same clock rate). That makes much more sense!
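In other words, the frame loop effectively does this (a schematic sketch; dsp_wait_done() is a made-up name for the sync point, not the actual c64_tools call):

Code:
static void render_frame(void)
{
    process_dsp();    /* non-blocking: posts a "render your half" message to the DSP */
    process_gpp();    /* meanwhile the ARM renders its own half of the frame         */
    dsp_wait_done();  /* hypothetical sync point: block until the DSP reply arrives  */
}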

(this is still not bad, though -- even if an application uses the DSP just for float vector/matrix processing, it's almost like having a ~330 MHz faster ARM (GHz edition here). There is little RAM I/O needed for this, so the DSP should not have a negative impact on the GPP/application performance.)

Ah yes, I left the graphics/float benchmark/stresstest running overnight and it did not crash for ~5 hours.

(heh and you really know that you have stayed up too long if you are going to bed and reach for the lamp in order to turn off the sunlight. d'oh!)
 
HEUREKA! :)

now this is worth a double post (disregarding the netiquette and all ;) ):

I just tried out TI's IQmath library. It is basically fixed-point math, but you have 31 variants of all functions that let you select the number of integer/fractional bits. You can mix datatypes. Pretty sophisticated.
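To give a flavor, here is a sketch of the same Julia step in Q15.16 fixed point (assuming the _IQ16* naming from TI's IQmath documentation; the exact header name and the demo's real code may differ):

Code:
#include "IQmathLib.h"  /* TI IQmath (header name assumed, may differ on C64x+) */

/* one Julia iteration in Q15.16: multiplies go through the IQmath helper,
   while adds/subtracts are plain integer ops in the same Q format */
static inline void julia_step_iq(_iq16 *zr, _iq16 *zi, _iq16 cr, _iq16 ci)
{
    _iq16 zr2 = _IQ16mpy(*zr, *zr);
    _iq16 zi2 = _IQ16mpy(*zi, *zi);
    _iq16 zri = _IQ16mpy(*zr, *zi);

    *zr = zr2 - zi2 + cr;
    *zi = (zri << 1) + ci;  /* 2*zr*zi via shift */
}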

Now, after some careful study of the C64+ disassemblies, to make sure that software-pipelined loops and function inlining were working properly, I have some very good news:

Forget the previous benchmarks, I had a usleep() in there between frames, so both numbers were not really correct.

Check this out (now without usleeps):

Cortex A8 hardware floating point fractal benchmark:


[...] 300 iterations in 4453 millisecs.

C64+ DSP IQmath fractal benchmark:


[...] 300 iterations in 1393 millisecs.

Now THAT's the DSP as I remember it :) 3.19 times faster, and the framerate really shows it. That's going to be a nice graphics demo :)

EDIT: I accidentally compiled the GPP-only version with -g; the new number (4453) is with -O3. I also tried -pipe -march=armv7-a -mcpu=cortex-a8 -mtune=cortex-a8 -mfpu=neon in addition to -O3, but that did not improve the performance (about the same).
 
Does this mean the DSP can work faster with (maybe less accurate) emulated floating point operations than the CPU with hardware floating point operations (without NEON)?
 
p.p.p.s.: What I said about having to reboot the Pandora each time the DSP image changes: that's not quite true. I have not really figured out when the reboot is necessary, but usually unloading the kernel module ("rmmod c64"), waiting a couple of secs, then re-running ./go64.sh after updating the DSP image seems to work most of the time.
Hmm, that sounds like something gets cached either on the DSP or GPP side that should not really be.. Waiting probably gets that stuff evicted (while the OS does its housekeeping work) or something.

EDIT#2 [7:39 am]: OK, I think I pinpointed the problem. It was a harmless-looking volatile pointer into the shared memory (went back to the mem. gap config, which did not change a thing, so CMA seems to work fine). Removing the volatile attribute fixed it.
That also sounds like unwanted caching somewhere. Like you write something for the DSP, but it stays in the ARM's cache or write buffer and doesn't reach RAM. Removing volatile probably restructured things so that the critical data goes to a different cacheline that happens to get evicted by something else later.
 
@notaz: that still does not explain why the whole system would freeze (volatile).

I am having some trouble here with the cache ioctls (C64_CACHE_AC_INV and C64_CACHE_AC_WBINV in "dev.c").

Do you have an idea why calling the INV ioctl results in random system freezes?

Sometimes it works for up to several dozen seconds, sometimes it freezes after a few frames.

What's the proper way to invalidate and write back caches in ARM Linux?

(and btw: I dropped you an email yesterday because I'm having problems with your SDL (it crashes in SDL_Init() when the driver is set to "omapdss" via env. var., as described in your readme). Have not received a reply yet. I really could use the hardware scaler feature for the demo.)

EDIT: things I've tried so far: the original code (borrowed from CMEM); dmac_flush_range() (passing kvirt start/end, freezes the OS immediately); flush_cache_user_range() (probably the wrong call, cachelines are still visible, and it crashes, too, after a few frames; again kvirt start/end). The only thing that seems to work is flush_cache_all(), but that costs ~0.4 millisecs per frame (the demo has been running for a few minutes now). I would really like to avoid that, for obvious reasons.
 
It's strange that so much stuff is crashing for you; you might have some memory corruption in your module or something..

SDL_Init() with SDL_VIDEODRIVER=omapdss should not be crashing, especially since it's already used by so many things on the repo.

I'm not sure what's the right cache flush function in your case, this stuff is normally not needed for the things I do.

From a quick look, dma_sync_single_for_cpu() and dma_sync_single_for_device() functions seem to be suitable.
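For illustration, they pair up like this (a sketch; dev, handle and size stand in for whatever the module tracks for the shared buffer):

Code:
#include <linux/dma-mapping.h>

/* before the DSP reads a buffer the ARM just wrote: write back dirty lines */
dma_sync_single_for_device(dev, handle, size, DMA_TO_DEVICE);

/* before the ARM reads a buffer the DSP just wrote: invalidate stale lines */
dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);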

About your SDL question: it won't let you set the DSS base address for sure, I don't even think it will let you get it. You could access /dev/fb1 directly (fb1 because fb0 can't scale); with that you can at least get the physical fb address, and having the phys address you could access the fb from the DSP, I suppose?

Here is some example code showing how to use /dev/fb1 directly:

http://notaz.gp2x.de/cgi-bin/gitweb.cgi?p=pcsx_rearmed.git;a=blob;f=frontend/plat_omap.c;hb=HEAD#l29

omap_setup_layer_() sets up the layer on screen; you can have it at almost any size and position, it just has to fit on the screen. Note that it's the target layer size; the source image size is set separately like this:


struct fb_var_screeninfo fbvar;

ret = ioctl(fd, FBIOGET_VSCREENINFO, &fbvar);  /* read the current mode */
fbvar.xres = w;                                /* source image size     */
fbvar.yres = h;
fbvar.xres_virtual = w;
fbvar.yres_virtual = h;
fbvar.bits_per_pixel = bpp;
ret = ioctl(fd, FBIOPUT_VSCREENINFO, &fbvar);  /* apply the new mode    */

after this you can mmap(fd, ...) it to access it from ARM.
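That step might look like this (a sketch, using the w/h/bpp values set above):

Code:
#include <sys/mman.h>

/* map the layer's framebuffer into the process address space */
void *fb = mmap(NULL, w * h * bpp / 8, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);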

And here is how you get the fb physical address (do it after all other setup, some ioctls can move it):

Code:
struct fb_fix_screeninfo fbfix;
ret = ioctl(fd, FBIOGET_FSCREENINFO, &fbfix);
printf("physical address: %08lx\n", fbfix.smem_start);
 
Unlikely (but not impossible) that there's memory corruption in the module. There are no kmalloc() calls, for example, everything's statically allocated.

The SDL crash happens without the c64 kernel module loaded, too (wrote a very simple test app. to verify that).

Guess I'll have to get the sources from Git and see for myself then. For now I am opening /dev/fb0 in addition to SDL (without omapdss) so I can use vsync. I can probably get access to the scaler that way, too?!

I think I tried at least one of the dma_sync_single*() functions (sorry, did not mention that; it is commented out in the CMEM module, guess they found issues with it, too).

And hey, after seeing the heavy ARM CPU usage even when it was just waiting for the DSP, I just sat down and implemented support for select/poll :D

(was not that hard after all and I learnt something new).

So, CPU usage is now down to ~5-8% (from ~50% in my last release with the usleep() calls; I changed that to pthread_yield() in the meantime but still saw ~25% usage). The only disadvantage is that the synthetic RPC benchmark is now ~5 times slower, but if I really wanted to I could still get back the original speed with polling read()s. The fastcall RPCs are still fast (no surprise here).
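On the userspace side, the non-spinning wait might look roughly like this (a sketch; dsp_fd and the message struct are placeholders, not the actual c64_tools API):

Code:
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

static int wait_dsp_reply(int dsp_fd, void *msg, size_t msg_size)
{
    struct pollfd pfd = { .fd = dsp_fd, .events = POLLIN };

    /* sleep in the kernel until the DSP's reply arrives (1 s timeout) */
    if (poll(&pfd, 1, 1000) > 0 && (pfd.revents & POLLIN))
        return read(dsp_fd, msg, msg_size);  /* data is ready, won't block */

    return -1;  /* timeout or error */
}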

I also went back to the CMA config, of course. The graphics test has now been running for ~10 minutes; I'll leave it on for a while and get some food (and beer, for later ;) ).

Before I forget: I tried setting the DSS GFX_BA0/1 registers directly (and I used the right register; I could see that the content changed after SDL_SetVideoMode()). For some reason that did not really work. The register contained the right address, but the DISPC did not read it at the next frame start (edit: during one test run it did, but I do not know why it fails most of the time). Well, the proper solution would be to add a Pandora-specific ioctl to the FB driver that allows doing such things. The ioctl could then also check whether the address is in the CMA area.

And another thing: I was a bit puzzled to see that SDL apps require root privileges. That should not be necessary. Fixing the permissions of the /dev/fb* devices would probably do the trick. That's at least better than allowing each app root access to everything; I would not want an app to kill my filesystem because of a programming mistake (I am not even talking about bad intent here, I hope no Pandora dev would even think about that ;) ).

EDIT: about your suggestion regarding the DSP directly rendering into the regular FB memory: I did that a few years back. It works. I did not know that userspace apps can query the physical address of the FB, though. Back then (2008) I wrote a "magic" byte sequence into the FB, then searched the entire memory for that sequence to find the physical address (ugly hack, eh? it was just test code, anyway).

EDIT#2: My original intention today was to disable all interrupts while the cache flush is running. I suspect that the function is not re-entrant -- maybe some IRQ handler calls it, too. That could explain why the invalidate() crashes are so random. Notaz, do you by any chance know the easiest way to disable/enable all interrupts in Linux? (remember, I'm a kernel newbie)
 
The SDL crash happens without the c64 kernel module loaded, too (wrote a very simple test app. to verify that).
I would like to see that test app.

Guess I'll have to get the sources from Git and see for myself then. For now I am opening /dev/fb0 in addition to SDL (without omapdss) so I can use vsync. I can probably get access to the scaler that way, too?!
I've already explained the scaler in previous post.

And hey, after seeing the heavy ARM CPU usage even when it was just waiting for the DSP, I just sat down and implemented support for select/poll :D


(was not that hard after all and I learnt something new).


So, CPU usage is now down to ~5-8% (from ~50% in my last release with the usleep() calls; I changed that to pthread_yield() in the meantime but still saw ~25% usage). The only disadvantage is that the synthetic RPC benchmark is now ~5 times slower, but if I really wanted to I could still get back the original speed with polling read()s. The fastcall RPCs are still fast (no surprise here).
Heh, I told you polling loops are no good for ARM CPU usage. However, you could still provide a polling mode as an arg or a separate lib function in case someone needs it.

Before I forget: I tried setting the DSS GFX_BA0/1 registers directly (and I used the right register; I could see that the content changed after SDL_SetVideoMode()). For some reason that did not really work. The register contained the right address, but the DISPC did not read it at the next frame start
That's because they're shadow registers; you have to write to some other DSS register to commit all shadow registers to hardware (there is something about a bit called GO, but I don't remember the details).
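For anyone who wants to experiment anyway, here is a hedged sketch assuming the OMAP35x TRM's DISPC layout (DISPC_CONTROL at offset 0x440 from the DSS base, GOLCD = bit 5 -- double-check these before use; the ioctl route mentioned above is the cleaner fix):

Code:
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>

#define DSS_PHYS      0x48050000u  /* page-aligned DSS base (OMAP3, per TRM) */
#define DISPC_CONTROL 0x0440u      /* DISPC_CONTROL offset from DSS base     */
#define GOLCD         (1u << 5)    /* "commit shadow registers" bit          */

/* ask the DISPC to latch all shadow registers (GFX_BA0/1 etc.)
   at the next frame boundary */
static void dispc_go(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    volatile uint32_t *dss = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, DSS_PHYS);
    if (dss != MAP_FAILED)
        dss[DISPC_CONTROL / 4] |= GOLCD;
}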

And another thing: I was a bit puzzled to see that SDL apps require root privileges.
No they don't? Maybe you're using a custom distro or something? Or you have created a custom user and did not update its groups; by default the nonprivileged user is in the video group and can access the FBs (which means SDL can too).

EDIT: about your suggestion regarding the DSP directly rendering into the regular FB memory: I did that a few years back. It works. I did not know that userspace apps can query the physical address of the FB, though. Back then (2008) I wrote a "magic" byte sequence into the FB, then searched the entire memory for that sequence to find the physical address (ugly hack, eh? it was just test code, anyway).
So is this OK for you now, or do you still want that custom ioctl (which I'd rather avoid..)?
 
Sorry to interject into this valuable discussion, but I can't resist, because it's just amazing to see that, years after its release, the Pandora still has hidden capabilities!  Now, do we really need a P2 soon?  I'm not sure now...
 
hah, personally, I do not see the need for a P2. If I wanted to play Android games I would have gotten myself an Android device. OK, there are some things that could be improved in a P2, but it should be very similar HW-wise to the P1. But, different topic, there are enough other threads about this ;)

(EDIT: ..and I do not like x86 assembly, it's meant for compilers or people who like to punish themselves :) . If you want that, try coding the C64+ in assembly. But do not blame me if that drives you up the walls. I grew up with M68K, btw, and ARM reminds me a little bit of it (quite comfortable to write assembly code in). Different architecture, though, CISC vs. RISC, but you know that)

@notaz: I can send you the test app and the strace log. Right before SDL_Init() fails, it tries to open /dev/tty0 (fails), then /dev/vc/0 (fails), then tries an ioctl (VIDIOC_QUERYCAP or VT_OPENQRY) (fails), then there are two further ioctls (VT_GETSTATE and KDGKBMODE) which both fail with EINVAL. Then SDL calls it quits. (EDIT: check your gmail account)

My user is in the video group, that's right, and the permissions look OK.

Of course I do not have a custom distro, otherwise I would not be complaining ;)

And the scaler.. totally missed that part. Thanks for the info. But you also did not answer my question regarding interrupts, do you know how to do that? If that works we could get rid of the flush_cache_all() call in the kernel module.

And finally, about the polling loops: Linux is constantly evolving, and 10 years ago calling pthread_yield() actually had the effect of bringing the CPU usage down to practically 0%. Maybe this is a tickless kernel? (however that works in detail). But as I already said: polling is always bad practice, and the issue is fixed now.

About the shadow registers: I know they're shadow regs, another TI display controller I used in the past had them, too. But it did not have the "read shadow regs now" bit you were talking about. It just read the shadow regs at the start of each frame.

Regarding the new omapfb ioctl: I have not tried to write to the fb mem yet, but I do not expect any issues. It's just that I thought the physaddr could not be queried by apps (maybe that has changed at some point during the last 5-6 years).
 
@notaz: I can send you the test app and the strace log. Right before SDL_Init() fails, it tries to open /dev/tty0 (fails), then /dev/vc/0 (fails), then tries an ioctl (VIDIOC_QUERYCAP or VT_OPENQRY) (fails), then there are two further ioctls (VT_GETSTATE and KDGKBMODE) which both fail with EINVAL. Then SDL calls it quits.
You are running it over ssh/telnet/serial, I guess? Then you need a running X server and the environment variable DISPLAY=:0 set; otherwise it needs a real terminal (not the virtual one you get over ssh/telnet/whatever) to run without root.
But you also did not answer my question regarding interrupts, do you know how to do that? If that works we could get rid of the flush_cache_all() call in the kernel module.
That question wasn't there yet in the version of your post I replied to B) . Try this:
unsigned long flags;
local_irq_save(flags);    /* disable IRQs on this CPU, remember previous state */
...                       /* critical section: do the cache maintenance here   */
local_irq_restore(flags); /* re-enable IRQs if they were enabled before        */
 
Regarding the new omapfb ioctl: I have not tried to write to the fb mem yet, but I do not expect any issues. It's just that I thought the physaddr could not be queried by apps (maybe that has changed at some point during the last 5-6 years).
Git says that ability was added before git's history starts (2005).
 
Of course! I forgot about X11; I thought SDL was just probing for X11 and would be able to work without it (via /dev/fb* and other devices). Thanks! (I know about the DISPLAY var but did not set it because I thought I would not need it.)

And thanks for the info about the IRQs, I will definitely try that. I am currently working on a small FB utility module, so I will try out the scaler first.

(heh, the "problem" is that you reply too fast and I have the nasty habit of posting too quickly, then adding/correcting things afterwards ;) )
 