[announce] c64_tools (DSP loader and IPC)


Yes, the GPP and DSP address spaces match, since they both sit on the L3 (main) bus. The only difference is, like bsp said, that the DSP has its peripherals and caches overlaid at 00000000-10ffffff (the GPP sees GPMC there).

About corruption, I can think of these possible reasons:

- DSP bringup code touches something that affects NAND/SD controllers, perhaps some shared PLLs or something?

- DSP or GPP accidentally writes to 00000000-3fffffff, where GPMC (which controls NAND) is mapped

Since you can trigger the corruption quite easily, it would be interesting to know if starting the DSP multiple times (run, then rmmod c64) can trigger the issue. And yes, closing the lid may have an effect on all this, since it turns off the display controller and various clocks are turned off along with it, so maybe something the DSP needs is turned off too and that causes it to go crazy.
 
@notaz:

You can rule out "DSP accidentally writes to GPMC/NAND" since that memory area is simply not accessible to the DSP, or rather it has a different meaning (the DSP sees its regs/SRAM there).

The PLL you program for setting up the DSP clock is not shared by anything else, at least not according to the docs.

The DSP startup code / kernel module does not access any other PLLs or registers related to NAND.

Closing the lid should not have had any effect on the DSP, since it most likely was not even running/powered on at that time.

Since this is not a systematic failure and only affects two units so far, I'd say that this issue is caused by a hardware defect. Such things happen.


@M-HT:

Seems you are right about the -mv option. The compiler frontend nevertheless seems to handle it correctly (no unwanted map files are created). I moved the option to appear before -z and the resulting binaries are exactly the same.

The phys_addr field definitely stores the GPP physical address. dsp_physgpp_to_physdsp() is required only for the SRAM areas since GPP/DSP see them at different addresses.

As for your NAND issues: Sounds to me like every time the DSP is started on your unit, the NAND gets corrupted. Maybe ED can help you out with a new mainboard for a discount/developer fee..?
 
Seems you are right about the -mv option. The compiler frontend nevertheless seems to handle it correctly (no unwanted map files are created). I moved the option to appear before -z and the resulting binaries are exactly the same.
I think no unwanted map files are created, because you are overriding that option with another map file.

As for your NAND issues: Sounds to me like every time the DSP is started on your unit, the NAND gets corrupted. Maybe ED can help you out with a new mainboard for a discount/developer fee..?
I don't think it's every time. I'm keeping my Pandora for now.

The phys_addr field definitely stores the GPP physical address. dsp_physgpp_to_physdsp() is required only for the SRAM areas since GPP/DSP see them at different addresses.
After reading the source code, I can confirm you are right.


That means that in my tests I was using GPP physical addresses on the DSP side. It was working - maybe the data was going through the L3 interconnect instead of directly. I'll have to retest using DSP physical addresses.


That also means the documentation is misleading.


dsp_virt_to_phys() doesn't translate virtual GPP addresses to physical DSP addresses, but to physical GPP addresses.


Likewise dsp_phys_to_virt() doesn't translate physical DSP addresses to virtual GPP addresses, but it translates physical GPP addresses to virtual GPP addresses.


And of course dsp_mem_region_t.phys_addr is not a DSP physical address but a GPP physical address.
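
To make the corrected picture concrete, here's a minimal, self-contained sketch of the kind of translation dsp_physgpp_to_physdsp() has to perform for the SRAM areas. The base addresses and the helper below are made-up example values for illustration only, not the real ones from c64_tools or the OMAP3 TRM:

#include <stdint.h>
#include <stdio.h>

/* Example values only -- NOT the real Pandora/OMAP3 addresses; the real
 * ones live in the c64_tools sources and the OMAP35xx TRM. */
#define EX_SRAM_GPP_BASE  0x40000000u  /* where the GPP would see the SRAM      */
#define EX_SRAM_DSP_BASE  0x00800000u  /* where the DSP would see the same SRAM */
#define EX_SRAM_SIZE      0x00010000u

/* GPP physical -> DSP physical; only the SRAM window needs rebasing. */
static uint32_t ex_physgpp_to_physdsp(uint32_t gpp_phys)
{
    if (gpp_phys >= EX_SRAM_GPP_BASE &&
        gpp_phys <  (EX_SRAM_GPP_BASE + EX_SRAM_SIZE))
    {
        return (gpp_phys - EX_SRAM_GPP_BASE) + EX_SRAM_DSP_BASE;
    }
    return gpp_phys; /* external RAM: both sides use the same physical address */
}

int main(void)
{
    uint32_t gpp_phys = EX_SRAM_GPP_BASE + 0x1234u;

    printf("GPP phys 0x%08x -> DSP phys 0x%08x\n",
           (unsigned int)gpp_phys,
           (unsigned int)ex_physgpp_to_physdsp(gpp_phys));
    return 0;
}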

Edit:

Also, it's probably the reason why my IDMA test wasn't working.
 
@notaz:


You can rule out "DSP accidentally writes to GPMC/NAND" since that memory area is simply not accessible to the DSP, or rather it has a different meaning (the DSP sees its regs/SRAM there).
Not really; like I wrote above, the DSP area spans 00000000-10ffffff but GPMC is at 00000000-3fffffff, so you can still hit it from the DSP.

Could we have an automatic stress test mode in c64_tc that would cycle tests repeatedly and stop if it detects errors? Or at least make it return non-zero in such cases so that it can be scripted; right now it seems the only way to check for errors is by parsing the logs. I'd run it overnight on my CC and see if I can get the corruption issue.
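
Something along these lines would be enough for scripting (just a sketch of the idea; run_test_case() is a made-up stand-in, not the actual c64_tc test dispatch):

#include <stdio.h>
#include <stdlib.h>

/* Made-up stand-in for the real c64_tc test dispatch: pretend that
 * test case 46 produces a checksum mismatch in cycle 3. */
static int run_test_case(unsigned int cycle, int tc_idx)
{
    return (cycle == 3u && tc_idx == 46) ? -1 : 0;
}

int main(void)
{
    const int first_tc = 2, last_tc = 49;  /* same range as './c64_tc 2 49 0' */

    for (unsigned int cycle = 0; ; cycle++) {
        for (int tc = first_tc; tc <= last_tc; tc++) {
            if (run_test_case(cycle, tc) != 0) {
                fprintf(stderr, "cycle %u: test case %d FAILED\n", cycle, tc);
                return EXIT_FAILURE; /* non-zero exit so a script can detect it */
            }
        }
    }
}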

Edit: corrected GPMC end mask
 
Not really; like I wrote above, the DSP area spans 00000000-10ffffff but GPMC is at 00000000-3fffffff, so you can still hit it from the DSP.
...but writing to NAND is not as easy as writing to RAM since it is not memory-mapped, right?


I took a quick look at the NAND driver (e.g. drivers/mtd/nand.c) and how it's set up for the Pandora. NAND access seems to be done via DMA plus the GPMC prefetch engine. Reads/writes go through some kind of 4-byte port register (IO_ADDR_R/IO_ADDR_W) (@0x01000000?). The driver, or rather the NAND protocol/setup needed before any data can be written or read, seems quite complex. Seems very unlikely to me that this would happen by accident.


Anyways, regarding the automatic stresstest: Here it is. The tests were still running when I got home, i.e. no error so far (that's tens of thousands of test runs now).


I've attached the binary+source. Make sure to run "/usr/pandora/scripts/op_hugetlb.sh 16" and copy the updated demo_checksum*.out files before you start the test via


./c64_tc 2 49 0
The first arg is start_tc_idx, the second one is end_tc_idx, and the third is the number of cycles (0 = infinite).
 

Attachments

  • c64_tools-stresstest-10Jan2014.tar.gz (47.3 KB)
Not really; like I wrote above, the DSP area spans 00000000-10ffffff but GPMC is at 00000000-3fffffff, so you can still hit it from the DSP.
...but writing to NAND is not as easy as writing to RAM since it is not memory-mapped, right?

I took a quick look at the NAND driver (e.g. drivers/mtd/nand.c) and how it's set up for the Pandora. NAND access seems to be done via DMA plus the GPMC prefetch engine. Reads/writes go through some kind of 4-byte port register (IO_ADDR_R/IO_ADDR_W) (@0x01000000?). The driver, or rather the NAND protocol/setup needed before any data can be written or read, seems quite complex. Seems very unlikely to me that this would happen by accident.
Yeah, but those IO_ADDR_* are described in the manual as "This register is not a true register, just an address location" (see GPMC_NAND_ADDRESS_i), so I guess writing to IO_ADDR_W just sends your write somewhere into the 00000000-3fffffff region. So let's say NAND access is in progress through DMA/prefetch/whatever, and the DSP accidentally writes to the above region, which is GPMC space, and corruption happens.
Anyways, regarding the automatic stresstest: Here it is.
Thanks, will try to run it sometime later.
 
I redid my tests with L1DSRAM and L2SRAM, this time using DSP physical addresses and not GPP physical addresses.

The result is that calling cache_wb() or cache_inv() is not needed on the GPP side. That goes for both standard call and fastcall.

My IDMA test also worked.

And again, one time when testing L2SRAM I had wrong results.

And again, one time I lost xfce menu/icons and had to restart. This time I didn't need to reflash.

Maybe the NAND corruption occurs when I restart my Pandora and not before?
 
Yes, the cache* calls are definitely not needed when using L1 or L2 to pass data, neither on the GPP nor on the DSP side.

Regarding L2 and wrong results: Could you please run the 'c64_tc' app in stresstest mode overnight? (see the attachment to post #526)
The cmdline would be './c64_tc 46 49 0' (for the L2SRAM tests). The app aborts when a testcase fails (e.g. checksum mismatch).

About NAND and restart: Good question -- I don't know how you use your Pandora but I practically never reboot mine (I just suspend it).

If you handle it the same way, you could try a reboot stresstest, just to see whether you can reproduce the NAND issue this way. You could e.g. add an 'rc' script that checks whether a certain file exists on SD1 and, in that case, reboots the device after sleeping for a minute or so.
 
That didn't even take a minute.


[...] stresstest FAILED after numTestCycles=2 totalNumTests=10

___

I don't suspend my Pandora. I turn it on when I need it and turn it off afterward.
 
That doesn't look good.

The same tests have been running for ~20 hours now on one of my 3730 units, w/o any error so far.

Well, let's see what the result on notaz' 3530 unit will look like.
 
Fails too, but had to wait for a while:


[---] loc_tc_l2sram_rand_chksum_dsp: checksum MISMATCH (iter=1, GPP.chk=0x29a2bec6 DSP.chk=0xddf20000)
[---] loc_tc_l2sram_rand_chksum_dsp: FAILED after 1 iterations.
[---] test_cycle_idx=24, test_case_idx=46 ("TC_L2SRAM_RAND_CHKSUM_DSP") **FAILED**.
[---] stresstest aborted due to error. test_cycle_idx=24, tcIdx=46 ("TC_L2SRAM_RAND_CHKSUM_DSP").
[...] stresstest FAILED after numTestCycles=24 totalNumTests=97

Tested on a freshly reflashed system - finished the first boot wizard and ran the test from the SD card. Looked around and everything seems to be normal: the DSP ran at 430MHz, the VDD1 voltage was 1.35V, no errors in dmesg. No corruption though.

Edit: DSP.chk doesn't look very checksum-ish; could there be some race condition?
 
What makes you think that the wrong checksum is caused by a race condition? I couldn't think of any.

Just for the record, could you please run the tests on your 3730 unit? (the tests on mine are still running fine... 23+ hours and counting...)

To me it seems like there are a number of HW 'lemons' around -- apparently it's the older units that are affected by this.

OTOH, the L2D caches on my units never worked properly -- at least the rest did.

As far as I am concerned, I am not going to release any more SW for this since I don't want to waste any more time supporting this kind of sporadic failure.

If anyone thinks this is due to SW / programming errors, feel free to review the code and post a patch.
 
Ran it again on the 3530 and it locked up the system at test_cycle_idx=52, testcase=49.

It would be good to think of a way to debug this...

What makes you think that the wrong checksum is caused by a race condition? I couldn't think of any.
I'm just throwing random ideas around, feel free to ignore. The value doesn't look like a calculation result; it has to come from somewhere...
Edit: I can confirm the NAND problems; just poking random files while the stress test is running will eventually end up with all NAND transfers failing, ubifs threads being killed and the rootfs becoming inaccessible (this is on the 3530). No other information in dmesg, only a sudden stream of ECC errors that never recovers, and a system reset is needed.
 
Does this happen with "normal" DSP use? Meaning the DSP gets 1 MB of fixed RAM, decodes the data in the first 512 KB, and writes to the second 512 KB?

Or does it only happen with some exotic test like using all available RAM with the DSP? Does this happen on a Rebirth too?
 
OK, I've left the 1GHz unit running overnight plus some more (>12h) and it survived fine. Couldn't provoke NAND problems either by doing the same things that cause the CC to die. I even ran Firefox to eat all RAM; everything was still fine.

In fact, I've found an easy way to kill the CC - just run TC_TESTMSG in a loop and it dies in a second or two. Same for TC_RPC_ADD. The 3730 is fine with that, of course.

Does this happen with "normal" DSP use? Meaning the DSP gets 1 MB of fixed RAM, decodes the data in the first 512 KB, and writes to the second 512 KB?

Or does it only happen with some exotic test like using all available RAM with the DSP? Does this happen on a Rebirth too?
So yes, it happens even on the simplest test. Magic Sam said he has a Rebirth and is also affected.
 
I've commented out dsp_close() in c64_tc.c and all problems seemed to go away: TC_TESTMSG is stable, and while I'm not so sure about './c64_tc 46 49 0', it's been running fine for over an hour now. So something bad is going on during dsp_open/dsp_close on the 3530.

I've looked a bit at the code and there are some things I recommend changing, for example not hitting the clock registers directly but using the kernel APIs. For instance, instead of

/* Enable IVA2_CLK (set bit 1, bits 1..31 are reserved) */
reg32_write(IVA2_CM_FCLKEN, 1);

do

#include <linux/clk.h>

// on init
struct clk *iva2_clk = clk_get(dev, "iva2_ck"); // check with IS_ERR() and bail out on failure
clk_enable(iva2_clk);

// on unload
clk_disable(iva2_clk);
clk_put(iva2_clk);

This is better because the kernel knows about clock dependencies and, more importantly, about clock enable delays and various errata.

Looking at your reset code and comparing it to dspbridge ( https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/tidspbridge/core/tiomap3430.c#n264 ), you also don't wait for power transitions to finish.

If you don't want to work on this due to missing hardware (or whatever), I could try to mess with this sometime later.
 
Ok, thanks for testing. Good to hear that it's working fine on your 3730 unit, too. I canceled the stresstest on mine after it ran for ~2 days.

Maybe the older OMAPs don't like frequent DSP power cycling? The DSP is powered off after each testcase and immediately powered on when the next one starts.

When I implemented, tested and benchmarked the dsp_suspend() / dsp_resume() API calls, I measured some considerable power spikes (via _wb_'s sysinfo). _Maybe_ these can have unexpected side-effects on other SoC HW blocks, like NAND/GPMC (a wild guess, of course).

OTOH, when M-HT did his tests, he probably didn't do any kind of frequent power cycling.

Since you already found out that removing the dsp_close() call seems to improve the situation, could you please comment out line 933 in kmod/dev.c (dsp_poweroff()) and re-run the stresstest on your 3530?

Maybe we can pinpoint the issue to either the power on, or the power off sequence.

(and yes, I can't test/debug this myself since I don't have a CC/Rebirth Pandora)

Regarding power state transitions: I added a wait-loop for the 'on' state transition, please see the source attachment.

For the 'off' state there's no wait-loop; the power state is simply checked, as in previous versions. The transitions seem to be quite fast, which is why I skipped the wait loop(s), although I do remember that when I considerably decreased the udelay() after the OFF transition, setting it to <100 caused 'c64_pwrbench' to fail (not critically, though, i.e. no system freeze/reset).
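
For reference, the kind of 'on'/'off' wait-loop discussed here could look roughly like the sketch below. This is not the code from the attachment; reg32_read() is assumed to be the counterpart of the kmod's reg32_write(), and the PM_PWSTST_IVA2 address and bit masks are assumptions that would have to be checked against the TRM:

#include <linux/types.h>
#include <linux/delay.h>

/* assumed counterpart to the module's reg32_write() */
extern u32 reg32_read(u32 _physAddr);

#define EX_PM_PWSTST_IVA2     0x483060E4u  /* assumed IVA2 power state status reg  */
#define EX_INTRANSITION_MASK  (1u << 20)   /* assumed "transition in progress" bit */
#define EX_POWERSTATEST_MASK  0x3u         /* assumed current power state field    */

/* Poll until the IVA2 domain has settled in the requested power state instead of
 * relying on a fixed udelay(). Returns 0 on success, -1 on timeout. */
static int dsp_wait_power_state(u32 _wantState, u32 _timeoutUs)
{
   for(;;)
   {
      u32 reg = reg32_read(EX_PM_PWSTST_IVA2);

      if( (0u == (reg & EX_INTRANSITION_MASK)) &&
          (_wantState == (reg & EX_POWERSTATEST_MASK)) )
      {
         return 0;
      }

      if(_timeoutUs < 10u)
         return -1;

      udelay(10);
      _timeoutUs -= 10u;
   }
}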

Regarding the Linux clk_*() calls: Do you know where I can find the implementation of the clk_enable(iva2_clk) call? Just to see whether it does anything other than just setting that bit.
 

 

Attachments

  • dsp_c64.c.txt (13.2 KB)
Since you already found out that removing the dsp_close() call seems to improve the situation, could you please comment out line 933 in kmod/dev.c (dsp_poweroff()) and re-run the stresstest on your 3530?
I'll try that later today.

Regarding the Linux clk_*() calls: Do you know where I can find the implementation of the clk_enable(iva2_clk) call? Just to see whether it does anything other than just setting that bit.
It's probably this, which references probably this, which references probably this.
That's OMAP1; OMAP3 is heavily abstracted with lots of callbacks and things. It should be this structure and this enable function, but there might be more going on, like parent enabling...


I would still strongly recommend using the clk_* API, not just to make it cleaner and to have more consistent kernel state (you can see the state of all clocks from debugfs), but also for easier porting to future OMAPs.
 