Couldn't wait for my Pyra, so I bought an OMAP5432 devboard


hns

Well-Known Member
Joined
Dec 4, 2011
Messages
596
Location
Oberhaching
This is my main.c code, with some include statements and comments removed for brevity:

I think my next step is to either get an RPMsg demo up and running or start poking the TWL6037 PMIC. Not sure which one is easier :) Maybe I'll start with simply flipping a GPIO pin?
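If I go the GPIO route, I picture something like this on the M4 side. A rough sketch only: the register offsets are the standard OMAP4/5 GPIO ones from the TRM, but the GPIO5 base address and the assumption that the IPU's AMMU maps the L4 peripheral space 1:1 are things I'd still have to verify:
C:
#include <stdint.h>

/* Rough sketch, not verified on hardware. GPIO5 base is the OMAP5 L4PER
 * address from the TRM; the M4 (IPU) goes through its AMMU, so this
 * assumes the peripheral space is mapped 1:1. It also assumes Linux has
 * already enabled the GPIO5 interface clock. */
#define GPIO5_BASE        0x4805B000u
#define GPIO_OE           0x134u   /* output enable, 0 = output */
#define GPIO_CLEARDATAOUT 0x190u
#define GPIO_SETDATAOUT   0x194u

#define REG(base, off) (*(volatile uint32_t *)((base) + (off)))

static void gpio_blink_once(unsigned int pin)
{
    REG(GPIO5_BASE, GPIO_OE) &= ~(1u << pin);       /* make pin an output */
    REG(GPIO5_BASE, GPIO_SETDATAOUT) = 1u << pin;   /* drive high */
    for (volatile int i = 0; i < 1000000; i++)
        ;                                           /* crude delay */
    REG(GPIO5_BASE, GPIO_CLEARDATAOUT) = 1u << pin; /* drive low */
}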
Maybe you could try to send I2C commands to the LED controllers and make them do some fancy blinking :)
Or read out the Fuel Gauge chip.
What I have no idea about is how to prevent the kernel on the A15 cores from trying the same thing at the same moment...
 

Risca

Active Member
Joined
Sep 25, 2011
Messages
37
Maybe you could try to send I2C commands to the LED controllers and make them do some fancy blinking :)
Or read out the Fuel Gauge chip.
What I have no idea about is how to prevent the kernel on the A15 cores from trying the same thing at the same moment...
The OMAP4 introduced some kind of HW spinlock device for synchronization between processors, but I haven't looked into it much. This article seems to indicate that the I2C driver is already using this synchronization: https://lwn.net/Articles/425638/
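If that article is right, the kernel side of such a lock would look roughly like this. This is only a sketch against the mainline hwspinlock API, and the lock id is made up; both sides would have to agree on it:
C:
#include <linux/hwspinlock.h>
#include <linux/errno.h>

#define PMIC_I2C_LOCK_ID 0  /* hypothetical id; the M4 firmware must use the same one */

static int poke_shared_i2c(void)
{
    struct hwspinlock *lock;
    int ret;

    /* Claim a specific lock so the remote core can refer to it by id */
    lock = hwspin_lock_request_specific(PMIC_I2C_LOCK_ID);
    if (!lock)
        return -EBUSY;

    /* Busy-wait up to 100 ms for the other core to let go */
    ret = hwspin_lock_timeout(lock, 100);
    if (ret == 0) {
        /* ... safe to touch the shared I2C controller here ... */
        hwspin_unlock(lock);
    }

    hwspin_lock_free(lock);
    return ret;
}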
 

Risca

Active Member
Joined
Sep 25, 2011
Messages
37
Did some more experimenting today, but got stuck. The plan was to run one of TI's examples, but no matter what I tried, it died on an assertion in early boot. I got the exact same issue with both TI's example code (modified for OMAP5432) and their verification test code (written specifically for OMAP5432) :mad:

The error log goes something like this:
Code:
# tail -F /sys/kernel/debug/remoteproc/remoteproc1/trace0
[0][      0.000] 16 Resource entries at 0x3000
[0][      0.000] [t=0x00024cdf] xdc.runtime.Main: --> main:
[0][      0.000] [t=0x0003d19b] ti.ipc.transports.TransportRpmsgSetup: TransportRpmsgSetup_attach: procId=0
[0][      0.000] [t=0x00054c65] ti.ipc.transports.TransportRpmsg: TransportRpmsg_Instance_init: remoteProc: 0
[0][      0.000]
[0][      0.000] [t=0x0006ca9f] ti.ipc.rpmsg.RPMessage: --> RPMessage_init: (remoteProcId=0)
[0][      0.000] [t=0x00082145] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_init: Initialized!
[0][      0.000]
[0][      0.000] [t=0x000996bb] ti.ipc.family.omap54xx.VirtQueue: vring: 0 0x0 (0x3000)
[0][      0.000]
[0][      0.000] [t=0x000b0b47] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_init: Initialized!
[0][      0.000]
[0][      0.000] [t=0x000c7531] ti.ipc.family.omap54xx.VirtQueue: vring: 1 0x0 (0x3000)
[0][      0.000]
[0][      0.000] [t=0x000dfd43] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: VDEV status: 0x0
[0][      0.000]
[0][      0.000] [t=0x000f4659] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: Polling VDEV status...
[0][      0.000]
[0][      0.000] [t=0x002066df] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: VDEV status: 0x7
[0][      0.000]
[0][      0.000] [t=0x002203af] ti.ipc.family.omap54xx.VirtQueue: Passed VirtQueue_startup
[0][      0.000]
[0][      0.000] [t=0x00233b35] ti.ipc.rpmsg.RPMessage: <-- RPMessage_init
[0][      0.000] registering rpmsg-proto:rpmsg-proto service on 61 with HOST
[0][      0.000] [t=0x0024fc5d] xdc.runtime.Main: NameMap_sendMessage: HOST 53, port=61
[0][      0.000] [t=0x00261aeb] ti.ipc.rpmsg.RPMessage: --> RPMessage_send: (dstProc=0, dstEndpt=53, srcEndpt=61, data=0x8006df64, len=72
[0][      0.000] [t=0x0027e997] ti.ipc.family.omap54xx.VirtQueue: getAvailBuf vq: 0x800610d8 0 0 256 0x800610e8 0x1000
[0][      0.000]
[0][      0.000] [t=0x0029acc7] ti.sysbios.knl.Semaphore: ERROR: line 202: assertion failure: A_badContext: bad calling context. Must be called from a Task.
[0][      0.000] ti.sysbios.knl.Semaphore: line 202: assertion failure: A_badContext: bad calling context. Must be called from a Task.
[0][      0.000] xdc.runtime.Error.raise: terminating execution

From what I've been able to figure out by crawling through the IPC code base, this is the piece of code that triggers the assert (in ipc_3_40_01_08/packages/ti/ipc/rpmsg/RPMessage.c):
C:
/* Send to remote processor: */
do {
    token = VirtQueue_getAvailBuf(transport.virtQueue_toHost,
            (Void **)&msg, &length);
} while (token < 0 && Semaphore_pend(transport.semHandle_toHost,
                                     BIOS_WAIT_FOREVER));

I don't think there's a problem with this code in itself; the assert fires because VirtQueue_getAvailBuf() fails and the fallback Semaphore_pend() runs from main(), before BIOS_start(), which isn't a valid Task context. I believe there's a configuration issue somewhere. I've analyzed the VirtQueue code a bit and, from the logs above, it looks like it expects to find its virtio vring queue at address 0x0! From ipc_3_40_01_08/packages/ti/ipc/family/omap54xx/VirtQueue.c:
C:
switch (vq->id) {
    /* IPC transport vrings */
    case ID_SELF_TO_HOST:
    case ID_HOST_TO_SELF:
        vq->basePa = (UInt32)Resource_getVringDA(vq->id);
        Assert_isTrue(vq->basePa != NULL, NULL);

        result = Resource_physToVirt(vq->basePa, &(vq->baseVa));
        Assert_isTrue(result == Resource_S_SUCCESS, (Assert_Id)NULL);

        vringAddr = (Void *)vq->baseVa;
        break;
    default:
        GateHwi_delete(&vq->gateH);
        Memory_free(NULL, vq, sizeof(VirtQueue_Object));
        return (NULL);
}   

Log_print3(Diags_USER1,
        "vring: %d 0x%x (0x%x)\n", vq->id, (IArg)vringAddr,
        RP_MSG_RING_SIZE);
These Resource_*() functions are for parsing the resource table I've been talking about in my earlier posts. As a quick experiment I swapped Resource_physToVirt() for Resource_virtToPhys(), and it got me a bit further, but eventually hit another assert. Not sure if that's a correct fix or if I just got lucky that time. There might have been some changes on the Linux side that made this old IPC version incompatible.
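For context, each vring is described in the resource table by an entry like this (the layout is the remoteproc ABI from the kernel's include/linux/remoteproc.h, quoted from memory). The important detail is that da is a device address, i.e. an address as seen by the remote core itself, not necessarily a physical address:
C:
/* One vring descriptor from the Linux remoteproc resource table ABI.
 * 'da' is a *device address*: what the remote core uses to reach the
 * vring, which on an MMU-equipped core is effectively a virtual address. */
struct fw_rsc_vdev_vring {
    u32 da;       /* device address of the vring */
    u32 align;    /* alignment of the vring */
    u32 num;      /* number of buffers */
    u32 notifyid; /* unique id used for kicks/notifications */
    u32 reserved; /* reserved (newer kernels name this field 'pa') */
};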

I'll have to dig deeper another day. I'm off now for another week :) Just wanted to write down some of the progress at least.

PS.
The TI code infrastructure is a beast! You've got C code generation based on Java classes and templates, with some kind of homemade inheritance written/generated in C! It makes following the code a real pain :mad:
DS.
 

Risca

Active Member
Joined
Sep 25, 2011
Messages
37
Okay, did some last-minute testing before my vacation travels. Looks like the ping_rpmsg test firmware doesn't crash. However, now I'm stuck fixing their userspace application instead. Their build system didn't pick up the headers from my kernel and got the wrong value for AF_RPMSG. Luckily, their build system also allows overriding the value at build time :) That got me past the socket() call, right into a failed connect() xD
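For anyone following along, the userspace side boils down to something like this. A sketch only: the sockaddr_rpmsg layout is adapted from TI's out-of-tree rpmsg_socket.h, and the AF_RPMSG value must match whatever the running kernel defines, which is exactly the mismatch that bit me:
C:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_RPMSG
#define AF_RPMSG 41  /* must match the running kernel - check your headers! */
#endif

/* Adapted from TI's out-of-tree rpmsg_socket.h */
struct sockaddr_rpmsg {
    sa_family_t family;
    uint32_t vproc_id; /* index of the remote processor */
    uint32_t addr;     /* remote rpmsg endpoint */
};

int main(void)
{
    struct sockaddr_rpmsg addr;
    int fd = socket(AF_RPMSG, SOCK_SEQPACKET, 0);
    if (fd < 0) {
        perror("socket"); /* a wrong AF_RPMSG value fails here */
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.family = AF_RPMSG;
    addr.vproc_id = 0;  /* whichever index the remote core gets */
    addr.addr = 61;     /* the endpoint the firmware trace announced */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect"); /* ...and this is where I'm stuck now */
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}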

Now I really have to get moving if I wanna catch my train! See you in a week or so :)
 

Risca

Active Member
Joined
Sep 25, 2011
Messages
37
Well, this is almost embarrassing o_O How could this have gone through QA?

Here's a patch that fixes their MessageQ example code:
Diff:
diff -Naur ipc_3_40_01_08/packages/ti/ipc/family/omap54xx/VirtQueue.c{.orig,}
--- ipc_3_40_01_08/packages/ti/ipc/family/omap54xx/VirtQueue.c.orig    2022-07-25 22:44:49.029767723 +0200
+++ ipc_3_40_01_08/packages/ti/ipc/family/omap54xx/VirtQueue.c    2022-07-25 23:15:06.877920740 +0200
@@ -448,10 +448,10 @@
         /* IPC transport vrings */
         case ID_SELF_TO_HOST:
         case ID_HOST_TO_SELF:
-            vq->basePa = (UInt32)Resource_getVringDA(vq->id);
-            Assert_isTrue(vq->basePa != NULL, NULL);
+            vq->baseVa = (UInt32)Resource_getVringDA(vq->id);
+            Assert_isTrue(vq->baseVa != NULL, NULL);
 
-            result = Resource_physToVirt(vq->basePa, &(vq->baseVa));
+            result = Resource_virtToPhys(vq->baseVa, &(vq->basePa));
             Assert_isTrue(result == Resource_S_SUCCESS, (Assert_Id)NULL);
 
             vringAddr = (Void *)vq->baseVa;

In other words: Resource_getVringDA() returns the vring's device address, which is virtual from the M4's point of view, so it has to be translated virt-to-phys, not phys-to-virt. I'll continue with a more scaled-down, pure RPMsg example next. But that'll have to wait a few days - maybe a week.
 