Did some more experimenting today, but got stuck. The plan was to run one of TI's examples, but no matter what I tried, it died on an assertion in early boot. I got the exact same failure with both TI's example code (modified for OMAP5432) and their verification test code (written specifically for OMAP5432).
The error log goes something like this:
Code:
# tail -F /sys/kernel/debug/remoteproc/remoteproc1/trace0
[0][ 0.000] 16 Resource entries at 0x3000
[0][ 0.000] [t=0x00024cdf] xdc.runtime.Main: --> main:
[0][ 0.000] [t=0x0003d19b] ti.ipc.transports.TransportRpmsgSetup: TransportRpmsgSetup_attach: procId=0
[0][ 0.000] [t=0x00054c65] ti.ipc.transports.TransportRpmsg: TransportRpmsg_Instance_init: remoteProc: 0
[0][ 0.000]
[0][ 0.000] [t=0x0006ca9f] ti.ipc.rpmsg.RPMessage: --> RPMessage_init: (remoteProcId=0)
[0][ 0.000] [t=0x00082145] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_init: Initialized!
[0][ 0.000]
[0][ 0.000] [t=0x000996bb] ti.ipc.family.omap54xx.VirtQueue: vring: 0 0x0 (0x3000)
[0][ 0.000]
[0][ 0.000] [t=0x000b0b47] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_init: Initialized!
[0][ 0.000]
[0][ 0.000] [t=0x000c7531] ti.ipc.family.omap54xx.VirtQueue: vring: 1 0x0 (0x3000)
[0][ 0.000]
[0][ 0.000] [t=0x000dfd43] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: VDEV status: 0x0
[0][ 0.000]
[0][ 0.000] [t=0x000f4659] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: Polling VDEV status...
[0][ 0.000]
[0][ 0.000] [t=0x002066df] ti.ipc.family.omap54xx.VirtQueue: VirtQueue_startup: VDEV status: 0x7
[0][ 0.000]
[0][ 0.000] [t=0x002203af] ti.ipc.family.omap54xx.VirtQueue: Passed VirtQueue_startup
[0][ 0.000]
[0][ 0.000] [t=0x00233b35] ti.ipc.rpmsg.RPMessage: <-- RPMessage_init
[0][ 0.000] registering rpmsg-proto:rpmsg-proto service on 61 with HOST
[0][ 0.000] [t=0x0024fc5d] xdc.runtime.Main: NameMap_sendMessage: HOST 53, port=61
[0][ 0.000] [t=0x00261aeb] ti.ipc.rpmsg.RPMessage: --> RPMessage_send: (dstProc=0, dstEndpt=53, srcEndpt=61, data=0x8006df64, len=72
[0][ 0.000] [t=0x0027e997] ti.ipc.family.omap54xx.VirtQueue: getAvailBuf vq: 0x800610d8 0 0 256 0x800610e8 0x1000
[0][ 0.000]
[0][ 0.000] [t=0x0029acc7] ti.sysbios.knl.Semaphore: ERROR: line 202: assertion failure: A_badContext: bad calling context. Must be called from a Task.
[0][ 0.000] ti.sysbios.knl.Semaphore: line 202: assertion failure: A_badContext: bad calling context. Must be called from a Task.
[0][ 0.000] xdc.runtime.Error.raise: terminating execution
From what I've been able to figure out by crawling through the IPC code base, this is the piece of code that triggers the assert (in
ipc_3_40_01_08/packages/ti/ipc/rpmsg/RPMessage.c):
C:
/* Send to remote processor: */
do {
token = VirtQueue_getAvailBuf(transport.virtQueue_toHost,
(Void **)&msg, &length);
} while (token < 0 && Semaphore_pend(transport.semHandle_toHost,
BIOS_WAIT_FOREVER));
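The assert itself makes sense once you notice that the trace never gets past main: — in SYS/BIOS, a blocking Semaphore_pend() is only legal from Task context, so if this send path runs before BIOS_start() has handed control to the scheduler, A_badContext is exactly what you'd get. A minimal sketch of the rule (my own illustration, not TI's code):
C:
#include <xdc/std.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Semaphore.h>
#include <ti/sysbios/knl/Task.h>

Semaphore_Handle sem;

Void workerFxn(UArg arg0, UArg arg1)
{
    /* OK: we're in Task context here, so a blocking pend is allowed */
    Semaphore_pend(sem, BIOS_WAIT_FOREVER);
}

Int main()
{
    sem = Semaphore_create(0, NULL, NULL);
    Task_create(workerFxn, NULL, NULL);

    /* NOT OK: main() runs before the scheduler starts, so a blocking
     * pend here would raise the same A_badContext assert as in the
     * trace above */
    /* Semaphore_pend(sem, BIOS_WAIT_FOREVER); */

    BIOS_start();    /* never returns; the scheduler takes over */
    return (0);
}
And the pend is only reached because VirtQueue_getAvailBuf() keeps failing, which points at the vring setup rather than at this loop.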
I don't think there's a problem with this code in itself; I believe there's a configuration issue somewhere. I've analyzed the VirtQueue code a bit, and from the logs above it looks like it expects to find its virtio vring at address 0x0! From
ipc_3_40_01_08/packages/ti/ipc/family/omap54xx/VirtQueue.c:
C:
switch (vq->id) {
/* IPC transport vrings */
case ID_SELF_TO_HOST:
case ID_HOST_TO_SELF:
vq->basePa = (UInt32)Resource_getVringDA(vq->id);
Assert_isTrue(vq->basePa != NULL, NULL);
result = Resource_physToVirt(vq->basePa, &(vq->baseVa));
Assert_isTrue(result == Resource_S_SUCCESS, (Assert_Id)NULL);
vringAddr = (Void *)vq->baseVa;
break;
default:
GateHwi_delete(&vq->gateH);
Memory_free(NULL, vq, sizeof(VirtQueue_Object));
return (NULL);
}
Log_print3(Diags_USER1,
"vring: %d 0x%x (0x%x)\n", vq->id, (IArg)vringAddr,
RP_MSG_RING_SIZE);
These Resource_*() functions are for parsing the resource table I've been talking about in my earlier posts. As a quick experiment I swapped Resource_physToVirt() for Resource_virtToPhys(), and that got me a bit further before hitting another assert. I'm not sure whether that's a correct fix or whether I just got lucky that time. There might have been changes on the Linux side that made this old IPC version incompatible.
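For context, Resource_getVringDA() should hand back the da (device address) field of the matching vring entry in the resource table. On the Linux side that entry follows the remoteproc layout, roughly this (from my reading of the kernel's remoteproc headers; exact field names differ between kernel versions):
C:
/* One vring entry inside a vdev resource, as the Linux remoteproc core
 * sees it. The da field is presumably what Resource_getVringDA()
 * returns -- and what comes out as 0x0 in the trace above. */
struct fw_rsc_vdev_vring {
    u32 da;        /* device address of the vring */
    u32 align;     /* alignment of the vring */
    u32 num;       /* number of buffers in the vring */
    u32 notifyid;  /* unique id used for notifications ("kicks") */
    u32 reserved;
} __packed;
If that reading is right, the 0x0 means the table entry itself holds a zero da: either the kernel never patched the allocated vring address back in, or the firmware is reading a different copy of the table than the one the kernel updated. Either way it smells like the configuration/compatibility problem I suspected above rather than a bug in this code.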
I'll have to dig deeper another day. I'm off now for another week.
Just wanted to write down some of the progress at least.
PS.
The TI code infrastructure is a beast! You've got C code generation based on Java classes and templates, with some kind of homemade inheritance written/generated in C! It makes following the code a real pain.
DS.