Tests, tests, and more tests.


Thanks for telling us, ED!
I really appreciate this transparent development with all its ups and downs.
Next we will have a cyclic swing to the "up", I'm sure! :)

*Go, Pyra hardware team, you will finally make it!*
 
It's a pain how ARM protect their Architecture Reference Manuals (ARMs). I've found some descriptions of LPAE in the TRMs for the A15, but the detail is apparently in the ARMv7-A ARM, and while I've got a copy of the ARMv6 ARM from back in the day, it seems you need to sign up for all and sundry these days to get hold of it.
 
It's a bit like looking for a needle in a haystack.

A needle in the Linux stack/kernel, perhaps? It sounds like a software issue, especially if things are working reliably on an old kernel. Does the old kernel also freeze on reboots?
 
Since it's happening with the 2GB boards as well (although less frequently), it seems it's an issue that has to be resolved either way - i.e. going to 2GB is still going to be problematic unless the issue is fixed.
 
Idea... Likely worth every penny of free.

Simplify and compartmentalize.

Disconnect everything but the core essentials for the SoC board to run. Test. If it works, add one item. Repeat.

Likely already being tried.
 
Given that the problem reported is a lock-up after a random time period since power-on, my first thought is a race condition that prevents the CPU reading the next instruction from memory. As the Pyra uses an ARM CPU - an ARM Cortex-A15, which is an ARMv7-A core - I did a swift Internet search for "ARM 7A MMU race lockup" and got this page in a discussion forum: h tt ps:// w w w. embeddedrelated. com /showthread /comp.arch.embedded /187707-1.php [remove spaces to get the correct url - as a new poster, I think I'm prevented from posting urls]

I don't know if it is relevant:

"A specific example: if you are experiencing device lockups when enabling the MMU, try changing the device type attributes in the paging table for the peripheral region."

It is indeed beyond my competence to say whether the thread on that forum is relevant, but it makes for interesting reading. If nothing else, it underscores that mishandling of interrupts can cause lockups. Of course, my lack of knowledge in this area may mean this is entirely irrelevant.
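
In case it helps to picture what that quote means in practice, here is a rough, generic Linux-driver sketch - nothing Pyra-specific, and the base address and register offset are made up for illustration. The point is only that the "device type attributes" in the page tables for a peripheral region come from how the driver maps that region:

/* Rough illustration only (not Pyra code): in a Linux kernel driver,
 * the page-table attributes for a peripheral window depend on which
 * mapping call you use.  ioremap() maps the range with Device memory
 * attributes (no gathering/reordering by the core), which is the knob
 * the quoted advice about "device type attributes for the peripheral
 * region" is talking about.  Addresses below are hypothetical. */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/module.h>

#define EXAMPLE_PERIPH_BASE  0x48000000UL  /* hypothetical peripheral base */
#define EXAMPLE_PERIPH_SIZE  0x1000

static void __iomem *regs;

static int __init example_init(void)
{
	/* Device memory attributes - the usual choice for control registers. */
	regs = ioremap(EXAMPLE_PERIPH_BASE, EXAMPLE_PERIPH_SIZE);
	if (!regs)
		return -ENOMEM;

	/* If a lockup only shows up with one attribute setting and not
	 * another, that points at the kind of MMU/page-table issue the
	 * linked thread describes. */
	writel(0x1, regs + 0x10);  /* hypothetical register write */

	return 0;
}

static void __exit example_exit(void)
{
	iounmap(regs);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

If the platform provides them, ioremap_wc() or ioremap_cache() map the same physical range with Normal-memory attributes instead, so swapping the mapping type is one way to test that hypothesis.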

Regards,

Confuzzled
 
An idea is...
..maybe interference from the WLAN, Bluetooth or the 3G/LTE modem is at fault?

Maybe you can try the Pyra stress tests without WLAN activated?

Just an idea
 
Well. That sounds like a proper mystery. From your description it does seem like some sort of "buildup" triggers the effect, be it something electrical like a capacitor charging or a zener diode passing its threshold, or something software-side like a stack pointer running off the deep end or a kernel driver flipping the wrong bit in hardware. Whatever the solution, I expect it to fall into either the "AAAGH, why didn't we see this!?" or the "Well, no wonder we didn't think of this." category :D.

Best of luck in finding the gremlin!
 
After reading up on it the other day, I wonder how well the simulation simulated all of the capacitance effects caused by current passing between a power or signal line and ground with an insulator (the board) in between. But still, what I can't explain with any of these issues is why the CPU just freezes, unless perhaps it's the clock that goes awry (but I assume that's internal to the SoC in these sorts of chips, so it really should either work pretty much forever or not at all). I'd expect the Linux kernel to be running in a pretty tight loop in one cache or another, and so be able to spit out a 'memory error' kind of message.

Actually, I wonder if the caches need considering. Perhaps the active code in the kernel and the memtester can all fit inside a cache, so when you're testing that configuration, you're not actually hitting the RAM at all. Once you have a kernel with all kinds of modules inserted, some of those will probably need to live outside the cache, and it'll need to refill the cache from RAM. If that's going awry, perhaps because of RAM timings or something esoteric in the MMU config, it could be hard for the kernel to report before hitting a duff instruction, and especially if the exception table is out of cache at the time, it could have trouble bringing that back in from RAM. I don't know offhand if the zero page (where the exception vector tables are normally stored) is protected in some way.
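
To make the cache point concrete, here's a little userspace sketch of the sort of thing I mean - completely generic, and the sizes are just my assumptions (the Cortex-A15 in the OMAP5 has, as far as I know, a 2 MB L2, so a working set well beyond that forces the test out to the actual DRAM):

/* Hedged sketch: a trivial RAM-pattern test whose working set is much
 * larger than the L2 cache, so the read-back phase has to refill from
 * DRAM rather than being served from still-cached lines.  64 MB and
 * the pattern are arbitrary illustrative choices. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define WORKING_SET (64UL * 1024 * 1024)  /* far bigger than a 2 MB L2 */

int main(void)
{
	uint32_t *buf = malloc(WORKING_SET);
	size_t words = WORKING_SET / sizeof(uint32_t);

	if (!buf) {
		perror("malloc");
		return 1;
	}

	for (unsigned pass = 0; pass < 100; pass++) {
		uint32_t pattern = 0xA5A5A5A5u ^ pass;

		/* Fill the whole buffer... */
		for (size_t i = 0; i < words; i++)
			buf[i] = pattern ^ (uint32_t)i;

		/* ...then read it back and compare.  By the time we get back
		 * to index 0, those lines have long been evicted, so a
		 * mismatch here means a real round trip through the DRAM
		 * went wrong. */
		for (size_t i = 0; i < words; i++) {
			if (buf[i] != (pattern ^ (uint32_t)i)) {
				fprintf(stderr, "mismatch at word %zu (pass %u)\n", i, pass);
				free(buf);
				return 1;
			}
		}
		printf("pass %u ok\n", pass);
	}

	free(buf);
	return 0;
}

Of course this is basically what memtester does anyway - the point is only that the working set has to dwarf the cache before a clean pass actually says anything about the RAM itself.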
 
After reading up on it the other day, I wonder how well the simulation simulated all of the capacitance effects caused by current passing between a power or signal line and ground with an insulator (the board) in between.

Pretty good, these simulations are used for all modern systems these days.

Actually, I wonder if the caches need considering. Perhaps the active code in the kernel and the memtester can all fit inside a cache, so when you're testing that configuration, you're not actually hitting the RAM at all. Once you have a kernel with all kinds of modules inserted, some of those will probably need to live outside the cache, and it'll need to refill the cache from RAM. If that's going awry, perhaps because of RAM timings or something esoteric in the MMU config, it could be hard for the kernel to report before hitting a duff instruction, and especially if the exception table is out of cache at the time, it could have trouble bringing that back in from RAM. I don't know offhand if the zero page (where the exception vector tables are normally stored) is protected in some way.

Well, when the unit is sitting around idling, it also doesn't do much to the memory.
And it freezes randomly - up until it freezes, you can do anything, even run memory-intensive programs or compile.
 
7. 4GB RAM boards were working with only 2GB RAM enabled (as far as I remember - as that was the reason I ordered the RAM modules back then!)

Well, if that's true, it's pretty clear to me what to do.
Enable only 2 GB and ship them. As long as those are stable, it's fine.
Then you have money and can sleep well until someone figures out what the issue is.

I wonder why no one else mentioned that. It's obvious.
 
Hi there ED.

If I may ask, how far does the system boot until it freezes?

Do the freezes happen only when you run an OS with graphic UI?
Or do these also happen with a terminal based OS (e.g. arch-linux)?

If it's the former case, the reason might be a faulty GPU driver (or something graphics-related) that causes the lockup, since the system makes use of the GPU at this point.

From my recent experience (even though on a regular PC), this happened to me as well while I was trying to run a game through Wine.
It also caused total lockups with a 100% failure rate, even though the time until the lockup varied from 1 to 20 minutes.
In my case it is most likely related to the radeon driver that I use with an AMD graphics card.

Is the graphics driver used for the Pyra an open source driver or a proprietary blob?
If it is a blob, it might not see many changes over several kernel releases, so even an older kernel may cause this problem?

Anyway, it may or may not help here, but I'm posting this info just in case.
 
They've been playing with kernels without the GPU driver even being enabled. The GPU driver has worked on 2GB systems - well, as well as it can with its Xorg issues with rotation and such.

That said, the driver uses an open source kernel module with a proprietary blob.
 
Well, when the unit is sitting around idling, it also doesn't do much to the memory.
And it freezes randomly - up until it freezes, you can do anything, even run memory-intensive programs or compile.

Is there any chance that this is unrelated to the SoC or the RAM themselves? I.e. could there be a capacitor or voltage regulator in the power circuit somewhere that works until certain conditions are met (temp? current high/low?). Maybe you got a bad batch of -a- component that isn't working within spec X% of the time?

I know that there was a lot of talk about whether or not to use Tantalum capacitors - could be completely unrelated of course.

A lot of the features on these boards are -really- tiny. Finding the slightly bent needle in the needle stack could be frustrating - if there even is one.
 
Well, if that's true, it's pretty clear to me what to do.
Enable only 2 GB and ship them. As long as those are stable, it's fine.
Then you have money and can sleep well until someone figures out what the issue is.

I wonder why no one else mentioned that. It's obvious.

I've asked that question on the Kernel Mailing List.
Nikolaus told me they did try it several times without success.
No luck ...
 