And what do you think about this?
root@letux:~# echo $(cat /sys/firmware/devicetree/base/model)
Pyra-Handheld-V5.1
root@letux:~# cpufreq-info
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpufreq@vger.kernel.org, please.
analyzing CPU 0:
  driver: cpufreq-dt
  CPUs which run at the same hardware frequency: 0 1
  CPUs which need to have their frequency coordinated by software: 0 1
  maximum transition latency: 400 us.
  hardware limits: 250 MHz - 1.70 GHz
  available frequency steps: 250 MHz, 500 MHz, 750 MHz, 1000 MHz, 1.25 GHz, 1.50 GHz, 1.70 GHz
  available cpufreq governors: conservative, userspace, powersave, ondemand, performance
  current policy: frequency should be within 250 MHz and 1.70 GHz.
                  The governor "ondemand" may decide which speed to use within this range.
  current CPU frequency is 500 MHz (asserted by call to hardware).
  cpufreq stats: 250 MHz:22.26%, 500 MHz:30.90%, 750 MHz:15.35%, 1000 MHz:13.11%, 1.25 GHz:7.68%, 1.50 GHz:3.79%, 1.70 GHz:6.90% (96)
analyzing CPU 1:
  driver: cpufreq-dt
  CPUs which run at the same hardware frequency: 0 1
  CPUs which need to have their frequency coordinated by software: 0 1
  maximum transition latency: 400 us.
  hardware limits: 250 MHz - 1.70 GHz
  available frequency steps: 250 MHz, 500 MHz, 750 MHz, 1000 MHz, 1.25 GHz, 1.50 GHz, 1.70 GHz
  available cpufreq governors: conservative, userspace, powersave, ondemand, performance
  current policy: frequency should be within 250 MHz and 1.70 GHz.
                  The governor "ondemand" may decide which speed to use within this range.
  current CPU frequency is 500 MHz (asserted by call to hardware).
  cpufreq stats: 250 MHz:22.26%, 500 MHz:30.91%, 750 MHz:15.34%, 1000 MHz:13.11%, 1.25 GHz:7.68%, 1.50 GHz:3.79%, 1.70 GHz:6.90% (96)
root@letux:~# ./high-load -n
100% load stress test for 2 cores
Sat Jan 1 06:07:22 UTC 2000  44°  45°  44°  29°  4036mV  1000MHz
Sat Jan 1 06:07:22 UTC 2000  44°  45°  44°  29°  3992mV  1000MHz
Sat Jan 1 06:07:23 UTC 2000  44°  45°  45°  29°  3870mV  1700MHz
Sat Jan 1 06:07:25 UTC 2000  47°  45°  45°  29°  3859mV  1700MHz
Sat Jan 1 06:07:26 UTC 2000  67°  52°  53°  29°  3862mV  1700MHz
Sat Jan 1 06:07:27 UTC 2000  75°  58°  59°  31°  3857mV  1700MHz
Sat Jan 1 06:07:29 UTC 2000  79°  65°  63°  31°  3827mV  1700MHz
Sat Jan 1 06:07:30 UTC 2000  85°  68°  68°  31°  3843mV  1700MHz
Sat Jan 1 06:07:32 UTC 2000  87°  72°  71°  31°  3840mV  1700MHz
Sat Jan 1 06:07:33 UTC 2000  90°  74°  75°  32°  3830mV  1700MHz
Sat Jan 1 06:07:35 UTC 2000  90°  74°  75°  32°  3818mV  1700MHz
Sat Jan 1 06:07:36 UTC 2000  93°  79°  76°  32°  3808mV  1700MHz
Sat Jan 1 06:07:38 UTC 2000  95°  81°  78°  32°  3799mV  1700MHz
Sat Jan 1 06:07:39 UTC 2000  98°  83°  80°  33°  3811mV  1700MHz
Sat Jan 1 06:07:41 UTC 2000  100°  84°  83°  33°  3974mV  1000MHz
Sat Jan 1 06:07:42 UTC 2000  103°  86°  84°  33°  4034mV  250MHz
Sat Jan 1 06:07:45 UTC 2000  81°  82°  77°  31°  3826mV  1700MHz
Sat Jan 1 06:07:46 UTC 2000  73°  74°  72°  31°  3805mV  1700MHz
^Ckill 2659 2660
root@letux:~#
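For reference, a minimal sketch of what such a load-plus-monitoring loop can look like. The actual ./high-load script is not shown here, so the thermal zone layout, the power_supply node and the exact column order are assumptions and will likely differ from the real script:

#!/bin/sh
# Hypothetical sketch of a 2-core load test with 1 s status output.
# Assumptions: temperatures come from /sys/class/thermal, voltage from
# some power_supply node, frequency from the cpufreq sysfs interface.
yes > /dev/null & PID1=$!	# keep core 0 busy
yes > /dev/null & PID2=$!	# keep core 1 busy
trap 'kill $PID1 $PID2; exit' INT

while sleep 1; do
	line="$(date -u)"
	for z in /sys/class/thermal/thermal_zone*/temp; do
		line="$line $(( $(cat "$z") / 1000 ))°"	# millidegrees -> °C
	done
	v="$(cat /sys/class/power_supply/*/voltage_now 2>/dev/null | head -n1)"
	[ -n "$v" ] && line="$line $(( v / 1000 ))mV"	# µV -> mV
	f="$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq)"
	line="$line $(( f / 1000 ))MHz"	# kHz -> MHz
	echo "$line"
done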
A realistic explanation is that our DDR3 is a little more sensitive to noise during 1.5 GHz operation than the EVM's, and hence we get the problems that don't show up on the EVM if its memory is more robust.
I had ruled out the DDR3 during my tests in November because the EMIF runs at the same speed, independently of the ARM core clock.
Therefore I thought there could not be an influence or problem. But if higher ARM load also puts more noise on the DDR3 wires...
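One way to check that hypothesis could be to cap the policy below the suspect OPPs and repeat the load test, e.g. (values are kHz; CPUs 0 and 1 share one policy, so setting cpu0 limits both):

# keep ondemand, but never go above 1 GHz
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
./high-load -n
# or with cpufrequtils:
cpufreq-set -c 0 --max 1000MHz

If the problems disappear with the 1.5/1.7 GHz steps excluded and come back when they are re-enabled, that would support the noise explanation.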
BR, Nikolaus