Overclocking? Linear increase?


Halcyon

Just a thought that came to mind the other day as I was overclocking for my PSX emulator. So forgive me, I'm going to have to use some rather subjective terms because I'm just not scientific enough when it comes to this stuff :(


So as far as I know, overclocking means changing the frequency of (I guess) the CPU, and a higher frequency equates to being able to run more complicated code or programs faster. If that is true (I think at least in a loose sense it is), and I had a program that counted to 100 and required exactly 100 seconds to do it at 1 Hz, would it be able to count to 100 in 50 seconds at 2 Hz?


So if the above is true, it would seem there is a linear relationship between the time to complete the program and the frequency of the CPU:


t = 100 / f    (t in seconds, f in Hz)
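

Put as code, the assumption I'm making looks something like this (just a toy sketch, treating the work as perfectly CPU-bound with one count per clock cycle; the function name is made up):

    # Toy model of the question above: a fixed amount of work (100 counts,
    # one per clock cycle) divided by the clock rate gives the time taken.
    def time_to_finish(work_cycles, clock_hz):
        return work_cycles / clock_hz

    print(time_to_finish(100, 1))   # 100.0 seconds at 1 Hz
    print(time_to_finish(100, 2))   # 50.0 seconds at 2 Hz, i.e. linear scaling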


When I was a teenager, I remember processors seemed to only come out in values like 666 MHz (one of my favorites :) ) or 733 MHz. I never saw anything like 420 MHz or 69 MHz. Is this mainly an industry convention, or is there really some significant threshold one crosses at multiples of 33 for some reason?


So to wrap it up in a neat little bow: should I only stop at significant values when overclocking my Pandora, or is 698 MHz just a little bit less than 700 MHz?
 
The CPU isn't the only thing that defines the performance of the system; you also have I/O bottlenecks, memory speed limits, SGX performance limits and the like.


But if you are CPU-bottlenecked, then you can probably say it is linear. Still, 2 MHz won't have any noticeable impact.
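

To put a rough number on "linear only when CPU-bound", here is an Amdahl's-law-style sketch; the 60% CPU-bound figure is invented purely for illustration:

    # Only the CPU-bound fraction of the work speeds up with the clock;
    # memory, SGX and I/O time stay roughly the same.
    def overall_speedup(clock_ratio, cpu_bound_fraction):
        other = 1 - cpu_bound_fraction
        return 1 / (other + cpu_bound_fraction / clock_ratio)

    print(overall_speedup(800 / 500, 1.0))   # fully CPU-bound: the full 1.6x
    print(overall_speedup(800 / 500, 0.6))   # 60% CPU-bound: only about 1.29x
    print(overall_speedup(802 / 800, 0.6))   # an extra 2 MHz: barely above 1.0x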


From my latest benchmark on a game, going from 500 to 800 MHz gave the drawing about a 1/3 performance boost.
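

Running that backwards with the same rough model (take it with a grain of salt, since the 1/3 figure is approximate):

    # Working back from the 500 -> 800 MHz benchmark: if drawing got ~1/3 faster
    # from a 1.6x clock, roughly how much of it was actually CPU-bound?
    clock_ratio = 800 / 500
    speedup = 4 / 3
    # speedup = 1 / ((1 - f) + f / clock_ratio), solved for f:
    f = (1 - 1 / speedup) / (1 - 1 / clock_ratio)
    print(f"implied CPU-bound fraction: {f:.0%}")   # roughly two thirds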
 
So as far as I know, overclocking means changing the frequency of (I guess) the CPU, and a higher frequency equates to being able to run more complicated code or programs faster. If that is true (I think at least in a loose sense it is), and I had a program that counted to 100 and required exactly 100 seconds to do it at 1 Hz, would it be able to count to 100 in 50 seconds at 2 Hz?

Virtually. There are some timings that CANNOT change, e.g. the amount of time for devices to settle, buses to initialise, the speed of certain parts of the bus, the speed the graphics portion can update the screen at, etc. But yes, basically an overclocked CPU just uses higher frequencies, which means it can do the same calculation in a shorter time for the component that is overclocked (it literally does one "step" per cycle - it's just that certain things take, say, 10 steps - but they still go faster overall if you speed up the time each step takes).


However, this is sometimes (usually?) at the sacrifice of some assurance, because the computer can't just "become faster" for free: it gets nearer to the point where it would simply fail, and it generates more heat, because higher frequency = more changes of voltage = more power required (in physical terms, to "push" the voltage up and then "pull" it back down again, both of which take work) = more heat generated. It's a bit like moving your saw faster when chopping wood - it cuts through the wood faster but gets hotter and breaks more easily.
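

The "more power, more heat" part follows the usual rule of thumb for dynamic power in CMOS logic, roughly P ≈ C × V² × f. A tiny sketch (the capacitance and voltage numbers are made-up placeholders; overclocking often needs a voltage bump too, which is why the effect is worse than linear):

    # Rule-of-thumb dynamic power: P ~ C * V^2 * f.
    # All the numbers below are invented, just to show the trend.
    def dynamic_power(capacitance, voltage, freq_hz):
        return capacitance * voltage ** 2 * freq_hz

    base = dynamic_power(1e-9, 1.1, 500e6)   # hypothetical part at stock 500 MHz
    oc   = dynamic_power(1e-9, 1.3, 800e6)   # same part overclocked and overvolted

    print(f"relative power/heat: {oc / base:.2f}x")   # about 2.2x in this made-up case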


Not everything is that simple, and some things JUST DON'T WORK at faster speeds, usually where buses intersect - e.g. your CPU talks to your PCI bus etc., which means they would all have to be overclocked too, or you don't see an overall gain unless something is "CPU-bound" (i.e. only running on the CPU). And no matter how you overclock, if the program waits for a Vsync or another hardware signal or timing, overclocking will not necessarily make that happen "faster".
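

To illustrate the Vsync point, here is a toy model (the cycle counts and clock speeds are invented): the frame can't finish before the next vertical sync, however fast the CPU part gets.

    # Toy model: frame time is whichever is longer, the CPU work or the
    # wait for the next vertical sync on a 60 Hz screen.
    VSYNC_INTERVAL_S = 1 / 60

    def frame_time(cpu_cycles_per_frame, clock_hz):
        cpu_time = cpu_cycles_per_frame / clock_hz
        return max(cpu_time, VSYNC_INTERVAL_S)   # vsync sets a hard floor

    for mhz in (500, 600, 700, 800):
        t = frame_time(12e6, mhz * 1e6)          # 12M cycles of work per frame (made up)
        print(f"{mhz} MHz -> {t * 1000:.1f} ms per frame")
    # Once the CPU part drops under ~16.7 ms, extra MHz stops helping.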


The multiples of 33? I don't think there was any great technical reason for that, but I might be wrong - I think it just comes from PCs originally using 33 MHz buses at one point (they kind of stuck there for a while because of the technology) and then using multipliers (so 33 becomes 66, 100, 133, 166, 200, etc.) to speed things up. It doesn't divide nicely or anything like that, so I don't think there's a reason for it exactly, except convention and simplicity based on the history of the PC chipset.


When overclocking, it makes little difference - every 0.1 MHz over and above specification may cause your device to fail (even if that just means a crash) or overheat, and you have no idea where that point is without testing. There's no particular reason to stop at a certain multiple, I shouldn't think.
 
The speed needs to be driven by a clock, literally a piece of hardware that ticks a known number of times per second, usually a piezoelectric (quartz) crystal in a small package: apply electricity and it vibrates at a known rate (e.g. 33 MHz), and those vibrations drive the clock. That's why early computers came in multiples of 33 or 66, because that was the common clock. I think later there were clocks of 66, 90, 100, 133, etc. It all depended on what the CPU could handle and what the manufacturer wanted to give you.


To get a 66 MHz PC from a 33 MHz clock, the CPU had an internal multiplier: for every clock tick, it would do two steps. Modern PCs will do 4 or 8 steps for every clock tick, and some might even do more. They've also got faster clocks, upwards of 400, 800, and even 1000 MHz. So an 800 MHz clock with a multiplier of 4 will give you a 3.2 GHz CPU. That 666 MHz is just a 333 MHz clock times 2, or maybe a 133 MHz clock times 5.
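

Spelled out, the arithmetic is just effective speed = base clock × multiplier. A quick sketch of those same examples (the real base clocks were fractions like 133.33 MHz, which is where the slightly odd "666" totals come from):

    # Effective CPU speed = base clock * multiplier, using the examples above.
    def cpu_speed_mhz(base_clock_mhz, multiplier):
        return base_clock_mhz * multiplier

    print(cpu_speed_mhz(33, 2))    # ~66 MHz, the old 66 MHz parts
    print(cpu_speed_mhz(133, 5))   # ~665 MHz, i.e. the "666 MHz" class of chips
    print(cpu_speed_mhz(800, 4))   # 3200 MHz, the 3.2 GHz example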


So two ways to increase the speed of a computer are to increase its clock speed and to increase its multiplier. If your 66 MHz computer had a 33 MHz clock and a 2x multiplier, you could increase it to 99 MHz by changing the multiplier to 3. Early computers didn't allow this, but on newer computers these built-in multipliers are variable, so you can adjust them as you want. The problem is that because the CPU is trying to do more every clock tick, if it doesn't finish everything before the next tick you get a monumental system failure: at best it slows down to catch up, at worst the system burns out. The Pandora is fortunate in that it just freezes when this happens. And of course increasing the actual clock speed is another way to increase speed, but again, if the CPU doesn't finish what it was doing before the next clock tick, you get a crash.


Anywho, that's the basic history behind why early computers had such "weird" numbers: 33 MHz was a very cheap and common clock. Later they used 133 MHz clocks, and now I think 800 MHz is the most common.


The Pandora is a lot more complex than this. I'm not exactly certain what the OMAP does, but it's much more customizable. It's got a clock which can be driven faster or slower, and it's also got a multiplier that can go up and down. When you set the CPU speed, it figures out the best clock and multiplier to get that speed. Somehow. It's magic in silicon. No need to worry about choosing "significant" values; just let the processor figure out what it needs.
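

I don't know the OMAP's actual PLL logic, but the "figures out the best clock and multiplier" step could look vaguely like this toy search - the reference clock value and the multiplier/divider ranges here are invented purely for illustration:

    # Toy version of "pick a multiplier/divider for a requested speed".
    # NOT how the OMAP really does it; reference clock and ranges are made up.
    REF_CLOCK_MHZ = 26

    def closest_setting(target_mhz):
        best = None
        for mult in range(1, 100):
            for div in range(1, 17):
                speed = REF_CLOCK_MHZ * mult / div
                error = abs(speed - target_mhz)
                if best is None or error < best[0]:
                    best = (error, mult, div, speed)
        return best

    error, mult, div, speed = closest_setting(698)
    print(f"x{mult} / {div} -> {speed:.1f} MHz (off by {error:.1f} MHz)")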
 