levi
Still fresh, damnit!
I don't think that's as true of the Genesis as it is of something like the Saturn. In some ways it's very much built like an Amiga, but I'm not sure the Amiga has dual-ported VRAM, so CPU accesses to the shared chip RAM can get delayed until the horizontal flyback at the very minimum, and maybe until the vertical flyback. The Genesis's VDP has its own dedicated VRAM, so the CPU can keep reading and writing its own memory at full tilt while the video hardware is drawing the frame. The Amiga, I know, has dedicated coprocessors to handle things like blitting, and I'd guess the Genesis has some fairly advanced ASICs to handle sprites and scrolling.
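For the curious, here's roughly what that looks like from the 68000's side, written from memory, so treat the addresses and bit layout as a sketch rather than gospel (it also assumes the VDP's auto-increment register was already set to 2 at init): VRAM isn't even in the CPU's address space; you point the VDP at an address through its control port and then stream words through its data port.

    /* Sketch: writing to Genesis VRAM goes through the VDP's ports,
       because the VDP owns that memory, not the 68000. */
    #include <stdint.h>

    #define VDP_DATA ((volatile uint16_t *)0xC00000)
    #define VDP_CTRL ((volatile uint16_t *)0xC00004)

    static void vram_write(uint16_t addr, const uint16_t *src, uint16_t count)
    {
        /* Two control-port writes select "VRAM write" at addr */
        *VDP_CTRL = 0x4000 | (addr & 0x3FFF);
        *VDP_CTRL = (addr >> 14) & 0x0003;
        while (count--)
            *VDP_DATA = *src++;   /* VDP auto-increments the VRAM address */
    }

Which is the whole point: the video chip can scan its own RAM for tiles and sprites all day without stealing cycles from the 68000's work RAM.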
I'm slightly surprised that Samurai Shodown in that video didn't become unplayably fast in that comparison. Anyone who's trained themselves to play fighting games, and particularly to learn any sort of combo, has learned frame timing, and if it used to be a stable 30fps and is now somewhere near 60fps (or at worst, flopping between 30 and 60fps unpredictably) then it'd become unplayable. That said, I couldn't actually spot any difference between the two videos of that game, even focusing on the barrels (though I'm not sure this computer can actually display 60fps YouTube videos; it may be downscaling them to 30fps automatically). And Samurai Shodown isn't a game I've ever actually trained at, so I'd have a hard time telling you whether it was the game or me failing to execute a particular move at any given moment. But the animation didn't immediately speed up, so either it runs at 60fps natively, or it's timing animation on something other than the CPU clock tick modulated with the vsync tock. And in my experience, people just didn't use external timers much back then, when simply targeting 16.67ms per frame worked so well all the time.
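To make that concrete, here's a minimal sketch of the kind of frame-locked loop I mean, with made-up function names rather than anything from a real devkit: the logic just assumes every pass through the loop is exactly one 16.67ms frame, so movement is a fixed number of pixels per iteration.

    /* Frame-locked loop: one logic step per vertical blank */
    while (running) {
        wait_for_vblank();   /* block until the next vsync (hypothetical call) */
        read_pads();         /* poll the controllers */
        player_x += 2;       /* "2 pixels per frame" really means 120 px/s at 60fps */
        update_sprites();    /* push the new positions to the sprite table */
    }

Gate everything on vsync like that and overclocking the CPU changes nothing by itself; but if the stock game only managed to finish its work every other vblank (30fps) and the faster CPU lets it finish every vblank, every one of those per-frame constants suddenly gets applied twice as often and the whole game runs at double speed.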
These days, with AI subsystems kicking in at unpredictable times and the workload depending heavily on which direction you're looking in, games have to build animation timings and movement deltas out of constants that get multiplied by however much time has ticked past since you started calculating where everything is. That's hard to do, because a lot of the time you don't know how long a frame will take until you've at least done some of the work, and if you set the multiplier wrongly and your target frame ticks past, you can either sit and wait for the next vsync, and your animation will be a little bit wonky, or you can switch framebuffers halfway through so you're at least on the right animation frame by the end of the rendered frame and can start working on the next one early, which is why we have tearing these days. Genesis games are just simpler: if you measure your execution loop to take under 16ms, or somewhere between 16 and 33ms, you can set your movement constants to suit 60fps or 30fps operation respectively. You can see that bike racing game in the video was never designed to run at 60fps, so when the CPU gets overclocked and can suddenly run faster, it runs too fast. It may still be playable, but that's beside the point here.
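And a rough sketch of the delta-time approach described above, again with hypothetical names rather than any particular engine's API: you measure how much wall-clock time actually passed and scale the movement constants by it, so the result comes out the same whether a frame took 16ms or 33ms.

    /* Delta-time loop: movement scaled by elapsed time, not by frame count */
    double prev = now_seconds();        /* made-up timer helper */
    while (running) {
        double t  = now_seconds();
        double dt = t - prev;           /* seconds since the last update */
        prev = t;
        player_x += 120.0 * dt;         /* 120 px/s regardless of frame rate */
        render();
        swap_buffers();                 /* may or may not wait for vsync */
    }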