Story of Mel. (Machine Code)


Typically for imperative languages, optimizing compilation 'only' improves the constant factors, not the complexity. Hand-optimizing 'only' improves that constant a bit further. And nowadays, those constant factors tend to be relatively small, and the abundance of CPU power and memory makes it easy to assume that hand-optimization is no longer a good idea.
I dunno, I can't think of anything beyond constant factors that differs between DeSmuME and DraStic; they're both more or less imperative (DeSmuME is C++, but for the most part it's pretty C-like, with singleton classes and some templates), and a lot of what's in the latter I'd consider hand-optimization. YMMV, I guess. (To be fair, they're also not functionally identical, so it's not a totally valid comparison.)
Sure, even today hand-optimization can still be a good idea, in particular for things like emulators (at least the actual emulation core, not the GUI code etc.), kernels, drivers, standard libraries, and so on. If memcpy() can become 20% faster by using some manually tweaked instructions instead of whatever gcc produces when you write a simple copy loop in C, then of course it is still worth the effort to do that.
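
To make that concrete, here is a minimal sketch of the kind of manual tweak meant here: a plain byte-by-byte copy loop next to a version that moves machine-word-sized chunks at a time. The function names and the word-at-a-time trick are just illustrative assumptions on my part; real libc memcpy() implementations go much further (SIMD, alignment fixups, size-dependent dispatch).

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* The "simple copy loop in C": one byte per iteration. */
  void copy_naive(void *dst, const void *src, size_t n)
  {
      unsigned char *d = dst;
      const unsigned char *s = src;
      while (n--)
          *d++ = *s++;
  }

  /* Hand-tuned flavour: move machine-word-sized chunks, then mop up the tail.
     memcpy() on a single word is the portable way to express an unaligned
     load/store in C; the compiler turns it into one instruction on most targets. */
  void copy_words(void *dst, const void *src, size_t n)
  {
      unsigned char *d = dst;
      const unsigned char *s = src;
      while (n >= sizeof(uintptr_t)) {
          uintptr_t w;
          memcpy(&w, s, sizeof w);   /* one word load  */
          memcpy(d, &w, sizeof w);   /* one word store */
          s += sizeof w;
          d += sizeof w;
          n -= sizeof w;
      }
      while (n--)                    /* remaining tail bytes */
          *d++ = *s++;
  }

Whether the second version (or a real hand-written assembly routine) actually buys you anything like 20% depends entirely on the target and on what the compiler already does with the naive loop; the point is only that this is the level at which such tweaking happens.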

But this is no longer the case for normal applications, like a blackjack game. No sane programmer would write dialog boxes or other user-interaction code from scratch in a low-level language, let alone assembly, let alone raw machine code. For things like user input/output, nobody really cares about speed anymore: whether your text editor can process 1,000 keystrokes per second or 10 million makes no difference at all in practice (humans can't type that fast anyway).

All I was trying to say is: compilers have improved, so the extra speedup you can expect from hand-optimization has decreased. Also hardware has improved.

So for example, a couple of decades ago, you might have had some task X that took

  • 100 seconds when written in a high-level language with compiler optimizations disabled (say gcc -O0)
  • 50 seconds with optimizing compilation (say gcc -O3)
  • 2 seconds hand-optimized
while today the same task X could very well take

  • 20 milliseconds when written in a high-level language with compiler optimizations disabled
  • 2 milliseconds with optimizing compilation
  • 1 millisecond hand-optimized
Note how the margin shrank: in the old numbers the hand-optimized version was 25 times faster than the compiler's -O3 output, while in the new ones it is only twice as fast. And of course that could in some cases still be a significant difference, but most likely the difference won't matter anymore, and the disadvantage of having non-portable, harder-to-maintain code will outweigh the advantage of the speedup.

The bigger problem is that the number of poor programmers has dramatically increased, so what you could also get is that task X now takes

  • 30 seconds because it is written poorly, with a very suboptimal time complexity (e.g. it could have been log-linear, but because of stupidity or laziness the algorithm that was actually implemented runs in cubic time; see the sketch below)
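
As a made-up illustration of that complexity gap (the task, the arrays, and the function names below are all hypothetical), consider checking whether any value occurs in all three of three arrays: the lazy triple nested loop is cubic, while sorting two of the arrays once and binary-searching them is log-linear.

  #include <stdlib.h>

  /* Cubic: compare every triple of elements, O(a*b*c). */
  int common_cubic(const int *x, size_t a, const int *y, size_t b,
                   const int *z, size_t c)
  {
      for (size_t i = 0; i < a; i++)
          for (size_t j = 0; j < b; j++)
              for (size_t k = 0; k < c; k++)
                  if (x[i] == y[j] && y[j] == z[k])
                      return 1;
      return 0;
  }

  static int cmp_int(const void *p, const void *q)
  {
      int a = *(const int *)p, b = *(const int *)q;
      return (a > b) - (a < b);
  }

  /* Log-linear: sort y and z once, then binary-search them for each x[i],
     roughly O((a + b + c) * log(b + c)) overall. */
  int common_loglinear(const int *x, size_t a, int *y, size_t b,
                       int *z, size_t c)
  {
      qsort(y, b, sizeof *y, cmp_int);
      qsort(z, c, sizeof *z, cmp_int);
      for (size_t i = 0; i < a; i++)
          if (bsearch(&x[i], y, b, sizeof *y, cmp_int) &&
              bsearch(&x[i], z, c, sizeof *z, cmp_int))
              return 1;
      return 0;
  }

No amount of -O3 or hand-tuned assembly applied to the cubic version will close that gap once the inputs get large, which is why this kind of mistake dwarfs anything the compiler or a hand-optimizer can win back.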
 