What Does Optimized Even Mean?


1magus
I know what the definition is, but I see all these notes for a new emulator or some new piece of software that always say NOT FULLY OPTIMIZED YET. What has to be done to make it optimized, and what does that mean for emulators and such?
 
In layman's terms (which is all I know), it means that it doesn't yet take advantage of certain strengths the specific hardware has to offer that could make the software run more efficiently (more quickly, with fewer glitches or bugs, requiring fewer CPU cycles to run at full speed, etc.).
 
"There are two (or more) ways of doing some things in this program. In some cases I used the easier to code or easier to read way of doing it that usually isn't the fastest running. If it turns out that we need more speed, these parts can be rewritten to be faster. They won't be as easy to read or understand, and it will take me longer to do, but it can be done"
 
As a very simple example, if you had something like:

Code:
a = b * 4
Multiplication operations can often take a lot longer than additions, so it might be faster to do:

Code:
c = b + b
a = c + c
This is less intuitive to write but on some hardware it actually does the same thing faster.
It won't save a lot of time just once, but if this is done for every pixel on a screen 60 times per second it can start to add up.

Usually the compiler can do really simple optimisations like this, so the real ones are a bit more complicated.

(Actually the above operation would probably be optimised to a bitshift, but this is only a simple example)
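In other words, something like this (just an illustration; the compiler would normally do it by itself):

Code:
a = b << 2;  /* shift left by 2 = multiply by 4 */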
 
Actually, here's a real world example that I don't think is automatically optimized.
If you want to get or set a specific pixel (x,y) from a 320x240 bitmap, you need to access the memory at "y*320 + x" : take the y, multiply by the width, and add x. Easy to read, easy to understand, easy to fix if the width changes. That multiplication can be slow, but you can rewrite it as "(y << 8) + (y << 6) + x". Two bit shifts and an add is faster on a PC than a single multiplication but isn't as intuitive as the multiplication.
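In C it would look roughly like this (just a sketch; the bitmap name and types are made up for illustration):

Code:
/* Sketch only: names and types invented for illustration. */
unsigned char bitmap[240*320];

/* easy to read */
unsigned char get_pixel(int x, int y)
{
    return bitmap[y*320 + x];
}

/* same address using shifts: y*320 = y*256 + y*64 */
unsigned char get_pixel_shifts(int x, int y)
{
    return bitmap[(y << 8) + (y << 6) + x];
}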
That's just x86 optimization. On our ARM processor, multiplication takes the same clock cycles as an addition, but you should get the point, yeah?
 
Turning a multiplication or division into something else, even if it's several shifts and adds or a multiply and shift (or both), is a very simple optimization for a compiler since it's very context independent.

WizardStan said:
That's just x86 optimization. On our ARM processor, multiplication takes the same clock cycles as an addition, but you should get the point, yeah?

Crap, who told you that? The only ARM processor I know of where this holds true is Cortex-M3, and that's probably because of its short pipeline. Some others can issue them every cycle but still have more than single-cycle latency. In particular, 32x32 multiplies on Cortex-A8 have a throughput of 2 cycles and a latency of 5 cycles. 16x32 and dual 16x16 multiplies can issue every cycle but still have a 5 cycle latency (there is an exception for mul followed by mla, which triggers a fast forwarding path, so DSP-like operations should work pretty well).

Multiplication time on x86 varies tremendously, of course; even among the current crop actually being used much (Core 2, i7, K8, "K10", Atom), a 32x32 multiply can be anywhere from 1 to 2 throughput cycles and 3 to 5 latency cycles. i7 in particular has a very fast 64x64 multiplier, and K8+ is as good at 32x32 (Pentium 4 had really terrible performance, but is anyone using those?). I wouldn't recommend converting a multiply on x86 to anything more than a single bit-shift, even on P4, which also has poor shift performance. That kind of optimization increases register pressure even if it saves on worst case latency (which I doubt it would).

More on topic..

To me "optimization" refers to anything that is done to a design to reduce some metric and hopefully perform better in some way - in this context usually running faster and occasionally being smaller. This should also mean doing so w/o otherwise changing the functionality of the design, although it might change other characteristics like code legibility and implementation time.

The term is actually a misnomer because you're almost never making something optimal in any kind of verifiable way. You're just making it "closer" to optimal. Mathematically, optimization actually refers to processes which make something optimal. But no one has really thought of a better term so we'll all continue to use this one. What it does mean is that nothing is really "completely optimized." When someone says that it just means that they're no longer going to be working to improve it.

Many things can be considered optimization. A design can be optimized before any code is even written. Code can be optimized at various levels that range from redesigning entire algorithms to restructuring code, performing "micro" optimizations that a compiler would miss, taking advantage of a platform specific capability, or hand writing assembly code and taking it on yourself to avoid stalls as best as possible and beat the compiler (for a lot of platforms and compilers it's still really easy to beat the compiler).

There's a great stigma around "premature optimization" that comes from a lot of people who are probably doomed to producing inefficient code. Of course, there's still a very valid point in this - "get things working before you make it faster" makes a lot of sense in cases where things can actually be partitioned this way. But the reason people have this mindset is that they think of optimization as nothing more than a transformation you apply to code after it is written - if you think of things this way, you're liable to end up spending a lot of effort on optimization that yields very diminishing returns, because you can't remove bottlenecks that are baked into the design and are difficult for you to even see. It can be like correcting spelling errors in "Atlanta Nights."

Part of this comes from the belief that programmers can't know where time is spent without profiling. This will vary by programmer, but for a lot of programs it isn't at all hard to have a good intuitive feel for it, and simple analysis ahead of time can be done to show where things can be expected. Sometimes there'll be some other unknown bottleneck that profiling will remove (like say, SDL taking 60% of your CPU time doing something with the screen), and you should of course do some kind of profiling (not necessarily the kind that's done for you), but that doesn't really detract from my point.

Optimization is really quite a fiddly thing that involves a lot of critical thought and knowledge of a platform. It isn't usually a straightforward process that anyone can easily go through with the same level of difficulty. WizardStan's comment implies that writing "less optimal" code is merely a choice you make ahead of time to go with something neater or easier to write. It's really something that comes about after thinking about a problem a lot, analyzing different things, trying different things, and studying different things. A lot of the time optimization can result in smaller, cleaner, and more elegant solutions that may have even been easier to write if you knew the answer ahead of time.
 
Exophase said:
WizardStan said:
That's just x86 optimization. On our ARM processor, multiplication takes the same clock cycles as an addition, but you should get the point, yeah?

Crap, who told you that? The only ARM processor I know of where this holds true is Cortex-M3, and that's probably because of its short pipeline.
I forget exactly. It may have just been a theoretical example when I was learning the differences between CISC and RISC architectures 15 years ago that I've been holding onto.
 
Exophase said:
Multiplication time on x86 varies tremendously, of course; even among the current crop actually being used much (Core 2, i7, K8, "K10", Atom), a 32x32 multiply can be anywhere from 1 to 2 throughput cycles and 3 to 5 latency cycles. i7 in particular has a very fast 64x64 multiplier, and K8+ is as good at 32x32 (Pentium 4 had really terrible performance, but is anyone using those?). I wouldn't recommend converting a multiply on x86 to anything more than a single bit-shift, even on P4, which also has poor shift performance. That kind of optimization increases register pressure even if it saves on worst case latency (which I doubt it would).
For WizardStan's example:

Code:
mov $320,%eax     # eax = 320
mul y             # edx:eax = y * 320 (clobbers EDX)
add x,%eax        # eax = y*320 + x
vs

Code:
mov y,%eax             # eax = y
lea (%eax,%eax,4),%eax # eax = y*5
shl $6,%eax            # eax = y*5*64 = y*320
add x,%eax             # eax = y*320 + x
Based on register availability, I might go with the latter. x86 mul is annoying since it needs EAX and EDX.


My latest project is trying to optimize this function in mupen64plus:

Code:
/* MIPS C.EQ.S: bit 23 of FCR31 (0x800000) is the FPU compare flag.
   Set it if *source == *target, clear it if not equal or if either is NaN. */
void c_eq_s(float *source,float *target)
{
  if (isnan(*source) || isnan(*target)) {FCR31&=~0x800000;return;}
  FCR31 = *source==*target ? FCR31|0x800000 : FCR31&~0x800000;
}
GCC 4.3 generates this enormous mess:

Code:
c_eq_s:
	subl	$12, %esp
	movl	16(%esp), %eax
	flds	(%eax)
	fsts	4(%esp)
	fstps	(%esp)
	call	__isnanf
	testl	%eax, %eax
	je	.L42
.L33:
	andl	$-8388609, FCR31
	addl	$12, %esp
	ret
	.p2align 4,,10
	.p2align 3
.L42:
	movl	20(%esp), %edx
	flds	(%edx)
	fsts	8(%esp)
	fstps	(%esp)
	call	__isnanf
	testl	%eax, %eax
	jne	.L33
	flds	4(%esp)
	flds	8(%esp)
	fcomip	%st(1), %st
	fstp	%st(0)
	jne	.L41
	movl	FCR31, %eax
	orl	$8388608, %eax
.L38:
	movl	%eax, FCR31
	addl	$12, %esp
	ret
	.p2align 4,,10
	.p2align 3
.L41:
	movl	FCR31, %eax
	andl	$-8388609, %eax
	jmp	.L38
	.size	c_eq_s, .-c_eq_s
I plan to use this instead:

Code:
mov arg1,%eax      # eax = pointer to *source
mov arg2,%ecx      # ecx = pointer to *target
flds (%eax)        # push *source
flds (%ecx)        # push *target: st(0)=*target, st(1)=*source
mov FCR31,%eax
mov $0x800000,%ecx
or %ecx,%eax       # eax = FCR31 with the compare bit set
xor %eax,%ecx      # ecx = FCR31 with the compare bit clear
fucomip %st(1)     # compare and pop; PF=1 if unordered (NaN)
fstp %st(0)        # pop the remaining value
cmovne %ecx,%eax   # not equal -> take the cleared value
cmovp %ecx,%eax    # NaN (parity set) -> take the cleared value
mov %eax,FCR31
ret
ARM code will be similar, but use the V flag to test for NaNs, and can use BIC instead of OR+XOR.
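Something like this, maybe (untested sketch only, UAL/VFP syntax; FCR31 addressing and register usage are simplified assumptions):

Code:
@ Untested sketch; flag usage per the description above.
c_eq_s:
        vldr     s0, [r0]            @ *source
        vldr     s1, [r1]            @ *target
        ldr      r2, =FCR31
        ldr      r3, [r2]
        vcmp.f32 s0, s1
        vmrs     APSR_nzcv, FPSCR    @ copy FP flags; V=1 if unordered (NaN)
        orr      r3, r3, #0x800000   @ assume equal: set the flag
        bicne    r3, r3, #0x800000   @ clear it if not equal
        bicvs    r3, r3, #0x800000   @ clear it if either operand was NaN
        str      r3, [r2]
        bx       lr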
 
Ari64, http://siyobik.info/index.php?module=x86&id=138

Look at the form of imul that takes an immediate operand and has separate source and destination r/m operands. Unless you need a long result, and you usually don't, this form is the way to go and doesn't suffer from the things you mention.
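e.g. for the 320*y case above, a sketch in the same AT&T syntax as Ari64's snippets:

Code:
imull $320,y,%eax    # eax = y * 320, EDX left alone
addl  x,%eax         # eax = y*320 + x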
 
Exophase said:
There's a great stigma around "premature optimization" that comes from a lot of people who are probably doomed to producing inefficient code. Of course, there's still a very valid point in this - "get things working before you make it faster" makes a lot of sense in cases where things can actually be partitioned this way. But the reason people have this mindset is that they think of optimization as nothing more than a transformation you apply to code after it is written - if you think of things this way, you're liable to end up spending a lot of effort on optimization that yields very diminishing returns, because you can't remove bottlenecks that are baked into the design and are difficult for you to even see. It can be like correcting spelling errors in "Atlanta Nights."
Never seen that movie, but I've seen plenty of 'optimised' code which I wish had not been. The only thing worse than code written by people without adequate understanding of the task is the same code 'optimised' by those same people.

The biggest 'early stage' optimisation I normally tolerate is developers trying to get a maximal understanding of the task/task domain, and getting the hang of the architecture/dev platform at hand, be that in separate test suites. That normally suffices for them not to make stupid choices in design methodology/decisions. And this is before we even touch on subjects like 'project volatility', where specs change halfway to deadlines, etc.

From there on it's profiling, profiling, and again profiling. Just because even when you're god almighty in a certain task domain, the platform in all its components was not created by you, and where you think this 3rd-party lib does X, it actually might do Y.

Part of this comes from the belief that programmers can't know where time is spent without profiling. This will vary by programmer, but for a lot of programs it isn't at all hard to have a good intuitive feel for it, and simple analysis ahead of time can be done to show where things can be expected. Sometimes there'll be some other unknown bottleneck that profiling will remove (like say, SDL taking 60% of your CPU time doing something with the screen), and you should of course do some kind of profiling (not necessarily the kind that's done for you), but that doesn't really detract from my point.
Actually it does, as it places profiling in its rightful place, which you, for some reason, are not willing to acknowledge.

Optimization is really quite a fiddly thing that involves a lot of critical thought and knowledge of a platform. It isn't usually a straightforward process that anyone can easily go through with the same level of difficulty. WizardStan's comment implies that writing "less optimal" code is merely a choice you make ahead of time to go with something neater or easier to write. It's really something that comes about after thinking about a problem a lot, analyzing different things, trying different things, and studying different things. A lot of the time optimization can result in smaller, cleaner, and more elegant solutions that may have even been easier to write if you knew the answer ahead of time.
I think what you're referring to above is plain proficiency in the task domain. Or what I call 'people not making idiotic decisions early on in the project'.
 
darkblu said:
Actually it does, as it places profiling in its rightful place, which you, for some reason, are not willing to acknowledge.

Come again? What am I not willing to acknowledge? That there's a place for profiling? Or that profiling only belongs in certain contexts? What are you saying exactly?

Yeah, I've seen a lot of code that was wrought to hell because someone tried "optimizing" it in a haphazard or unnecessary fashion, possibly from an approach of writing everything in a given obfuscated way regardless of benefit. I've also seen some code - in particular, emulators - that people desperately try to optimize and think that they've hit a wall in terms of performance because the design is limited. This goes well beyond "not making idiotic decisions", the design limitations at work here are usually much more complex and sometimes subtle than that. Get back to me when you see emulators like VBA or desmume finally become competitive performance-wise after the proper "non-premature" optimization is performed.
 
I tend to use "premature optimisation" as a specific descriptor for optimisations that I think are (or were) a bad idea.
I'd never use it as some kind of blanket definition where any optimisations done before you reach 1.0 are always bad.

In any case, there is clearly a difference between design optimisation and implementation optimisation.
The former should almost never be considered premature, the latter can be, but is not always.

Part of this comes from the belief that programmers can't know where time is spent without profiling. This will vary by programmer, but for a lot of programs it isn't at all hard to have a good intuitive feel for it, and simple analysis ahead of time can be done to show where things can be expected. Sometimes there'll be some other unknown bottleneck that profiling will remove (like say, SDL taking 60% of your CPU time doing something with the screen), and you should of course do some kind of profiling (not necessarily the kind that's done for you), but that doesn't really detract from my point.
There's a difference between estimating and knowing.
Unless your program is really simple, you can't know where the time is spent without profiling.
Sorry if it's a trivial semantic point, but it could be what annoyed darkblu? (Whose comment did seem a little... sharp)
 
TylerAW said:
I know what the definition is, but I see all these notes for a new emulator or some new piece of software that always say NOT FULLY OPTIMIZED YET. What has to be done to make it optimized, and what does that mean for emulators and such?

I hope you're happy Tyler. You got the devs all math-crazy. I love that you guys are like, "Well, to put it simply *MATH*"

No, to put it simply is like, the dev wanted to build a chair. He did so. And while you *can* sit on the chair, it's not the best chair to sit in that way. But he's working on it. To make it the best damn chair he can muster.
 
darkblu said:
Exophase said:
It can be like correcting spelling errors in "Atlanta Nights."
Never seen that movie
It's not a movie, it's a book. It's a pretty funny story: there was a company that was a vanity press - they'd publish stuff if you paid them a fee - but they were trying to con people into thinking they were a "traditional" publisher (i.e. one where the author actually gets paid instead of having to pay). So a bunch of science fiction authors deliberately wrote the worst book they could, to prove that this company would publish anything if they were paid.

http://en.wikipedia.org/wiki/Atlanta_Nights
 
"Optimisation" is just a general term for making a program faster / better / more efficient (in terms of CPU use, RAM, etc.)

It can be many things. Low-level assembler optimisation is really at the extreme end of optimisation. As people have said, the compiler mostly does that sort of thing for you and can be extremely clever. The C programming language has a "register" keyword that, if the programmer was smart, they could use to ask for certain variables to be kept in processor registers instead of RAM and thus gain a real speedup. Today, the compiler is at least as good as any programmer at deciding that kind of thing and, thus, most C programmers never touch the "register" keyword any more, and most compilers largely ignore it if the programmer does use it, because they know they can do a better job.
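For example, an old-style hint like this is something the compiler now works out entirely by itself (purely illustrative code, the names are made up):

Code:
/* Illustrative only: modern compilers allocate registers themselves
   and are free to ignore the hint entirely. */
int sum(const int *values, int count)
{
    register int total = 0;   /* ask for 'total' to live in a register */
    int i;
    for (i = 0; i < count; i++)
        total += values[i];
    return total;
}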

But compilers aren't 100% intelligent, so often a slight restructuring of the code, or some low-level fiddling to speed up "critical paths" (the bits of the program that need to run fastest because they are called most often), can achieve things the compiler would never find on its own. It's almost impossible to explain to a compiler exactly what the numbers it's handling represent, so the human can often do some "intellectual" optimisation instead of just "organisational" optimisation (i.e. actually looking at the problem and its constraints in an entirely different way, as opposed to just blindly trying to speed up a calculation when you can't be sure what the input/result represent).

Then there is profiling, which is running the program itself on the machine you intend your users to run it on and recording lots and lots of statistics: how often the screen is redrawn, how much time the computer spends in a certain routine, and so on. Armed with that information, you can recompile the program and the compiler can do a better job of organising the code now that it knows that certain routines are rarely called, that some routines MUST be made faster while others hardly matter, etc. This is quite simple to do, but the benefits only really hold for a single architecture (i.e. you couldn't "profile" some C code intended for Windows PCs and expect anywhere near the same speedup from that C code compiled for a GP2X - the hardware, software, and usage patterns are so variable that the statistics will never be perfect, but they can still help optimise a bit).

These, though, are all "after-the-fact" optimisations - ones that are done when your program is working roughly how it should and you're just trying to make it go faster. You *can* see enormous speedups but as you go through profiling, manual optimisation and assembly-level optimisation, your gains for the effort you put in get less and less and less. You can quite literally spend years optimising something in assembler only to get a tiny millisecond speedup.

The most important part, though, is to program efficiently in the first place. If you have a thousand images in your game and you load them all from disk every time they need to be drawn, your code will ALWAYS be slower than that of a programmer who designs the software to load the images into RAM once and use them directly from memory each time. That sort of optimisation can't be done by a compiler - it's too high-level and requires far too much insight into how the code works for a computer to understand it. Additionally, if you want to draw 1000 sprites on a 10,000 by 10,000 tile board, you could store each sprite's location inside the sprite and call "draw_sprite" 1000 times, or store the sprites on the map tiles and walk all 100,000,000 tiles looking for them. Or you could loop over only the part of the board that's actually showing, doing a tiny check on the roughly 10,000 on-screen tiles and calling draw_sprite once for each visible sprite. It's all about design.
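A rough sketch of that last idea (the tile size, the board query, and draw_sprite are all made up for illustration, not taken from a real engine):

Code:
/* Sketch only: walk just the tiles that can appear in the viewport
   instead of the whole 10,000 x 10,000 board. */
#define TILE_SIZE 32

extern int  sprite_on_tile(int tx, int ty);           /* 0 if tile is empty */
extern void draw_sprite(int id, int sx, int sy);

void draw_visible(int cam_x, int cam_y, int screen_w, int screen_h)
{
    int tx, ty;
    for (ty = cam_y / TILE_SIZE; ty <= (cam_y + screen_h) / TILE_SIZE; ty++)
        for (tx = cam_x / TILE_SIZE; tx <= (cam_x + screen_w) / TILE_SIZE; tx++) {
            int id = sprite_on_tile(tx, ty);
            if (id)
                draw_sprite(id, tx * TILE_SIZE - cam_x, ty * TILE_SIZE - cam_y);
        }
}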

So optimisation is really too general to just say what it is. There are thousands of ways of speeding up code. If a program is well designed, it will make the biggest difference - everything else is for when the code is as efficient as it can be made and it just needs to be "tuned" to a particular use, computer, etc.
 
Aninhumer said:
There's a difference between estimating and knowing.
Unless your program is really simple, you can't know where the time is spent without profiling.
Sorry if it's a trivial semantic point, but it could be what annoyed darkblu? (Whose comment did seem a little... sharp)

Right, I do think profiling is important. Although I almost never use a conventional profiler; I tend to do application-specific profiling that better captures the sort of metrics that are relevant to the program. Not saying everyone should do that, of course; the stuff I deal with just tends not to mesh well with standard profiling techniques >_>

The reason why I'm talking a lot about "optimizing designs" is because this is the basic conventional wisdom I've seen promoted, a lot:

1) Write design documentation in a high level manner that details end level results
2) Implement design in code that is as straightforward, quick to implement, and simple as humanly possible
3) Fix bugs, et al, get it "good"
4) Perform basic benchmark to see if it's "fast enough"
5) If it's not "fast enough", profile to see where time is being spent
6) Optimize slow areas to be faster until the program is fast enough

And of course, on the surface this makes a lot of sense. The problem is that it just doesn't always work, and in some domains it's not a path you can really take.

For example, consider the lower end of the embedded space, especially applications that do a very specific thing on old hardware, have to function properly for a long time, and have a very high cost of design failure (i.e. the sort of stuff I do, aerospace). Here we're looking at hard real-time systems with very limited part choices; you know off the bat that your processor will only be so fast, but moreover, you have to figure out just how little speed you can cope with, because even at these low speeds the clock greatly affects the power and thermal envelope. So you have to have an extensive understanding of the performance characteristics of your program before it's even written for the target platform. Of course, ideally you'd write as much as you can for simulation, but projects just don't have the time and money for you to do all this software design before any hardware design is made.

There's also another, even more insidious issue than the overall clock budget. It's well and good to go with an X MHz clock, design the rest of your system logic around it, and have very high confidence that you won't exceed so-and-so % utilization of it. But then you come to situations where some hardware component doesn't work unless you write to it at least once every X µs, and X µs ends up being a pretty small number of clock cycles at Y MHz, even on a pretty simple but still cached processor. And you have interrupts flying that you just can't disable or your design won't work. So you have to account for these things and know that the worst case of your interrupts will fit within that window. THEN of course you have to VERY closely profile (here I'm talking about getting cycle logs of what the processor is doing under worst case - thank god that this platform gave me that ;p) and optimize the ever loving shit out of it if it's too close. So absolutely, profiling is a very important part of the optimization cycle, I'd never deny that, but you can't just rely on it to save you in the end under all circumstances.

Yes, sometimes something else in the system can be screwing you. In my embedded cases, I've actually done a lot of work with no OS and little or even no system library or compiler library code brought in (in GCC, that means using __builtin_* if you want access to something as a single instruction on the CPU, before dropping down to inline ASM, a lot of the time). There's a tradeoff, but this is for simple/small enough things with really hard real-time characteristics where I strongly prefer full control, and it works out OK. Naturally, most people won't be working under these circumstances, but they definitely exist, way more than people realize.
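For instance (illustrative only, the function name is made up), a count-leading-zeros can be had without any library call or inline asm:

Code:
/* __builtin_clz is a GCC builtin; on ARMv5+ it compiles to a single CLZ
   instruction. Behaviour is undefined for x == 0. */
unsigned highest_set_bit(unsigned x)
{
    return 31u - (unsigned)__builtin_clz(x);
}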

The point I was trying to make is that yes, some external thing might be fouling up your results, and might be something you only realize after profiling (like me realizing SDL is taking like 9ms to update a frame on my netbook, which is actually great news because it means on Pandora where I can blast to the framebuffer I won't deal with that and performance may be better than it seems). You have to profile to see this, no doubt. The thing is, you also often have to know beforehand where you WON'T be seeing these things. And when it comes to designs that are performance critical from the start, and not decided as an afterthought to improve market value of the program, you won't be bringing a lot of unknowns into your most critical inner loops. Or rather, you shouldn't be. If you are making calls here that could potentially be big unknowns then they should be written in a way where you're fully capable of dropping in and replacing them, and not relying on them as part of your critical design path. Fortunately, with the way most programs work you're not relying on someone else for all of your critical performance, and if you are, you should get to know that something else early on (I guess in the popular advice flow we'd call this "premature profiling") so you know what you're working with and where the risks lie.
 
In a nutshell, optimizing code is spending more effort on it to make it run faster or better.
 
Addendum: the phrase "premature optimization is the root of all evil" probably works best when "premature" is defined to mean prior to knowing what the performance characteristics of your code and the performance requirements of your project are. If you "know" both things well enough, then design considerations may not be premature even if they're made before full completion of the code.

Sadly, people almost always take it to mean any deviation from the dev cycle I gave. And "optimization" there refers to writing code a certain way, even if it's during the first pass, not necessarily improving existing code - technically this is incorrect, but it still applies.
 