Aninhumer said:
There's a difference between estimating and knowing.
Unless your program is really simple, you can't know where the time is spent without profiling.
Sorry if it's a trivial semantic point, but it could be what annoyed darkblu? (Whose comment did seem a little... sharp)
Right, I do think profiling is important. Although I almost never use a conventional profiler; I tend to do application-specific profiling that better captures the sort of metrics that are actually relevant to the application. Not saying everyone should do that, of course; the stuff I deal with just tends not to mesh well with standard profiling techniques >_>
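To give an idea of what I mean by application-specific profiling, here's a rough C sketch (names like frame_stats and now_us are just made up for illustration): instead of asking a profiler which functions are hot, you record the per-frame numbers you actually care about, especially the worst case.

```c
/* Rough sketch of application-specific profiling: track the per-frame metric
 * you actually care about (worst case, average, count) instead of relying on
 * a function-level profiler. Names here are made up for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

typedef struct {
    uint64_t min_us, max_us, total_us, samples;
} frame_stats;

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

static void record_sample(frame_stats *s, uint64_t elapsed_us)
{
    if (s->samples == 0 || elapsed_us < s->min_us) s->min_us = elapsed_us;
    if (elapsed_us > s->max_us) s->max_us = elapsed_us;
    s->total_us += elapsed_us;
    s->samples++;
}

int main(void)
{
    frame_stats stats = {0};

    for (int frame = 0; frame < 100; frame++) {
        uint64_t start = now_us();
        /* ... do one frame of work here ... */
        record_sample(&stats, now_us() - start);
    }

    printf("frames: %llu  min: %llu us  max: %llu us  avg: %llu us\n",
           (unsigned long long)stats.samples,
           (unsigned long long)stats.min_us,
           (unsigned long long)stats.max_us,
           (unsigned long long)(stats.total_us / stats.samples));
    return 0;
}
```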
The reason I'm talking so much about "optimizing designs" is that this is the basic conventional wisdom I've seen promoted, a lot:
1) Write design documentation in a high level manner that details end level results
2) Implement design in code that is as straightforward, quick to implement, and simple as humanly possible
3) Fix bugs, etc., get it "good"
4) Perform a basic benchmark to see if it's "fast enough" (see the sketch just after this list)
5) If it's not "fast enough", profile to see where time is being spent
6) Optimize slow areas to be faster until the program is fast enough
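To be concrete about step 4: the "basic benchmark" is usually nothing fancier than wall-clock timing around the whole workload, something like this sketch (do_work() is just a placeholder for whatever the program actually does):

```c
/* Hypothetical step-4 benchmark: time the whole workload with a monotonic
 * clock and compare against the budget. do_work() stands in for the real program. */
#include <stdio.h>
#include <time.h>

static void do_work(void)
{
    /* ... the actual application workload goes here ... */
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do_work();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    printf("workload took %.3f s\n", elapsed);

    /* Step 5 only kicks in if this number isn't "fast enough",
     * e.g. rebuild with -pg and run gprof to find the hot spots. */
    return 0;
}
```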
And of course, on the surface this makes a lot of sense. The problem is that it just doesn't always work, and in some domains it's not a path you can really take.
For example, consider the lower end of the embedded space, especially applications that do one very specific thing on old hardware, have to keep working correctly for a long time, and carry a very high cost of design failure (i.e., the sort of stuff I do, aerospace). Here we're looking at hard real time systems with very limited part choices. You know off the bat that your processor will only be so fast, but more than that, you have to figure out just how slow a clock you can live with, because even at these low speeds the clock rate has a big effect on the power and thermal envelope. So you need an extensive understanding of the performance characteristics of your program on the target platform before it's even written. Ideally you'd write as much as you can for simulation first, but projects just don't have the time and money for you to do all that software design before any hardware design is made.
There's also another, even more insidious issue than the overall clock budget. It's all well and good to go with an X MHz clock, design the rest of your system logic around it, and have very high confidence that you won't exceed such-and-such % utilization of it. But then you run into situations where some hardware component doesn't work unless consecutive writes to it are less than X µs apart, and X ends up being a pretty small number of clock cycles at Y MHz on a pretty simple but still cached processor. And you have interrupts flying that you just can't disable or your design won't work. So you have to account for these things and know that the worst case of your interrupts will fit within that window. THEN of course you have to VERY closely profile (here I'm talking about getting cycle logs of what the processor is doing under the worst case - thank god this platform gave me that ;p) and optimize the ever loving shit out of it if it's too close. So absolutely, profiling is a very important part of the optimization cycle, I'd never deny that, but you can't just rely on it to save you in the end under all circumstances.
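To give a flavor of what that kind of worst-case checking can look like (heavily simplified; read_cycle_counter(), the device write, and the numbers are all stand-ins, not a real API):

```c
/* Simplified sketch of tracking the worst-case gap between consecutive writes
 * to a device that must be serviced at least every X us. read_cycle_counter()
 * stands in for whatever free-running cycle counter the platform exposes;
 * all names and numbers are made up. */
#include <stdint.h>

extern uint32_t read_cycle_counter(void);   /* platform-specific, assumed */
extern void write_to_device(void);

#define MAX_GAP_CYCLES  480u   /* e.g. a 10 us limit at 48 MHz -- made-up numbers */

static uint32_t last_write;    /* prime this at init so the first gap isn't bogus */
static uint32_t worst_gap;

void service_device(void)
{
    uint32_t now = read_cycle_counter();
    uint32_t gap = now - last_write;   /* unsigned math handles counter wraparound */

    /* Interrupts we can't disable stretch this gap, which is exactly what we
     * need to see: the worst observed gap has to stay under MAX_GAP_CYCLES. */
    if (gap > worst_gap)
        worst_gap = gap;               /* log or inspect this against the limit */

    write_to_device();
    last_write = read_cycle_counter();
}
```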
Yes, sometimes something else in the system can be screwing you. In my embedded cases, I've actually done a lot of work with no OS and little or even no system library or compiler library code brought in (in GCC, that means using __builtin_* when you want access to something as a CPU instruction... before dropping down to inline ASM, a lot of the time). There's a tradeoff, but this is for things simple and small enough, with really hard real time characteristics, where I strongly prefer full control, and it works out OK. Naturally, most people won't be working under these circumstances, but they definitely exist, way more than people realize.
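For example, a couple of the builtins I mean; GCC will usually turn these into a single instruction when the target has one, without dragging in extra library code:

```c
/* Two real GCC __builtin_* intrinsics as examples of getting at CPU
 * instructions without inline ASM or library calls. */
#include <stdint.h>

static inline int leading_zeros(uint32_t x)
{
    /* __builtin_clz is undefined for 0, so guard it. Maps to a
     * count-leading-zeros instruction on targets that have one (e.g. CLZ on ARM). */
    return x ? __builtin_clz(x) : 32;
}

static inline int bits_set(uint32_t x)
{
    /* Population count: a single instruction where the target has one,
     * a small fallback sequence otherwise. */
    return __builtin_popcount(x);
}
```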
The point I was trying to make is that yes, some external thing might be fouling up your results, and it might be something you only realize after profiling (like me realizing SDL is taking around 9 ms to update a frame on my netbook - which is actually great news, because it means that on Pandora, where I can blast to the framebuffer directly, I won't be dealing with that and performance may be better than it seems). You have to profile to see this, no doubt.
The thing is, you also often have to know beforehand where you WON'T be seeing these things. When a design is performance critical from the start, and not as an afterthought to improve the market value of the program, you won't be bringing a lot of unknowns into your most critical inner loops. Or rather, you shouldn't be. If you are making calls there that could turn out to be big unknowns, they should be wrapped in a way that lets you drop in a replacement, not relied on as part of your critical design path. Fortunately, with the way most programs work you're not relying on someone else's code for all of your critical performance, and if you are, you should get to know that code early on (I guess in the popular advice flow we'd call this "premature profiling") so you know what you're working with and where the risks lie.
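On the SDL thing above: measuring that kind of cost doesn't need a full profiler, just a timestamp around the update call. Roughly like this sketch against the SDL 1.2 API (illustrative only, not my exact code):

```c
/* Illustrative sketch (SDL 1.2 API): time how long the screen update itself
 * takes, separately from the rest of the frame. */
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
    if (!screen)
        return 1;

    Uint32 worst = 0;
    for (int frame = 0; frame < 300; frame++) {
        /* ... render into screen->pixels here ... */

        Uint32 before = SDL_GetTicks();
        SDL_Flip(screen);                 /* the call being measured */
        Uint32 cost = SDL_GetTicks() - before;
        if (cost > worst)
            worst = cost;
    }

    printf("worst screen update: %u ms\n", (unsigned)worst);
    SDL_Quit();
    return 0;
}
```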