darkblu said:
i wouldn't be surprised if the geode we discussed yesterday had the fastest (clocks-to-retire) idiv among current x86's, by virtue of its cyrix heritage ; )
Hm.
http://7-cpu.com/cpu/GeodeLX-lat.txt
39-42 cycles for 32-bit, 100% unpipelined. Having a full-blown hardware implementation on ARM isn't going to save you enough to be worth it, even if it's as fast per-cycle as the one on Cortex-M3 (which is of course a silly comparison since the clock speed is an order of magnitude lower). Better yet, a software or bit-stepped hardware option would be guaranteed to scale down in execution time for the smaller operand sizes, like the x86 timings do. A hardware instruction may not do any such range detection. You could of course use software for the 8- or 16-bit ones only.
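To make the scaling concrete, here's a minimal sketch of what I mean (my own illustration, not anyone's actual implementation): a bit-at-a-time unsigned divide that skips the dividend's leading zero bits, so small values take proportionally fewer iterations, much like the data-dependent x86 timings.

#include <stdint.h>

/* den must be nonzero; returns num / den, remainder via *rem. */
static uint32_t udiv32(uint32_t num, uint32_t den, uint32_t *rem)
{
    uint32_t quo = 0, r = 0;
    int bits = 32;

    if (num == 0) {
        bits = 0;                       /* nothing to do for a zero dividend */
    } else {
        while (!(num & 0x80000000u)) {  /* skip the leading zero bits...        */
            num <<= 1;
            bits--;                     /* ...so small dividends iterate less   */
        }
    }

    for (int i = 0; i < bits; i++) {
        r = (r << 1) | (num >> 31);     /* shift next dividend bit into remainder */
        num <<= 1;
        quo <<= 1;
        if (r >= den) {                 /* restoring step: subtract when it fits  */
            r -= den;
            quo |= 1;
        }
    }
    if (rem)
        *rem = r;
    return quo;
}

A bit-stepped hardware divider can do exactly the same normalization, which is presumably where the x86 range-dependent timings come from.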
Laurent said:
You know perfectly well the devil is in the details; emulating a given instruction in HW or by means of translation is not a big deal. If you have a very fast and accurate way to detect SMC on x86, you will easily get a very well-paid job at VMWare.
We don't agree on the accuracy thing, I'm afraid. I place it above speed, and for any commercial solution that should be the case, or you'll be dead when your Photoshop (which seems to use SMC a lot, according to Derek Bruening) keeps on crashing for customers.
I agree with everything else you wrote.
Please don't get the wrong idea: nothing in my suggestions compromised accuracy, only performance - so long as we're not talking about timing, which of course no commercial virtualization solution makes any attempt whatsoever to model. How much performance gets hit will depend on the nature of the SMC patterns; even if something performs SMC a lot, it might not interleave execution in a pattern that incurs a large penalty to handle. Of course "a lot" is subjective - I've seen all kinds of horrible SMC patterns on GBA, but even though x86 is cache coherent, self-modifying code isn't the free ride it was in the past. It won't have the outrageous expense of the simplest recompiler solutions, but it'll still likely cost you at least dozens of cycles each time you do it.
I suggest you read about Transmeta's recompiler if you're curious about some strategies. Some of them do use hardware assistance, for instance a small number of pages capable of fine-grained protection, but a lot of the techniques can be applied in software.
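To give an idea of the software-only side, here's a rough sketch (again my own illustration, not Transmeta's actual mechanism, and all names are made up) of faking fine-grained protection: keep a per-page bitmap of which 64-byte chunks actually hold translated guest code, write-protect the whole page, and in the fault handler only invalidate when the write lands in a marked chunk. Plain data writes that merely share the page just get an unprotect/reprotect.

#include <stdint.h>

#define PAGE_SIZE        4096
#define CHUNK_SIZE       64
#define CHUNKS_PER_PAGE  (PAGE_SIZE / CHUNK_SIZE)   /* 64 chunks -> one 64-bit bitmap */

struct page_code_map {
    uint64_t code_chunks;   /* bit i set => chunk i contains translated code */
};

/* Called when a block is translated: mark the guest bytes it came from.
 * off/len are within one page and len > 0. */
static void mark_code_range(struct page_code_map *m, uint32_t off, uint32_t len)
{
    for (uint32_t c = off / CHUNK_SIZE; c <= (off + len - 1) / CHUNK_SIZE; c++)
        m->code_chunks |= 1ull << c;
}

/* Called from the write-fault handler: does this write actually touch code? */
static int write_hits_code(const struct page_code_map *m, uint32_t off, uint32_t len)
{
    for (uint32_t c = off / CHUNK_SIZE; c <= (off + len - 1) / CHUNK_SIZE; c++)
        if (m->code_chunks & (1ull << c))
            return 1;       /* invalidate/patch the affected blocks */
    return 0;               /* data write sharing the page: let it through */
}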
From my own experience alone, here are some things that you can do:
- If only a small number of single instructions are modified, flag them and patch them out of the generated block so that they can be interpreted (or, if only the immediates are modified, which is a popular form of self-modifying code, have the immediates loaded indirectly; see the sketch after this list)
- If a large portion of a block is changed, mark the block as dirty by patching its entry so that the next time it's executed it compiles a new block for itself. Keep a separate translation cache for dynamic blocks, with checksums generated so that common blocks can be reused. If there's little reuse, this will at least section off the translation caches so that you don't end up flushing the non-dynamic portions.
(the downside to these techniques is that you can't have blocks overlap, which is actually kind of a bitch)
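Here's a rough sketch of the "load immediates indirectly" case from the first bullet - my own illustration with made-up emitter names, not any real JIT's API. If profiling shows a spot only ever has its immediate rewritten, emit a load from the guest instruction's own bytes instead of baking the constant into the block, so the patch never invalidates the translation.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical emitter hooks - stand-ins for a real code generator. */
static void emit_load_imm32(int dst, uint32_t v)    { printf("  mov r%d, #0x%x\n", dst, v); }
static void emit_load_mem32(int dst, const void *p) { printf("  ldr r%d, [%p]\n", dst, (void *)p); }

struct guest_insn {
    uint8_t  *addr;            /* guest address of the instruction               */
    uint32_t  imm_offset;      /* offset of the immediate within the instruction */
    uint32_t  imm;             /* decoded immediate value                        */
    int       imm_is_patched;  /* flagged: only the immediate gets modified      */
};

static void translate_mov_imm(const struct guest_insn *gi, int dst_reg)
{
    if (gi->imm_is_patched)
        /* Indirect: fetch whatever immediate sits in guest memory right now. */
        emit_load_mem32(dst_reg, gi->addr + gi->imm_offset);
    else
        /* Normal case: fold the constant straight into the generated code. */
        emit_load_imm32(dst_reg, gi->imm);
}

The win is that the block stays valid across the patch: you pay one extra memory load per execution instead of a retranslation per modification.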
Above all, you'll get the best performance with an adaptive/cascading strategy that monitors behavior, because no program is going to constantly trample all over its entire code base unless it's designed specifically to defeat your recompiler.
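One way such a cascading policy might look (purely illustrative; the thresholds here are arbitrary, not tuned numbers): count how often each block gets invalidated and escalate from cheap retranslation to the more defensive strategies only for repeat offenders.

#include <stdint.h>

enum smc_policy {
    POLICY_RETRANSLATE,    /* default: just compile a fresh block              */
    POLICY_PATCH_IMM,      /* only immediates change: load them indirectly     */
    POLICY_DYNAMIC_CACHE,  /* heavy rewrites: checksummed, reusable block cache*/
    POLICY_INTERPRET       /* pathological: stop translating this block        */
};

struct block_stats {
    uint32_t invalidations; /* times this block was modified after translation */
    uint32_t imm_only;      /* how many of those touched only immediates       */
};

static enum smc_policy choose_policy(const struct block_stats *s)
{
    if (s->invalidations < 4)
        return POLICY_RETRANSLATE;     /* rare SMC: eat the recompile cost  */
    if (s->imm_only == s->invalidations)
        return POLICY_PATCH_IMM;       /* stable code, volatile constants   */
    if (s->invalidations < 64)
        return POLICY_DYNAMIC_CACHE;   /* reusable variants likely exist    */
    return POLICY_INTERPRET;           /* give up on translating this one   */
}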
By the way, this one guy I vaguely knew who was a total jerk to me once (goes by ChipX86) got a job at VMWare, so why can't I? Not that I'd necessarily want to work there.