Binary Translation Using Peephole Superoptimizers


According to a small stub article on Wikipedia, the Code Morphing Software that Transmeta used was a JIT compiler after all. I guess I got off on a wild tangent.

Hopefully, someday, native binary executables will only be generated at installation time. I think all distributable packages will eventually ship as bitcode and be translated then. That isn't really possible with C/C++, but it is in other languages.

I'm sorry I even posted to this discussion.
 
Samurai_Crow said:
I never said that emulators suck. I consider them to be a necessary evil in the presence of closed-source code.

You said that the only reason that they aren't native speed or better is because they're poorly written. Explain exactly how they're evil? I sincerely hope you're misusing that phrase.

Samurai_Crow said:
If you're talking about translation within an emulator then I've missed the point entirely. Peephole optimization is about the only kind that is possible within an emulator due to JIT compilation.

Emulation and translation are effectively synonymous. Yes, system level emulation entails a lot more than user mode emulation, but they're both still emulation and both can employ binary translation.

Samurai_Crow said:
As for whole-program translation, I started out by referring to Transmeta's Code Morphing Software which certainly did do direct binary to binary translation to get Windows code to run on its Crusoe processor. It was not a JIT as far as I know of, nor an emulator. It converted from one instruction set to another. That's what I've been talking about all along.

Crusoe is most certainly not a "whole program optimizer", far from it - such a thing would be utterly impossible to do real time in hardware. A "JIT" as you refer to it is most likely exactly what it is. One cannot gauge how good Crusoe was because we have no idea how it'd perform if comparable binaries were compiled for it natively. In practice what it was was a CPU that ran x86 more slowly than the competition. But it doesn't take a lot of insight to determine that there are a lot of things you can roll into hardware to make emulation more efficient, especially if you had x86 squarely in mind to begin with. Which I imagine they did, seeing as how they never managed to support anything else.

Samurai_Crow said:
Lastly, hogging RAM sometimes means hogging cache due to cache misses. That certainly makes a difference in processor speed since it's wasted memory bandwidth

That makes no sense! How is having code present in RAM going to cause cache misses when it's never even executed? Are you SURE you understand how cache works?
 
Exophase said:
Samurai_Crow said:
I never said that emulators suck. I consider them to be a necessary evil in the presence of closed-source code.

You said that the only reason that they aren't native speed or better is because they're poorly written. Explain exactly how they're evil? I sincerely hope you're misusing that phrase.
The phrase "necessary evil" means that they exist because they have to in the presence of foreign closed-source code but if there were no such thing as foreign binaries, there would be no further need of emulators.
Exophase said:
Samurai_Crow said:
If you're talking about translation within an emulator then I've missed the point entirely. Peephole optimization is about the only kind that is possible within an emulator due to JIT compilation.

Emulation and translation are effectively synonymous. Yes, system level emulation entails a lot more than user mode emulation, but they're both still emulation and both can employ binary translation.
I was talking about binary-to-binary translation on the false assumption that that was how Transmeta's Code Morphing Software worked.
Exophase said:
Samurai_Crow said:
As for whole-program translation, I started out by referring to Transmeta's Code Morphing Software which certainly did do direct binary to binary translation to get Windows code to run on its Crusoe processor. It was not a JIT as far as I know of, nor an emulator. It converted from one instruction set to another. That's what I've been talking about all along.

Crusoe is most certainly not a "whole program optimizer", far from it - such a thing would be utterly impossible to do real time in hardware. A "JIT" as you refer to it is most likely exactly what it is. One cannot gauge how good Crusoe was because we have no idea how it'd perform if comparable binaries were compiled for it natively. In practice what it was was a CPU that ran x86 more slowly than the competition. But it doesn't take a lot of insight to determine that there are a lot of things you can roll into hardware to make emulation more efficient, especially if you had x86 squarely in mind to begin with. Which I imagine they did, seeing as how they never managed to support anything else.
Crusoe was not an x86 compatible but ran all of the x86 code in a JIT because it consumed less power than a true x86 back when it was made. It was based on a Very Long Instruction Word (VLIW) architecture that was a failure in CPUs but made headlines in GPUs made by other companies later on.
Exophase said:
Samurai_Crow said:
Lastly, hogging RAM sometimes means hogging cache due to cache misses. That certainly makes a difference in processor speed since it's wasted memory bandwidth

That makes no sense! How is having code present in RAM going to cause cache misses when it's never even executed? Are you SURE you understand how cache works?
They fetch cache lines by using burst fetches to grab sequential sections of memory that are wider than the bus width. Sometimes they accidentally pull in a piece of some other section of code that came before or after the section they were fetching as a result. Most caches are multi-way set associative, so they don't really care about side effects like that. IMO every little bit counts.
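To make the burst-fetch point a bit more concrete, here's a rough sketch of how two adjacent routines can end up sharing a cache line. It assumes a 64-byte line size (typical, but by no means guaranteed) and uses two made-up routines standing in for live and dead code:

Code:
/* Rough illustration of the burst-fetch point: a cache fill always
 * brings in one whole aligned line, so touching a byte also drags its
 * neighbours into the cache.  The 64-byte line size is an assumption
 * (typical for current x86/ARM parts), not queried from the hardware. */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u   /* assumed cache line size in bytes */

static int same_cache_line(uintptr_t a, uintptr_t b)
{
    return (a / LINE_SIZE) == (b / LINE_SIZE);
}

/* Two tiny routines the linker is likely to place back to back. */
static int live_code(int x) { return x + 1; }
static int dead_code(int x) { return x - 1; }

int main(void)
{
    uintptr_t live = (uintptr_t)live_code;
    uintptr_t dead = (uintptr_t)dead_code;

    /* If both land on the same line, fetching the routine that actually
     * runs also pulls the unused one into the instruction cache. */
    printf("live_code at %#lx, dead_code at %#lx, same line: %s\n",
           (unsigned long)live, (unsigned long)dead,
           same_cache_line(live, dead) ? "yes" : "no");
    return 0;
}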

If you want an apology I'll give it to you. I'm sorry I trespassed in your thread. I was operating on false assumptions about binary-to-binary translation.
 
Samurai_Crow said:
The phrase "necessary evil" means that they exist because they have to in the presence of foreign closed-source code but if there were no such thing as foreign binaries, there would be no further need of emulators.

The phrase "necessary evil" means that an undesirable cost is accepted to avoid a greater one. Emulators are not an expense, and they are not evil. The term is really pretty self-explanatory, so...

That aside, I find what you're saying to be severely flawed. The vast majority of things that people emulate were written for custom hardware, and a lot of it was written at least partially in assembly language, if not entirely. If every game made for a game console in the 80s and 90s were open source, we'd still be using emulators to play them because it'd be much less work than porting all of them. In fact, that's probably even true for some software today, although obviously we've moved towards more common ground that makes porting easier.

Samurai_Crow said:
I was talking about binary-to-binary translation on the false assumption that that was how Transmeta's Code Morphing Software worked.

You must mean something different from what is usually meant by "binary to binary translation" - that usually implies machine code to machine code translation, which a so-called "JIT" would be performing. If you mean translating a complete executable file into another complete executable file, it's completely absurd to think that a processor would do that - especially one that runs machine code for a variety of platforms, including code which might not exist as a traditional executable file at all (say, boot loaders).
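To be clear about what I mean by machine code to machine code translation, here's a minimal sketch of a block-at-a-time translator loop. The guest and host encodings are completely made up for illustration; a real translator decodes a real ISA and emits real host opcodes into a code cache:

Code:
/* Minimal sketch of block-at-a-time binary translation, i.e. what a
 * "JIT"-style translator actually does with machine code.  The guest
 * and host encodings here are invented for illustration only. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 2-byte guest instructions: opcode byte, operand byte. */
enum { G_NOP = 0, G_ADDI = 1, G_BRANCH = 2 };

/* Hypothetical host instruction record appended to a code cache. */
typedef struct { uint8_t op; int16_t imm; } host_insn;

/* Translate one guest basic block starting at 'guest' into 'out'.
 * Translation stops at the first branch: the translator only ever sees
 * code that is actually reached at run time, which is why it works on
 * blocks of machine code rather than on whole executable files. */
static size_t translate_block(const uint8_t *guest, host_insn *out, size_t max)
{
    size_t n = 0;
    for (size_t g = 0; g < 64 && n < max; g++) {  /* cap block length */
        uint8_t op  = guest[2 * g];
        uint8_t arg = guest[2 * g + 1];
        switch (op) {
        case G_NOP:                               /* nothing to emit  */
            break;
        case G_ADDI:                              /* add immediate    */
            out[n++] = (host_insn){ 0x10, arg };
            break;
        case G_BRANCH:                            /* block terminator */
            out[n++] = (host_insn){ 0x20, arg };
            return n;
        default:                                  /* unknown: give up */
            return n;
        }
    }
    return n;
}

int main(void)
{
    const uint8_t guest[] = { G_ADDI, 5, G_NOP, 0, G_ADDI, 3, G_BRANCH, 0 };
    host_insn block[16];
    size_t n = translate_block(guest, block, 16);
    for (size_t i = 0; i < n; i++)
        printf("host insn %zu: op=0x%02x imm=%d\n", i, block[i].op, block[i].imm);
    return 0;
}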

Samurai_Crow said:
Crusoe was not an x86 compatible but ran all of the x86 code in a JIT because it consumed less power than a true x86 back when it was made. It was based on a Very Long Instruction Word (VLIW) architecture that was a failure in CPUs but made headlines in GPUs made by other companies later on.

I am completely aware of what Crusoe was and how it worked! I don't see how you could read my post and not be aware of this. I get the feeling that you're just skimming what I'm writing and maybe picking out a few things in isolation.

Also, what? GPUs don't use VLIW. I think you're confusing it with SIMD (which, these days, some are and some aren't). Some DSPs do use VLIW though (TI's C6x series is a very blatant example, but others use it to a narrower degree, like Blackfin).

Samurai_Crow said:
They fetch cache lines by using burst fetches to grab sequential sections of memory that are wider than the bus width. Sometimes they accidentally pull in a piece of some other section of code that came before or after the section they were fetching as a result. Most caches are multi-way set associative, so they don't really care about side effects like that. IMO every little bit counts.

Nice try, but that isn't going to be what's happening after translation because the translated code is what's going to be fetched into instruction cache, not the original code. And the translated code won't contain translated versions of code that has never been touched. Even if that weren't the case though, that's really grasping at straws because dead code (the kind you were referring to) will tend to appear in relatively large blocks, not sprinkled in the middle of live blocks.

Samurai_Crow said:
If you want an apology I'll give it to you. I'm sorry I trespassed in your thread. I was operating on false assumptions about binary-to-binary translation.

I'm not really sure what you're apologizing for, but I would perhaps prefer it if you didn't seem to misread a lot of the things I'm typing. Not that I think that's intentional.
 
I'm going to stop posting now. What I thought you were talking about by "translation" was an off-line, non-realtime, compiler-like thing that would convert foreign binaries to native ones. That is apparently impossible with current technology. If I had known you were talking about JIT compilation, I would have kept my mouth shut and browsed somewhere else.
 
Samurai_Crow said:
I'm going to stop posting now. What I thought you were talking about by "translation" was an off-line, non-realtime, compiler-like thing that would convert foreign binaries to native ones. That is apparently impossible with current technology. If I had known you were talking about JIT compilation, I would have kept my mouth shut and browsed somewhere else.

I'm talking about either. Most binary translation techniques apply to either, and although there ARE things that can make static recompilation impossible (less so in user emulation), it's not a show-stopper for all applications on all platforms. Did you read the paper that this thread is about? It was used for static recompilation that involved translating one executable to another. You should definitely give it a read; it's pretty interesting. However, note that even this technique, with a lot of brute-force strategies that took several days of CPU time to preprocess for a target system, still didn't generally come close to an average of 100% of native performance on real-world programs (if you can call a portion of SPECint 2000 that).
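For anyone who hasn't read it, the gist as I understand it is that the superoptimizer search runs entirely offline and its only output is a table mapping short source instruction sequences to equivalent target sequences; the translation pass itself is then just a greedy table lookup. Here's a very rough sketch of that lookup step - the rules and the register mapping in it are invented placeholders, not the paper's actual learned rules:

Code:
/* Sketch of table-driven peephole translation: the expensive
 * superoptimizer search happens offline and only produces this table.
 * The rules and the r3->%eax / r4->%esi register mapping are made up. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *src[2];   /* source (PowerPC-ish) instruction pattern */
    int src_len;          /* 1 or 2 instructions                      */
    const char *dst;      /* equivalent target (x86-ish) sequence     */
} rule;

static const rule table[] = {
    /* longer patterns first so the greedy match prefers them */
    { { "lwz r3,0(r4)", "addi r3,r3,1" }, 2, "movl (%esi),%eax ; incl %eax" },
    { { "lwz r3,0(r4)", NULL },           1, "movl (%esi),%eax" },
    { { "li r3,0",      NULL },           1, "xorl %eax,%eax" },
};

static void translate(const char **src, int n)
{
    for (int i = 0; i < n; ) {
        int matched = 0;
        for (size_t r = 0; r < sizeof table / sizeof table[0]; r++) {
            const rule *p = &table[r];
            if (p->src_len <= n - i &&
                strcmp(src[i], p->src[0]) == 0 &&
                (p->src_len == 1 || strcmp(src[i + 1], p->src[1]) == 0)) {
                printf("%s\n", p->dst);  /* emit the learned rewrite   */
                i += p->src_len;
                matched = 1;
                break;
            }
        }
        if (!matched) {                  /* no rule: note it and move on */
            printf("; no rule for: %s\n", src[i]);
            i++;
        }
    }
}

int main(void)
{
    const char *prog[] = { "lwz r3,0(r4)", "addi r3,r3,1", "li r3,0" };
    translate(prog, 3);
    return 0;
}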

I actually think that in practice there isn't much difference between the quality of code a dynamic recompiler can produce vs. what a static recompiler can. In theory you can perform better optimization on an entire program if you're looking at it ahead of time, but in practice you won't have enough resources to analyze that far ahead. On the flip side, you can do a lot more than just peephole optimizations in a dynamic recompiler. There are dynarecs that perform not only block-level optimizations but some inter-block ones as well.
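As a rough sketch of even a simple block-level pass in a dynarec (the tiny IR here is invented for the example): after a block has been translated into an intermediate buffer, a small window slides over it and folds or drops redundant pairs before host code is emitted:

Code:
/* Sketch of a block-level peephole pass in a dynamic recompiler.
 * The IR is invented for the example. */
#include <stdio.h>

typedef enum { IR_ADDI, IR_MOV, IR_NOP } ir_op;
typedef struct { ir_op op; int dst, src, imm; } ir_insn;

/* Merge "ADDI r,r,a ; ADDI r,r,b" into "ADDI r,r,a+b" and drop "ADDI r,r,0". */
static int peephole(ir_insn *b, int n)
{
    int out = 0;
    for (int i = 0; i < n; i++) {
        ir_insn cur = b[i];
        if (cur.op == IR_ADDI && cur.src == cur.dst && i + 1 < n &&
            b[i + 1].op == IR_ADDI &&
            b[i + 1].dst == cur.dst && b[i + 1].src == cur.dst) {
            cur.imm += b[i + 1].imm;   /* fold the pair into one add */
            i++;
        }
        if (cur.op == IR_ADDI && cur.imm == 0 && cur.dst == cur.src)
            continue;                  /* adding zero to itself: drop it */
        b[out++] = cur;
    }
    return out;                        /* new (shorter) block length */
}

int main(void)
{
    ir_insn block[] = {
        { IR_ADDI, 1, 1,  4 },         /* r1 += 4                        */
        { IR_ADDI, 1, 1, -4 },         /* r1 -= 4 -> folds to 0, dropped */
        { IR_MOV,  2, 1,  0 },         /* r2 = r1                        */
    };
    int n = peephole(block, 3);
    printf("block shrank to %d instruction(s)\n", n);
    return 0;
}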

I personally wouldn't ever attempt a static recompiler, for a whole variety of reasons. But a lot of what I'd like to do would work about as well for a dynamic recompiler.
 