Binary Translation Using Peephole Superoptimizers


Exophase

I posted this on emutalk but only one person responded and I didn't really like his responses so I'm posting it here too ;P

I stumbled upon this paper recently. It's an extended application of the "superoptimization" work the same person did a couple of years prior as part of his dissertation.

http://theory.stanford.edu/~sbansal/pub ... index.html

Personally I'm a little surprised it works as well as it does - not sure if that's a testimony to how powerful the technique is or a statement of inadequacy about the competition. Worth bearing in mind is that QEMU now uses a different translation strategy, so it might perform a bit better than the version tested in the paper.

Either way, it shows that getting 50+% of native performance in user-mode-only emulation isn't totally unrealistic. I would love to see how it runs on games, if indeed it can manage any - necessary OS support aside, games are much more complicated programs, which brings more avenues for failure. But if it can work well, then imagine if, say, x86->ARM performed at a similar level. With user-mode Linux emulation and WINE, maybe emulating something that runs on Windows and needs a Pentium 2 won't be as unrealistic as everyone believes, at least on the highest-end ARM SoCs.

The source code is going to be up soon (or so his website says - maybe he changed his mind since he's starting his own business). I really want to see the translation rules (only 750!) for myself. What I've seen so far doesn't suggest the superoptimizer found anything too difficult to reach with ordinary propagation techniques, save for a few results that look like algebraic simplifications. I also think much faster register allocation could be done without sacrificing a lot in the result. Of course, for something like x86->ARM you can possibly get away with very simplistic register allocation.
 
You're a clever one, Exophase. For most of us this is beyond our understanding. :)
I appreciate the link/repost. Reading through it now.
 
Exophase said:
The source code is going to be up soon (or so his website says - maybe he changed his mind since he's starting his own business). I really want to see the translation rules (only 750!) for myself.
Isn't the source code you're talking about the file called superoptimizer-112606.tgz?

EDIT: this source seems to contain an optimizer only for x86 -> x86.
 
hlide said:
Exophase said:
The source code is going to be up soon (or so his website says - maybe he changed his mind since he's starting his own business). I really want to see the translation rules (only 750!) for myself.
Isn't the source code you're talking about the file called superoptimizer-112606.tgz?

EDIT: this source seems to contain an optimizer only for x86 -> x86.

The native optimizer is much less interesting IMO, and besides, doesn't do anything that amazing vs good compilers.

I actually find it kind of funny what the author considers "compute intensive" tasks.. like "traverse a linked list." Even most of the algorithms there that involve actual math should still only require a few variables. Things start to matter much more when you have a larger set of active variables, and I think that's what stresses the more "real" benchmarks he's doing.

On the other hand, I don't like the emphasis on such short-running benchmarks, unless in the real world we're constantly running tasks for just a few minutes. More likely we keep the same software open for a long time - especially things like games. I never liked how people weigh the cost of translation against execution, when translation is essentially an O(1) problem with respect to time and execution is an O(n) problem. This also feeds the logical fallacy I've seen mentioned a few times now: that the conventional wisdom of a program spending 90% of its time in 10% of its code means the other 90% of its code is executed fewer than some fixed number of times, regardless of how long the program runs. I would instead argue that most pieces of code are executed either once or as a function of time - if something only happens once per frame then it might only use 0.1% of CPU time, but it'll still have been executed thousands of times after a minute. So it's kind of rubbish to think a lot about translation-to-execution ratios, which a lot of recompiler authors do when deciding to pursue "hot spot" recompilation - an approach I think was popularized more by Java and HP for their particular applications than backed by general-purpose research showing it's always the best choice.

Also, a fast recompiler can be faster than loading translated code off of storage. Of course this means there's a limit to how many operations it can do, but disk and solid-state transfers are very slow compared to memory, so there's a pretty large amount of headroom. I just don't think many recompilers are designed with speed in mind. Shooting for a heavy brute-force register allocation scheme from the start is probably taking a major toll on the approach in the paper; I kind of wonder how it'd perform vs something simpler. But like I said in the original post, for some translations, like x86 to ARM, you have to worry much less about spending resources on register allocation algorithms.

I'd like to see how it compares on something running for hours, where the translation time would really become negligible. Until then I can't be so easily won over by his results compared to Rosetta. One thing I found interesting is that he mentions that weaker recompilation could be applied to "cold" areas (again, I don't think you'll win anything outside of short benchmarks, which shouldn't count), but he never mentions exactly how you'd go about tracking what's hot and cold in a static recompiler (or a dynarec without an interpreter, for that matter). You could sort of find out statically, but I doubt it'd tell you that much.

pelrun: He makes it look "automatic", but it actually glosses over a lot of things that are major amounts of work. Consider:

- Has to be able to enumerate source and target instruction sequences, meaning it has to know how to emit (almost) every instruction of both (not just some subset that's needed, since that "research" defeats a lot of the purpose)
- Has to be able to evaluate machine state transitions for Boolean equivalence. That's a huge, huge amount of work, but it's glossed over as if it's nothing (see the toy sketch after this list for what even a crude version of that check involves). IMO, it's much more work than writing an interpreter.
- Has to know a lot about the timing of the target CPU in order to come up with cost functions - of course these are just static numbers, so possibly statistical analysis has to be done just to come up with good numbers. Even then..
- In order to get the best performance it had to rearrange the source stream to "de-optimize" it, which presumes it knows something about the scheduling of the original processor.. and this isn't a trivial pass either.
- Although it's not clear where it happens, it probably does liveness analysis on the target instruction set to tighten the results of the optimized output (a lot of the time, especially with such small peephole lengths, something will get propagated but not eliminated)
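
To give an idea of what that equivalence check means even in the most stripped-down form, here's a toy sketch of my own - not the paper's method, which does it symbolically with a SAT-style test rather than brute force. It just asks whether two candidate sequences leave every possible starting state in the same final state, on a made-up machine with two 8-bit registers; all the names are hypothetical.

/* Toy equivalence check: do two candidate "instruction sequences" compute the
   same final machine state for every possible starting state?  The real system
   does this symbolically; brute force only works here because the toy machine
   has just two 8-bit registers.  All names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t r0, r1; } State;   /* made-up 2-register machine */

/* Candidate A: r0 = r0 ^ r1, then r1 = r0 (the post-xor value) */
static State seq_a(State s) { s.r0 ^= s.r1; s.r1 = s.r0; return s; }

/* Candidate B: a different encoding we suspect is equivalent */
static State seq_b(State s) { uint8_t t = (uint8_t)(s.r0 ^ s.r1); s.r0 = t; s.r1 = t; return s; }

static int equivalent(State (*a)(State), State (*b)(State))
{
    for (int r0 = 0; r0 < 256; r0++)
        for (int r1 = 0; r1 < 256; r1++) {
            State in = { (uint8_t)r0, (uint8_t)r1 };
            State x = a(in), y = b(in);
            if (x.r0 != y.r0 || x.r1 != y.r1)
                return 0;               /* found a distinguishing input state */
        }
    return 1;                           /* equal on every input state */
}

int main(void)
{
    printf("equivalent: %d\n", equivalent(seq_a, seq_b));   /* prints 1 */
    return 0;
}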

What I'm really getting at is that a lot of work is done in the domains of the source and destination languages. He makes it sound like it's a lot less work than using a sufficient intermediate language, but if you ask me he's just dressing it up that way. If you design an IL to be extensible from the start then it can probably support the core (important parts) of a lot of ISAs easily. Besides that, in the real world there's not a lot of practical value in supporting more than a handful of ISAs to begin with. He tried to make the point that his approach would be more portable, but then why only show examples for PPC->x86? What if other paths don't perform as well? At the very least he could have done x86->PPC, especially since there's another commercial platform to compare against there. His approach seems to push a lot of work onto the source and target that would be done on the IL in a more typical multi-platform approach, which makes me question whether the strengths he claims aren't really working against him.

But the results do kind of speak for themselves, so I dunno. Maybe if someone convinces him to do x86->ARM we'll get to see how that fares. There are a lot of optimizations you can do (2->3 way addressing via move propagation, shift propagation, memory increment propagation, memory-to-register aliasing, conditional blocking.. the last of which his superoptimizer did find, but for how complex a sequence?), and I wonder how much an approach with such a limited window would get out of them. He'd also want to include scheduling to make it run best on a Cortex-A8...
 
Okay, I was confusing his "PEEP" and "SO". Anyway, looking at the SO source code, I was scared by the amount of source files (supposedly he reused SO in PEEP). But wait, it is a static translator, so no need to put it in an emulator, right? I guess the only interesting points are how he applies the rules to translate one ISA into another, and the time cost and efficiency of such an implementation. However, due to its static nature, I wonder how it handles "switch" cases compiled into an array of indirect jumps (can it determine the bounds of such arrays, etc.), or code that is not recorded in the object file being translated.
 
hlide said:
Okay, I was confusing his "PEEP" and "SO". Anyway, looking at the SO source code, I was scared by the amount of source files (supposedly he reused SO in PEEP). But wait, it is a static translator, so no need to put it in an emulator, right? I guess the only interesting points are how he applies the rules to translate one ISA into another, and the time cost and efficiency of such an implementation. However, due to its static nature, I wonder how it handles "switch" cases compiled into an array of indirect jumps (can it determine the bounds of such arrays, etc.), or code that is not recorded in the object file being translated.

He talks about this in the paper. Of course, he trivializes a lot of it...

The program is scanned for indirect branch points - the heuristic used is to check everything in the data section, the global constants section, and operand immediates. Then a LUT is constructed to go from these targets to the translated code. However, instead of actually branching into the translated code directly it branches to a prologue stub for that address that performs register map conversion - what he doesn't make clear is that you actually need a stub for every source PC/dest PC pair, not just every dest PC. For switches the relationship is probably one to one in most C code, but function pointers and especially vtables are another story.
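
To make that concrete, here's roughly how I picture the lookup table and prologue stubs fitting together. This is my own guess at the structure, not anything from his code - and I've keyed the table by target address alone, which is the simple version; as I said above you'd really want a stub per (branch site, target) pair so the right register map conversion happens.

/* Rough sketch of indirect-branch dispatch: a table maps guessed guest branch
   targets to little prologue stubs, and each stub converts the register map
   before entering the translated block.  Names and layout are guesses. */
#include <stdint.h>
#include <stddef.h>

typedef void (*host_stub_t)(void);        /* prologue stub: remap registers, then jump */

typedef struct {
    uint32_t    guest_pc;                 /* candidate target found by the static scan */
    host_stub_t stub;                     /* stub that enters the translated block     */
} ibr_entry;

/* Table built ahead of time by scanning data sections, global constants and
   operand immediates for plausible code addresses.  Keyed by target only here,
   which is the simple version. */
static ibr_entry ibr_table[4096];
static size_t    ibr_count;

static void untranslated_target(uint32_t guest_pc)
{
    (void)guest_pc;                       /* fall back: interpret, retranslate, or bail */
}

/* Called at run time for every indirect branch the translator emitted. */
void dispatch_indirect(uint32_t guest_pc)
{
    for (size_t i = 0; i < ibr_count; i++) {   /* a real translator would hash */
        if (ibr_table[i].guest_pc == guest_pc) {
            ibr_table[i].stub();          /* remap registers, enter translated code */
            return;
        }
    }
    untranslated_target(guest_pc);
}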

He says that this approach is easily susceptible to "attack" (I guess someone doesn't like you running their code away from their OS or something), which is of course true.. but probably some real use instances would fail too. His solution to this is to make all instructions possible entry points, which of course would be even worse if you did it for a platform that doesn't have fixed width instructions. What he doesn't mention is that doing this would mean that any peephole sequence could be interrupted, which seems like it'd defeat most of the point (if you wanted to make it actually foolproof)..

He also glosses over how return statements are handled.. it uses the same technique as Dynamo, using "prediction." He doesn't go into details, but it probably works like if(return_pc == last_return_pc) goto last_translated_return_pc_stub; else indirect_branch(); - the funny thing is that he says this removes the need to perform address map translation. This makes no sense to me, since you can't determine a return address at static time (okay, you can know who it could be out of many).. it sounds like maybe the register maps get folded together in a weighted fashion so that they all go to the same thing. With calling conventions separating out registers pretty decently this might work alright anyway, I don't know.
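
For what it's worth, here's what I imagine that return "prediction" boils down to, just restating the pseudocode above as code. This is entirely a guess on my part - the single-entry cache and all the names are hypothetical.

/* A guess at the Dynamo-style return "prediction": cache the last guest return
   address seen here and the translated-code stub it resolved to, and only fall
   back to the full indirect lookup on a mismatch.  Hypothetical names. */
#include <stdint.h>

typedef void (*host_stub_t)(void);

typedef struct {
    uint32_t    last_guest_pc;   /* guest return address seen last time  */
    host_stub_t last_stub;       /* translated-code stub it resolved to  */
} return_cache;

host_stub_t lookup_indirect(uint32_t guest_pc);   /* assumed slow path */

void do_return(return_cache *rc, uint32_t guest_return_pc)
{
    if (guest_return_pc != rc->last_guest_pc) {   /* prediction missed */
        rc->last_guest_pc = guest_return_pc;
        rc->last_stub     = lookup_indirect(guest_return_pc);
    }
    rc->last_stub();                              /* enter translated code */
}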

What I found most absurd was how easily he glossed over self-modifying code and self-loading code. Of course the solution was "load a dynarec", and he acts as if the transition is very simple.. I'm not convinced.

EDIT: Here's his PhD thesis, maybe it goes over things in more detail: http://theory.stanford.edu/~aiken/publi ... bansal.pdf

What I would also like to know is why he only tested a subset of SPEC2000. Suspicious. The other ones may have failed outright (notice that the only one of the tests that's not listed for peep is one that outright fails in Linux anyway)
 
You're right, of course - the rule generation is the top level of a very large amount of preparatory work (and I would hardly fault your analysis of the deeper issues - your bona-fides are better than mine :) )

The main advantage of doing it this way seems to be if you specifically want to create a set of translators between N different architectures instead of just two, which is a degenerate case. Two backends only give you two different resultant translators, but three backends give you six.
 
I'm not sure what technique Transmeta used to do this for their "code morphing" software but I think that a translator would need to have a very similar architecture. A more practical application might be to write a machine-language parser that emits LLVM's intermediate representation (IR) and reuse its code optimizer to translate to PowerPC or ARM or any other supported architecture. The hardest part would probably be implementing static single assignment form, as demonstrated in this Wikipedia article.
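
For anyone unfamiliar, SSA just means every assignment creates a fresh "version" of a variable, so every use refers to exactly one definition. Here's a toy sketch of renaming straight-line code into SSA form - my own made-up example, ignoring the control flow and phi nodes that are the genuinely hard part:

/* Toy SSA renaming for straight-line code: every definition of a variable gets
   a fresh version number, and every use refers to the latest version at that
   point.  Control flow and phi nodes are ignored entirely. */
#include <stdio.h>

#define NVARS 4                                 /* variables 0..3, i.e. a..d */

typedef struct { int dst, src1, src2; } Stmt;   /* dst = src1 OP src2 */

int main(void)
{
    /* a = b + c; b = a + c; d = a + b  (operands given as variable indices) */
    Stmt code[] = { {0, 1, 2}, {1, 0, 2}, {3, 0, 1} };
    int version[NVARS] = {0};                   /* current version of each var */

    for (int i = 0; i < 3; i++) {
        int u1 = version[code[i].src1];         /* uses see the current versions */
        int u2 = version[code[i].src2];
        int d  = ++version[code[i].dst];        /* the definition gets a new one */
        printf("v%d_%d = v%d_%d op v%d_%d\n",
               code[i].dst, d, code[i].src1, u1, code[i].src2, u2);
    }
    return 0;
}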
 
Samurai_Crow said:
I'm not sure what technique Transmeta used to do this for their "code morphing" software but I think that a translator would need to have a very similar architecture. A more practical application might be to write a machine-language parser that emits LLVM's intermediate representation (IR) and reuse its code optimizer to translate to PowerPC or ARM or any other supported architecture. The hardest part would probably be implementing static single assignment form, as demonstrated in this Wikipedia article.

My opinion is that you get much more mileage out of a complex intermediate language than a simple one, i.e. one that encapsulates as much of the source languages' features as possible (superset, union) rather than one that encapsulates what they have in common (subset, intersection). The reason is that if you have a simple intermediate language then you end up doing optimization techniques from it to the result language instead of doing optimizations within the IL itself. It's harder to go from multiple instructions to fewer than it is to go from fewer to multiple, or at least to do as good a job in the result. It also places a lot of the transformations in the domain of a target language instead of the IL, which defeats a lot of the purpose of using one to begin with.

LLVM is a backend for compilers, which (contrary to popular opinion, it would seem) should not be considered the same as a backend for translations. Many optimizations should be left to the compilers that produced the code. GCC is good enough at the front end, but not as good at the backend for ARM. Likewise, LLVM hasn't proven itself to me, with ARM results that are inferior even to GCC's. No matter how hyped it is it has to show that it actually produces good code before anyone will want to use it. Besides, how suitable is it for being used dynamically?

What I'm thinking about attempting is developing a very powerful/flexible IL that at least targets x86 and ARM well (involving a pretty large amount of functionality in between, but nothing too jarring). Then the translation procedure would be:

- Convert source to IL, involving 1:1 translations for most core instructions (ones that are used any statistically relevant amount of time)
- Perform "conditioning" of the IL code to expand instructions where it uses something that the destination instruction set doesn't support. Rather than have a module which understands the destination instruction set you would simply define the features both the source and destination have, and it'll expand for any cases where the source has and the destination doesn't, and some non-orthogonality cases would be defined. For instance, if the source is x86 and the destination is ARM, memory operands would be loaded into registers so only load and store IL instructions reference memory.
- Perform "optimization" of the IL code to propagate the results of previous instructions into latter instructions, to take advantage of features that the target instruction has and the source does not.
- Perform liveness analysis/dead code elimination to NOP out IL instructions whose results are not used.
- Convert IL to destination, involving 1:1 translations for most core instructions.

There may be some more steps, but that's about the gist of it.
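
To give a feel for what one of these passes might look like, here's a minimal sketch of the liveness/dead code elimination step over a single straight-line block of a toy IL. The opcode set, struct layout and names are all hypothetical - the point is just the backwards walk:

/* Minimal sketch of liveness-based dead code elimination over one straight-line
   block of a toy IL: walk backwards, and any instruction whose destination is
   not live afterwards and which has no side effect (i.e. isn't a store) becomes
   a NOP.  Opcode set, layout and names are all hypothetical. */
#include <stdbool.h>
#include <stdint.h>

enum { IL_NOP, IL_MOV, IL_ADD, IL_LOAD, IL_STORE };

typedef struct {
    int op;
    int dst;          /* destination register (unused for IL_STORE / IL_NOP)        */
    int src1, src2;   /* sources (for IL_STORE: value and address base), -1 if none */
} il_insn;

static void mark(uint32_t *live, int reg)  { if (reg >= 0) *live |= 1u << reg; }
static void clear(uint32_t *live, int reg) { if (reg >= 0) *live &= ~(1u << reg); }

/* live_out: registers still needed after the block - for a translator this is
   typically "every guest register", since the next block may read any of them. */
void eliminate_dead(il_insn *code, int n, uint32_t live_out)
{
    uint32_t live = live_out;
    for (int i = n - 1; i >= 0; i--) {
        il_insn *in = &code[i];
        bool side_effect = (in->op == IL_STORE);
        bool dst_live = in->op != IL_STORE && in->op != IL_NOP &&
                        (live & (1u << in->dst)) != 0;
        if (!side_effect && !dst_live) {
            in->op = IL_NOP;                      /* result never used: drop it */
            continue;
        }
        if (!side_effect)
            clear(&live, in->dst);                /* the definition kills liveness */
        mark(&live, in->src1);                    /* the uses become live          */
        mark(&live, in->src2);
    }
}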

pelrun said:
You're right, of course - the rule generation is the top level of a very large amount of preparatory work (and I would hardly fault your analysis of the deeper issues - your bona-fides are better than mine )

The main advantage of doing it this way seems to be if you specifically want to create a set of translators between N different architectures instead of just two, which is a degenerate case. Two backends only give you two different resultant translators, but three backends give you six.

But that's the same for anything involving an intermediate language anyway. This paper was arguing that ILs are an inferior approach.
 
Samurai_Crow said:
I'm not sure what technique Transmeta used to do this for their "code morphing" software but I think that a translator would need to have a very similar architecture.

Dissimilar architectures will incur penalties (e.g. development time, code size, memory usage or execution time) which may make a particular translation infeasible, but there's nothing specifically preventing it. (yay Turing!) In fact the author gives special consideration to some of these issues; for instance the PPC has many more registers available than the x86.

Of course, everything is easier when you have source to start with ;)
 
Exophase said:
- Convert source to IL, involving 1:1 translations for most core instructions (ones that are used any statistically relevant amount of time)
So the IL contains only a subset of the source instructions? "AND reg, mem32" would not map to an IL instruction? Or do you also have intermediate 1:1 translations?
Exophase said:
- Perform "conditioning" of the IL code to expand instructions where it uses something that the destination instruction set doesn't support. Rather than have a module which understands the destination instruction set you would simply define the features both the source and destination have, and it'll expand for any cases where the source has and the destination doesn't, and some non-orthogonality cases would be defined. For instance, if the source is x86 and the destination is ARM, memory operands would be loaded into registers so only load and store IL instructions reference memory.
if you read "AND mem32, reg", you directly generate something like IL_store32(IL_and(IL_reg(...), IL_load32(mem32)), mem32) or by transforming from IL_and_reg_m32(...) ?
Exophase said:
- Perform "optimization" of the IL code to propagate the results of previous instructions into latter instructions, to take advantage of features that the target instruction has and the source does not.
lea eax, [eax + edx*4]; mov ebx, [eax]; mov ecx, [eax+4] ==> add r5, r1, r2, LSL #2; ldr r3, [r1, r2, LSL #2]; ldr r4, [r5, #4] ?

I'm interested in your project.
 
hlide said:
So the IL contains only a subset of the source instructions? "AND reg, mem32" would not map to an IL instruction? Or do you also have intermediate 1:1 translations?

No, it contains a superset. Otherwise it wouldn't be able to support 1:1 translations.

hlide said:
if you read "AND mem32, reg", you directly generate something like IL_store32(IL_and(IL_reg(...), IL_load32(mem32)), mem32) or by transforming from IL_and_reg_m32(...) ?

IL is an assembly language:

IL_and mem32, reg ->

IL_load temp, mem32
IL_and temp, reg
IL_store temp, mem32

That's only done if the destination can't support memory operands. The thing to note here is that the intermediate language supports everything - the transformations are all performed on it.

hlide said:
lea eax, [eax + edx*4]; mov ebx, [eax]; mov ecx, [eax+4] ==> add r5, r1, r2, LSL #2; ldr r3, [r1, r2, LSL #2]; ldr r4, [r5, #4] ?

Not really an example I would have chosen, but pretty much, except again, still in the IL:

IL_add eax, eax, edx, LSL 2 @ converted directly from lea
IL_load eax, [eax]
IL_load ecx, [eax, 4] ->

IL_add eax, eax, edx, LSL 2
IL_load eax, [eax, edx, LSL 2]
IL_load ecx, [eax, 4] ->

IL_nop (this is done by the dead code elimination stage)
IL_load eax, [eax, edx, LSL 2]
IL_load ecx, [eax, 4]

Of course I forgot two big steps:

- register allocation
- scheduling

For x86->ARM the former should be pretty easy, although it has to allocate for temporaries too.
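
As a rough idea of what I mean by "pretty easy": a fixed mapping could look something like the sketch below. The specific assignments are just an assumption of mine for illustration, not taken from any existing recompiler.

/* One plausible static register map for an x86 -> ARM user-mode translator:
   the eight x86 GPRs fit into ARM registers with room left over for temporaries
   and translator state.  The specific assignments are assumptions, not taken
   from any existing recompiler. */
enum x86_reg { EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI, X86_NGPR };

static const int x86_to_arm[X86_NGPR] = {
    /* EAX */ 0, /* ECX */ 1, /* EDX */ 2, /* EBX */ 3,
    /* ESP */ 4, /* EBP */ 5, /* ESI */ 6, /* EDI */ 7,
};

/* r8-r10 are kept free as scratch registers for expanded IL sequences (loading
   memory operands, materializing flags, etc.); r11 could hold a pointer to the
   emulated CPU context. */
static int next_temp = 8;

static int  alloc_temp(void) { return next_temp <= 10 ? next_temp++ : -1; }
static void free_temps(void) { next_temp = 8; }   /* reset per translated block */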

hlide said:
I'm interested in your project.

Unfortunately right now it's just me thinking about things, more than an actual project. I'll let you know if I get anything done though.
 
Exophase said:
My opinion is that you get much more mileage out of a complex intermediate language than a simple one, i.e. one that encapsulates as much of the source languages' features as possible (superset, union) rather than one that encapsulates what they have in common (subset, intersection). The reason is that if you have a simple intermediate language then you end up doing optimization techniques from it to the result language instead of doing optimizations within the IL itself. It's harder to go from multiple instructions to fewer than it is to go from fewer to multiple, or at least to do as good a job in the result. It also places a lot of the transformations in the domain of a target language instead of the IL, which defeats a lot of the purpose of using one to begin with.

LLVM is a backend for compilers, which (contrary to popular opinion, it would seem) should not be considered the same as a backend for translations. Many optimizations should be left to the compilers that produced the code. GCC is good enough at the front end, but not as good at the backend for ARM. Likewise, LLVM hasn't proven itself to me, with ARM results that are inferior even to GCC's. No matter how hyped it is it has to show that it actually produces good code before anyone will want to use it. Besides, how suitable is it for being used dynamically?

LLVM is already designed to be modular. You can tell the optimizer to skip any optimizations it doesn't need, in fact, the optimizer is a separate library from the LLVM AsmParser, BitCode reader and BitCode writer. The reason for using a high-end optimizer like LLVM is that there very SELDOM is a 1:1 mapping of all of the functionality of the source and destination instruction sets. Breaking down a complex instruction into simpler ones, and then remapping to other complex ones is what makes LLVM so powerful.

The reason most compilers like GCC and Clang use intermediate code internally is that it is too difficult to map a set of optimizations directly onto a complex instruction set like the x86. The x86 instruction set is full of kludges and only performs well because, compared to the likes of PowerPC, its code takes less cache space on a CISC/RISC hybrid like the x86, and because there are limits to how many registers high-level code can use effectively.

The reason the ARM isn't well supported by GCC or LLVM is that the backend isn't as commonly used as the x86 backend. The reason for that is that it has been years since the Acorn Archimedes was the development platform of choice for the Acorn RISC Machine processor. It's been hard to find an ARM-based development board that isn't intended for a handheld device. Now, with the advancement of ARM-based netbooks and the BeagleBoard, there is a cheap and available computer that people can tinker with to get the code working properly. The question is: "WILL THEY?"

If people don't start tinkering and programming in low-level code on the machines that are easy to write low-level code for, such as the ARM and MIPS, they will be forever at the mercy of precedent of the likes of the x86 where big-shots in New York City sit and throw money at bloated monstrosities instead of making a clean break from the past. Having a good tool-chain to develop code with is central to that.

If there were a 1:1 mapping of all the functionality of the opcodes of x86 to ARM, there would be as many power consumption problems for the ARM as there are for the x86! Also, if there were only translators from x86 to ARM, there would be no incentive to switch to the ARM instruction set, since people could use Visual Studio to keep writing x86 code and not worry about optimal ARM ports, because the translator would take care of that.

My point is that if we keep catering to x86 only with our translator codes then if another instruction set comes along that is more efficient we'll be back to square 1. If we use an intermediate representation, we'll have a short jaunt to being able to support the new binary format.

Exophase said:
What I'm thinking about attempting is developing a very powerful/flexible IL that at least targets x86 and ARM well (involving a pretty large amount of functionality in between, but nothing too jarring). Then the translation procedure would be:

- Convert source to IL, involving 1:1 translations for most core instructions (ones that are used any statistically relevant amount of time)
- Perform "conditioning" of the IL code to expand instructions where it uses something that the destination instruction set doesn't support. Rather than have a module which understands the destination instruction set you would simply define the features both the source and destination have, and it'll expand for any cases where the source has and the destination doesn't, and some non-orthogonality cases would be defined. For instance, if the source is x86 and the destination is ARM, memory operands would be loaded into registers so only load and store IL instructions reference memory.
- Perform "optimization" of the IL code to propagate the results of previous instructions into latter instructions, to take advantage of features that the target instruction has and the source does not.
- Perform liveness analysis/dead code elimination to NOP out IL instructions whose results are not used.
- Convert IL to destination, involving 1:1 translations for most core instructions.

There may be some more steps, but that's about the gist of it.

You've just described how LLVM works. Except that there is custom "lowering" code that will map multiple RISC-like instructions to the CISC instruction set of the x86 and, soon, will map to the instruction set of the ARM also.
 
Samurai_Crow said:
LLVM is already designed to be modular. You can tell the optimizer to skip any optimizations it doesn't need, in fact, the optimizer is a separate library from the LLVM AsmParser, BitCode reader and BitCode writer. The reason for using a high-end optimizer like LLVM is that there very SELDOM is a 1:1 mapping of all of the functionality of the source and destination instruction sets. Breaking down a complex instruction into simpler ones, and then remapping to other complex ones is what makes LLVM so powerful.

Please reread what I've been saying about a "superset IL" vs a "subset IL." I believe that combining operations into a powerful instruction set is more effective than breaking them down into a simple one. You don't have to have a simple language in order to support transformation into simple operations! I find this to be the backwards approach, and it puts a lot of burden on developing platform-specific structures that are even capable of holding optimized forms for whatever platform you're targeting.

Samurai_Crow said:
If people don't start tinkering and programming in low-level code on the machines that are easy to write low-level code for, such as the ARM and MIPS, they will be forever at the mercy of precedent of the likes of the x86 where big-shots in New York City sit and throw money at bloated monstrosities instead of making a clean break from the past. Having a good tool-chain to develop code with is central to that.

Don't start? Where have you been all this time? By the way, I personally think that MIPS is no more usable than x86 at a low level - in some regards it's better, but in other regards it's worse. It never struck me as especially "easy to write low level code for" when you consider its weak address modes, delay slots, etc...

Samurai_Crow said:
The reason the ARM isn't well supported by GCC or LLVM is that the backend isn't as commonly used as the x86 backend. The reason for that is that it has been years since the Acorn Archimedes was the development platform of choice for the Acorn RISC Machine processor. It's been hard to find an ARM-based development board that isn't intended for a handheld device. Now, with the advancement of ARM-based netbooks and the BeagleBoard, there is a cheap and available computer that people can tinker with to get the code working properly. The question is: "WILL THEY?"

Or at least that's what you speculate - those handheld devices you think have no market presence have been driving huge industries for years. Do you really think it's the enthusiast developer sector which ultimately drives compiler development? I think the problem is a more fundamental one of having intermediate representations that are a poor fit for what a powerful ISA like ARM can do. If LLVM is supposed to make this easy or natural then I don't see the evidence in the results, because the demand has certainly been present.

Samurai_Crow said:
If there were a 1:1 mapping of all the functionality of the opcodes of x86 to ARM, there would be as many power consumption problems for the ARM as there are for the x86! Also, if there were only translators from x86 to ARM, there would be no incentive to switch to the ARM instruction set, since people could use Visual Studio to keep writing x86 code and not worry about optimal ARM ports, because the translator would take care of that.

Uh, okay? Who said anything about there being a 1:1 mapping of functionality between x86 and ARM opcodes? Of course translators will lose efficiency, that really isn't the point of this thread.

Samurai_Crow said:
My point is that if we keep catering to x86 only with our translator codes then if another instruction set comes along that is more efficient we'll be back to square 1. If we use an intermediate representation, we'll have a short jaunt to being able to support the new binary format.

Who said anything about catering to x86?? Although your talk of ISAs that are "more efficient" is kind of funny - since when does that dictate what closed source programs are compiled for?

Samurai_Crow said:
You've just described how LLVM works. Except that there is custom "lowering" code that will map multiple RISC-like instructions to the CISC instruction set of the x86 and, soon, will map to the instruction set of the ARM also.

NO, please try to understand this: having a subset, "RISC-like" IL is a fundamentally different design from end to end.
 
Let me try a different approach to describe the advantage of a subset over a superset. In a hardware description language like VHDL, only 3 operations are supported as primitives: bitwise AND, bitwise OR, and bitwise invert. You can make any opcode you like out of these 3 operations. To make an XOR you build xor(a,b) = (a' & b) | (a & b'). To make a half adder (the least significant bit of an add circuit), the sum is xor(a,b) and the carry is a & b. To make a full adder with ripple carry, where c is the carry in and d is the carry out, chain two half adders: the sum is xor(xor(a,b), c) and the carry out is d = (a & b) | (xor(a,b) & c).
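
Here is the same construction written out as code on single bits, purely as an illustration - everything below is built from just AND, OR and invert:

/* Everything below is built from just AND, OR and invert, matching the xor /
   half adder / full adder formulas above.  Purely illustrative. */
#include <stdio.h>

static int not_(int a)        { return a ? 0 : 1; }   /* the invert primitive */
static int and_(int a, int b) { return a & b; }
static int or_(int a, int b)  { return a | b; }

static int xor_(int a, int b) { return or_(and_(not_(a), b), and_(a, not_(b))); }

/* half adder: sum and carry of two bits */
static void half_add(int a, int b, int *sum, int *carry)
{
    *sum   = xor_(a, b);
    *carry = and_(a, b);
}

/* full adder: c is the carry in, *d the carry out */
static void full_add(int a, int b, int c, int *sum, int *d)
{
    int s1, c1, c2;
    half_add(a, b, &s1, &c1);
    half_add(s1, c, sum, &c2);
    *d = or_(c1, c2);
}

int main(void)
{
    int sum, carry;
    full_add(1, 1, 1, &sum, &carry);             /* 1 + 1 + 1 = binary 11 */
    printf("sum=%d carry=%d\n", sum, carry);     /* prints sum=1 carry=1  */
    return 0;
}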

The point I'm making is that you could make a processor using only those 3 operators and, with some optimization, come up with a design that is as big and full-featured as you'd like. It's just the order and combinations of simple operations that define the big and complex ones. The only difference is how much time you want to put into the optimization.

LLVM develops good code on the x86 because a lot of time and energy has been put into the instruction combiner stages of the x86 backend. When complete, the ARM backend will do likewise. ARM is only the 5th processor instruction set to get a JIT compiler under LLVM. The first 4 were PPC32, x86, PPC64, and AMD64 in that approximate order.

Since the iPhone and iPod touch both use ARM processors, you'd better believe that LLVM's going to get Apple's best and brightest compiler designers working on the ARM backend. Adobe and Google use LLVM also. There's a lot of professional work going into it right now. That's why I think a translator is going to do well with LLVM as an optimizer and backend.

Also, who said a translator has to lose efficiency in the process of translation? With proper optimization, the code can come out smaller and faster than it started, if the destination ISA is more efficient than the source ISA. If an ARM processor were made that clocked at 3 GHz, Intel would be out of business because it would probably be faster than a Core 2 processor and fit the code in less memory.
 
Samurai_Crow said:
Let me try a different approach to describe the advantage of a subset over a superset. In a hardware description language like VHDL, only 3 operations are supported as primitives: bitwise AND, bitwise OR, and bitwise invert. You can make any opcode you like out of these 3 operations. To make an XOR you build xor(a,b) = (a' & b) | (a & b'). To make a half adder (the least significant bit of an add circuit), the sum is xor(a,b) and the carry is a & b. To make a full adder with ripple carry, where c is the carry in and d is the carry out, chain two half adders: the sum is xor(xor(a,b), c) and the carry out is d = (a & b) | (xor(a,b) & c).

The point I'm making is that you could make a processor using only those 3 operators and, with some optimization, come up with a design that is as big and full-featured as you'd like. It's just the order and combinations of simple operations that define the big and complex ones. The only difference is how much time you want to put into the optimization.

Eh, what? There are far more primitive capabilities than that in VHDL, I don't know what you learned.. nonetheless, it's hardly comparable when you're talking about a HIGH level language whose destination is gates or even transistors. In fact, what you're describing is more of a superset language when you look at it in that regard!

Samurai_Crow said:
LLVM develops good code on the x86 because a lot of time and energy has been put into the instruction combiner stages of the x86 backend. When complete, the ARM backend will do likewise. ARM is only the 5th processor instruction set to get a JIT compiler under LLVM. The first 4 were PPC32, x86, PPC64, and AMD64 in that approximate order.

That's exactly the problem! It takes a lot of work, specific to a particular ISA, to build a collection of propagation optimizations that not only use similar techniques as each other but can be broken down into several classes that recur across different architectures. If that work were done generically instead of per backend, doing an ARM backend would have been a lot less work. Three-address arithmetic, predicated execution, memory pointer arithmetic folding, even shift folding - these things are not ALL unique to ARM, and furthermore, if the propagation techniques for moving from a language with less of these features to one with them were designed to work on the IL itself, then you'd have to do a lot less work to develop them. But no, with this approach you basically have to start over when going to another platform.

Samurai_Crow said:
Since the iPhone and iPod touch both use ARM processors, you'd better believe that LLVM's going to get Apple's best and brightest compiler designers working on the ARM backend. Adobe and Google use LLVM also. There's a lot of professional work going into it right now. That's why I think a translator is going to do well with LLVM as an optimizer and backend.

Yeah, LLVM produces miserable ARM code and there are several other compilers that are better, but I'd better look out because it's going to be the best when they get around to, you know, actually doing all the stuff that'd make it good. Why should I buy into this kind of hype? Is it even very competitive with the best compilers on x86?

Samurai_Crow said:
Also, who said a translator has to lose efficiency in the process of translation? With proper optimization, the code can come out smaller and faster than it started, if the destination ISA is more efficient than the source ISA. If an ARM processor were made that clocked at 3 GHz, Intel would be out of business because it would probably be faster than a Core 2 processor and fit the code in less memory.

Wow, you must know nothing about recompilation if you don't recognize the inherent added costs involved. Suffice it to say that in the many years of emulation (even user-mode-only, as is being described here), nothing has come close to producing better-than-native performance on average.
 
Exophase said:
Eh, what? There are far more primitive capabilities than that in VHDL, I don't know what you learned.. nonetheless, it's hardly comparable when you're talking about a HIGH level language whose destination is gates or even transistors. In fact, what you're describing is more of a superset language when you look at it in that regard!
Sorry, I meant AHDL not VHDL. And the point is that ALL DIGITAL CIRCUITS are made up of only 3 types of gates.

As for where I learned gate-layout, I have two college degrees. One in electronic engineering technology and one in computer science.

Exophase said:
Samurai_Crow said:
LLVM develops good code on the x86 because a lot of time and energy has been put into the instruction combiner stages of the x86 backend. When complete, the ARM backend will do likewise. ARM is only the 5th processor instruction set to get a JIT compiler under LLVM. The first 4 were PPC32, x86, PPC64, and AMD64 in that approximate order.

That's exactly the problem! It takes a lot of work, specific to a particular ISA, to build a collection of propagation optimizations that not only use similar techniques as each other but can be broken down into several classes that recur across different architectures. If that work were done generically instead of per backend, doing an ARM backend would have been a lot less work. Three-address arithmetic, predicated execution, memory pointer arithmetic folding, even shift folding - these things are not ALL unique to ARM, and furthermore, if the propagation techniques for moving from a language with less of these features to one with them were designed to work on the IL itself, then you'd have to do a lot less work to develop them. But no, with this approach you basically have to start over when going to another platform.
LLVM is cross-platform. It generates code for Linux, MacOSX and Windows.

Most of the code in a backend is boilerplate code generated by a TableGen utility that does all of the common instructions for you. It's only the processor specific features that require custom lowering.
Exophase said:
Samurai_Crow said:
Since the iPhone and iPod touch both use ARM processors, you'd better believe that LLVM's going to get Apple's best and brightest compiler designers working on the ARM backend. Adobe and Google use LLVM also. There's a lot of professional work going into it right now. That's why I think a translator is going to do well with LLVM as an optimizer and backend.

Yeah, LLVM produces miserable ARM code and there are several other compilers that are better, but I'd better look out because it's going to be the best when they get around to, you know, actually doing all the stuff that'd make it good. Why should I buy into this kind of hype? Is it even very competitive with the best compilers on x86?
Absolutely. It's one of the best on the x86. Apple wants to replace GCC with it.
Exophase said:
Samurai_Crow said:
Also, who said a translator has to lose efficiency in the process of translation? With proper optimization, the code can come out smaller and faster than it started, if the destination ISA is more efficient than the source ISA. If an ARM processor were made that clocked at 3 GHz, Intel would be out of business because it would probably be faster than a Core 2 processor and fit the code in less memory.

Wow, you must know nothing about recompilation if you don't recognize the inherent added costs involved. Suffice it to say that in the many years of emulation (even user-mode-only, as is being described here), nothing has come close to producing better-than-native performance on average.
Usually it comes out average because emulation seldom does decent optimization. It's trying to do it in realtime and therefore misses some of the deeper forms of optimization. You mentioned dead-code elimination before. Emulators seldom do decent dead code elimination because their JIT compilers only fetch executed instructions. So there the dead code sits hogging RAM and, if you're lucky, gets swapped out by the memory pager.
 
Samurai_Crow said:
Sorry, I meant AHDL not VHDL. And the point is that ALL DIGITAL CIRCUITS are made up of only 3 types of gates.

No, all digital circuits CAN be made up of three types of gates. That doesn't mean there only are three types. It's all pretty moot though. Yeah, everything's made up of elementary particles at a certain point. That doesn't mean that it's advantageous to describe compounds in hadrons and leptons. Maybe we should convert everything to and from Turing machines somehow, those are sufficient right?

Samurai_Crow said:
As for where I learned gate-layout, I have two college degrees. One in electronic engineering technology and one in computer science.

I was referring to VHDL, which you already admit was not what you meant to say in the first place. Don't get defensive over something that was your mistake.

Hey, I have two degrees too, can I automatically win the argument with that too? :O

Samurai_Crow said:
LLVM is cross-platform. It generates code for Linux, MacOSX and Windows.

*smacks forehead*

I was referring to MACHINE ARCHITECTURE, not operating system! What would have possibly made you think otherwise....

Samurai_Crow said:
Absolutely. It's one of the best on the x86. Apple wants to replace GCC with it.

Hueheh, your statement seems to suggest that GCC is the best. Bear in mind, I am talking about performance of resultant code and nothing else.

Samurai_Crow said:
Usually it comes out average because emulation seldom does decent optimization. It's trying to do it in realtime and therefore misses some of the deeper forms of optimization. You mentioned dead-code elimination before. Emulators seldom do dead code elimination because their JIT compilers only fetch executed instructions. So there the dead code sits hogging RAM and, if you're lucky, gets swapped out by the memory pager.

I find your insinuation that emulators require more power than what they're emulating only because the emulators suck to be insulting. Doing whole-program optimization in translation is so far beyond infeasible that it's not even worth laughing about. And falling short of that, there are simply things that you will NOT be able to see can be optimized away in the bigger picture.

You don't even understand what I meant by "dead code elimination" - that doesn't mean chucking code that can't be executed, it means eliminating opcodes that no longer have a live side effect because it has been optimized out (or never existed). Again, this should have been obvious by the context >_< I do find it pretty funny that you think things like code "hogging RAM" is what's causing emulators to be slower than native though.
 
Exophase said:
I find your insinuation that emulators require more power than what they're emulating only because the emulators suck to be insulting. Doing whole-program optimization in translation is so far beyond infeasible that it's not even worth laughing about. And falling short of that, there are simply things that you will NOT be able to see can be optimized away in the bigger picture.

You don't even understand what I meant by "dead code elimination" - that doesn't mean chucking code that can't be executed, it means eliminating opcodes that no longer have a live side effect because it has been optimized out (or never existed). Again, this should have been obvious by the context >_< I do find it pretty funny that you think things like code "hogging RAM" is what's causing emulators to be slower than native though.

I never said that emulators suck. I consider them to be a necessary evil in the presence of closed-source code. If you're talking about translation within an emulator then I've missed the point entirely. Peephole optimization is about the only kind that is possible within an emulator due to JIT compilation.

As for whole-program translation, I started out by referring to Transmeta's Code Morphing Software which certainly did do direct binary to binary translation to get Windows code to run on its Crusoe processor. It was not a JIT as far as I know of, nor an emulator. It converted from one instruction set to another. That's what I've been talking about all along.

Lastly, hogging RAM sometimes means hogging cache due to cache misses. That certainly makes a difference in processor speed since it's wasted memory bandwidth.
 