Actually, the big hit from llvm-qemu comes from generating LLVM code from the intermediate macros, rather than going straight from x86 (which would have been much, much more work, and you can see why no one has done it yet). But what's telling is that it couldn't even do better than standard QEMU. Think about what's happening: QEMU at the time compiled blocks of guest code by pasting GCC-generated function bodies together, so each of those bodies was compiled in isolation, with no optimization across them. llvm-qemu instead pasted llvm-gcc-generated function bodies together, then ran the LLVM optimizer over the entire block. That should have bought some level of register allocation, liveness analysis, constant propagation, and so on across the whole block, and yet it still performed worse than the original version. It could have been llvm-gcc's fault, but I have to wonder.
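To make the comparison concrete, here is a minimal C sketch of the dyngen-style scheme described above. The micro-op names echo QEMU's conventions, but emit_op and body_len are hypothetical stand-ins for the machinery dyngen derived from the compiled object file; this is an illustration of the technique, not QEMU's actual code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Guest temporaries; real QEMU pinned these to fixed host registers. */
    long T0, T1;

    /* Each micro-op is an ordinary C function, compiled by GCC ahead of
     * time and in isolation from every other micro-op. */
    void op_movl_T0_im(void) { T0 = 42; }   /* stand-in immediate load */
    void op_addl_T0_T1(void) { T0 += T1; }

    /* Translation "pastes" the precompiled bodies back to back: copy the
     * finished machine code of each body (minus prologue/return, which
     * dyngen stripped) into the translation buffer. No optimizer ever
     * sees two micro-ops together. */
    static uint8_t *emit_op(uint8_t *buf, void (*op)(void), size_t body_len)
    {
        memcpy(buf, (const void *)op, body_len);
        return buf + body_len;
    }

The isolation is the point: once the bodies are finished machine code, memcpy is all the translator can do, so any redundancy between adjacent micro-ops survives into the translated block. llvm-qemu's change was to keep the bodies as IR long enough for one optimizer run to see the whole block, which is exactly why cross-micro-op cleanup should have been on the table.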