Release wine + qemu


victorsavu3

Would it be possible to run Windows applications (games) using qemu for x86-to-ARM translation and Wine for the Windows API?
 
This has come up before on the other forums, and while technically feasible, it would not likely be practical. DOSBox is an x86 emulator for ARM that can barely manage a slow 486 under the Pandora's spec, and it's optimized for games. Qemu is a general-purpose x86 emulator, so you're not going to get the same kind of speed out of it that you would with DOSBox. Add a Win32 layer on top of that and things just get worse. Maybe, if you're lucky, you'll be able to play Solitaire.
 
mindlord said:
Maybe, if you're lucky, you'll be able to play Solitaire.
Not a chance. Do you know how much boilerplate you need to use the most basic libraries in Windows?

Hello victorsavu3 btw, I've seen you in the spacerts repository before ;)
 
I disagree with the basic sentiment of the responses in this thread. Qemu's userspace x86-to-ARM translation as it stands now should be much more efficient than the ARM recompiler currently present in DOSBox, which is very slow. Userspace emulation has its own simplifications that lead to faster code, and Windows programs are higher level and spend more time in driver code (especially for 3D things), which runs more quickly because it is emulated at a high level.

Qemu is probably capable of around 15-20% of native speed for typical applications, so a 50-100MHz Pentium 1 wouldn't be completely out of the question. I personally believe that userspace x86 emulation can do much better: I expect an average of over 50% of native speed to be attainable. M-HT's static recompiler is doing 40-50% on one test application, and I believe that more aggressive optimization can yield significant gains.
 
Coming from an emulation guru such as yourself, I'll concede the point. So if you had to guess, what might we see out of a wine + qemu combination? Diablo, Half-Life, Tomb Raider, Warcraft II, Carmageddon? Or are DirectX and OpenGL too ambitious?
 
Exophase said:
I disagree with the basic sentiment of the responses in this thread. Qemu's userspace x86-to-ARM translation as it stands now should be much more efficient than the ARM recompiler currently present in DOSBox, which is very slow. Userspace emulation has its own simplifications that lead to faster code, and Windows programs are higher level and spend more time in driver code (especially for 3D things), which runs more quickly because it is emulated at a high level.

Qemu is probably capable of around 15-20% of native speed for typical applications, so a 50-100MHz Pentium 1 wouldn't be completely out of the question. I personally believe that userspace x86 emulation can do much better: I expect an average of over 50% of native speed to be attainable. M-HT's static recompiler is doing 40-50% on one test application, and I believe that more aggressive optimization can yield significant gains.
Well, if you say so, it must be right ;)
I was more under the impression that
1. Many x86 instructions, primarily SSE-related ones, would translate to a very large number indeed (say, 15) of ARM instructions to perform the same operation.
2. Many of the performance hacks used in the winelibs would be plainly inefficient and might actually slow stuff down, since we're dealing with two completely different architectures with different endianness etc. that will require a good heap of translation to make them work together, and where you can't use tricks like shared memory and whatnot (I don't really know much about this stuff, to be honest).

EDIT: @Mindlord Wine already has OpenGL wrappers and DX-to-OGL translation, so this shouldn't be an issue, at least if we get some kind of OGL2 compatibility library around the current OGLES2 driver on the Pandora. It will be slow, too, of course.
 
dflemstr said:
Well, if you say so, it must be right ;)
I was more under the impression that
1. Many x86 instructions, primarily SSE-related ones, would translate to a very large number indeed (say, 15) of ARM instructions to perform the same operation.

SSE wasn't available until the Pentium III, and wasn't commonly used until it was well supported by AMD as well, so the programs that require SSE will be outside the realistic scope for emulation. MMX instructions are a more realistic presence, but really both MMX and SSE1 map pretty decently to NEON, for the most part.

There are some complex x86 instructions, but they're not used very much, at least in software released after a certain point, because they don't really perform well on x86 anymore either. Even then, none of them are really so complex that they'd need more than a few ARM instructions, except those with "rep" prefixes, which don't execute in constant time on x86 anyway.

There are things that x86 programs do routinely that don't map directly to ARM. These include memory operands (especially destination operands, i.e. read-modify-write or RMW instructions), call/ret, which push/pop the return address on the stack, 8/16-bit operations, 32-bit immediate operands and address offsets, and some flag mismatches. They mean that some x86 instructions will convert to two or more ARM instructions, and with a straight conversion you get results like what M-HT has with his static recompilation, which for his tests were 40-50% of native.

The thing is, ARM can do many things that x86 can't. It has three-operand arithmetic and folded shifts, which can eliminate a move or a shift instruction from the x86 code under relatively common circumstances. It has conditional execution, which can eliminate small one-armed (and sometimes two-armed) if statements that got converted to branches. It has more registers, which could possibly be mapped to fixed locations on the x86 stack, eliminating some memory accesses. It has load/store multiple instructions, which could fold sequences of x86 load/store instructions, and pre/post increment/decrement can also be folded into memory accesses. With aggressive optimization I would imagine that you can improve the performance of translated code by a noticeable amount.
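To make the shape of that mapping concrete, here are two tiny C functions with rough instruction sequences in the comments. These are approximate and illustrative only, not actual compiler or translator output:

```c
/* Rough illustration only; the instruction sequences in the comments are
 * approximate, not actual compiler or translator output. */

/* Read-modify-write: one x86 instruction with a memory destination becomes
 * roughly three ARM instructions (load, modify, store).
 *   x86:  add dword ptr [eax], 1
 *   ARM:  ldr r1, [r0]
 *         add r1, r1, #1
 *         str r1, [r0]
 */
void bump(int *p)
{
    *p += 1;
}

/* Going the other way, ARM's conditional execution can absorb a small
 * one-armed if that x86 would handle with a compare and a branch.
 *   x86:  cmp eax, 0 / jge skip / neg eax / skip:
 *   ARM:  cmp r0, #0
 *         rsblt r0, r0, #0    @ executed only when r0 < 0
 */
int absolute(int x)
{
    if (x < 0)
        x = -x;
    return x;
}
```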

dflemstr said:
2. Many of the performance hacks used in the winelibs would be plainly inefficient and might actually slow stuff down, since we're dealing with two completely different architectures with different endianness etc. that will require a good heap of translation to make them work together, and where you can't use tricks like shared memory and whatnot (I don't really know much about this stuff, to be honest).

notaz already corrected you on the endianness. I don't know what "hacks" Wine uses, and I doubt any of them would be platform specific. High-level emulation of OS functions is going to be faster than low-level emulation of the machine code used to implement them, almost all of the time. Unless it represents hardware that's not compatible with something you have, it should be close to "native" speed, because that's what it is: native code.

dflemstr said:
EDIT: @Mindlord Wine already has OpenGL wrappers and DX-to-OGL translation, so this shouldn't be an issue, at least if we get some kind of OGL2 compatibility library around the current OGLES2 driver on the Pandora. It will be slow, too, of course.

If you can emulate a 200MHz Pentium then there will start to be some OpenGL, Direct3D, and Glide games whose requirements fall within that spec. This is at the higher end of the feasibility spectrum, but I wouldn't write it off unconditionally.

Let me talk a little bit about what I called "user mode emulation" and why it allows for much better performance. Obviously OS HLE can improve things, like I mentioned, but there are two main things that make the burden of machine code emulation lighter than it is for full-blown "system emulation." One is that you don't have to emulate physical device access, which means that the address space just consists of memory. Since the OS usually abstracts memory in some way, you just need an OS on the other end that's compatible with what the program expects; in some cases it's even possible to arbitrarily relocate the program entirely. Memory emulation is one of the biggest drains in system emulation and being able to reduce it to native accesses improves things dramatically. The other benefit is that you don't have to emulate interrupts, meaning that you don't have to count cycles and abort when they run out. Cycle counting is both a performance drain and a pain to work around while still allowing the code to be optimized.

The down side to all this is that you must emulate the OS at a high level. Fortunately, Wine is already doing exactly that.
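To put the memory point in concrete terms, here's a crude sketch of the difference. The structures are illustrative only, not QEMU's actual implementation: a user-mode guest load can become a single native load, while a system-emulation load has to go through a software TLB or device dispatch even on the fast path.

```c
#include <stdint.h>

/* Illustrative structures only; not QEMU's actual implementation. */

/* User-mode emulation: the guest address space is just part of the host
 * process, so a guest load is one native load (optionally offset by a fixed
 * base if the guest image had to be relocated). */
static uint8_t *guest_base;

static inline uint32_t user_load32(uint32_t guest_addr)
{
    return *(uint32_t *)(guest_base + guest_addr);
}

/* System emulation: every access first goes through the emulated MMU and may
 * target a device instead of RAM, so even the fast path needs a lookup. */
struct tlb_entry { uint32_t guest_page; uint8_t *host_page; };
#define TLB_ENTRIES 256
static struct tlb_entry tlb[TLB_ENTRIES];

static uint32_t slow_load32(uint32_t guest_addr)
{
    /* Would walk the emulated page tables or dispatch to a device model. */
    (void)guest_addr;
    return 0;
}

static uint32_t system_load32(uint32_t guest_addr)
{
    struct tlb_entry *e = &tlb[(guest_addr >> 12) & (TLB_ENTRIES - 1)];
    if (e->host_page && e->guest_page == (guest_addr & ~0xfffu))
        return *(uint32_t *)(e->host_page + (guest_addr & 0xfffu));
    return slow_load32(guest_addr);
}
```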
 
It seems like people have experimented with this on the Zaurus:

"It is possible to run windows programs in Linux on ARM but it takes more than just wine. Way back, when the Sharp Zaurus was big news some people managed to combine wine and qemu. Qemu was providing the processer emulation while wine provided the windows libraries. If you search you may still be able to find a tutorial about how to make that work. I have no idea what to expect performance wise but it would be an interesting experiment." link

Here is another link: http://www.oesf.org/forum/index.php?showtopic=14829

But if I understand correctly, they just tried to run a minimal x86 Linux with Wine on top, which would mean all the slow device, memory, and interrupt emulation would still have to be done? Or am I missing something here? Probably.
 
Good find.

What I've been wondering is how hard it is to write an x86 Linux to ARM Linux user mode emulator that is capable of running Wine. Since Wine is itself user mode it should work. It's unclear to me whether or not Qemu as it was used in this example was user mode only or full system. At any rate, this was a long time ago so things might be very different now (when it comes to performance).
 
I would be very, very happy if we could get anywhere near a 90MHz Pentium with this. All my old Maxis classics in my pocket... and Age of Empires says it will run on a 90MHz processor... :ph34r:
Of course, we'll wait to see if it actually works first :lol:
 
Given that if it works (I didn't get it to work some time back, but I'm hoping that somebody will), the qemu+wine approach that is currently feasible is essentially that arm-qemu runs (emulating the CPU and translating kernel calls) x86-wine (+ x86 Linux libs), which runs the x86 Windows app (and x86 Windows libs).
- This of course makes things much easier (and faster) than full system emulation (no need to emulate kernel code, hardware, etc.)
- But performance-wise I would like to see a much more optimized case of arm-wine (ARM libs) + qemu + x86-win-app ...
- Maybe even so that the free Wine libs would be compiled for ARM and only the x86 application would be emulated by qemu, which would need to translate between x86-Windows and ARM-Linux ABI calls (between the ARM Wine libs and the x86 app) (and x86 Windows kernel calls to arm-wine); a rough sketch of what such a bridge would have to do is at the end of this post.
- This would allow even the Wine DirectX-to-OpenGL translation layer to run natively, although it would need to be modified to be a DirectX to OGL ES layer.

This would all of course require lots and lots of work and pretty much a fusion of wine and qemu, so for now it's just a dream...
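Here's the rough sketch I mentioned. It's purely hypothetical (made-up names and structures), just to show what a fused wine+qemu would have to do every time emulated x86 code calls into an ARM-compiled Wine library: stop translating, pull the stdcall arguments off the guest stack, and call the native function with the normal ARM calling convention.

```c
#include <stdint.h>

/* Purely hypothetical sketch; names and structures are made up. */
struct x86_cpu {
    uint32_t esp;        /* guest stack pointer */
    uint32_t eax;        /* guest return-value register */
    uint8_t *guest_base; /* where the guest address space lives in host memory */
};

static uint32_t guest_pop32(struct x86_cpu *cpu)
{
    uint32_t v = *(uint32_t *)(cpu->guest_base + cpu->esp);
    cpu->esp += 4;
    return v;
}

/* Assumed to be Wine's MessageBeep implementation, compiled natively for ARM. */
extern uint32_t arm_wine_MessageBeep(uint32_t type);

/* Called by the emulator when the guest's call lands on the bridge for this
 * API: convert x86 stdcall (arguments on the stack, callee pops them) to the
 * native ARM convention (arguments in registers), then return the result in
 * the guest's EAX. */
static uint32_t bridge_MessageBeep(struct x86_cpu *cpu)
{
    uint32_t ret_addr = guest_pop32(cpu);   /* pushed by the guest's "call" */
    uint32_t type     = guest_pop32(cpu);   /* stdcall: callee pops its argument */
    cpu->eax = arm_wine_MessageBeep(type);
    return ret_addr;                        /* emulator resumes translated code here */
}
```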
 
With ARM netbooks becoming increasingly common, it wouldn't surprise me to find an ARM Wine/Qemu fork showing up in the repositories in the next couple of years.
 
urjaman said:
Given that if it works (I didn't get it to work some time back, but I'm hoping that somebody will), the qemu+wine approach that is currently feasible is essentially that arm-qemu runs (emulating the CPU and translating kernel calls) x86-wine (+ x86 Linux libs), which runs the x86 Windows app (and x86 Windows libs).
- This of course makes things much easier (and faster) than full system emulation (no need to emulate kernel code, hardware, etc.)
- But performance-wise I would like to see a much more optimized case of arm-wine (ARM libs) + qemu + x86-win-app ...
- Maybe even so that the free Wine libs would be compiled for ARM and only the x86 application would be emulated by qemu, which would need to translate between x86-Windows and ARM-Linux ABI calls (between the ARM Wine libs and the x86 app) (and x86 Windows kernel calls to arm-wine); a rough sketch of what such a bridge would have to do is at the end of this post.
- This would allow even the Wine DirectX-to-OpenGL translation layer to run natively, although it would need to be modified to be a DirectX to OGL ES layer.

This would all of course require lots and lots of work and pretty much a fusion of wine and qemu, so for now it's just a dream...

That would ultimately be the best course of action, but I wonder how much overhead the Wine translation costs in typical applications. I would suppose that you'd need Linux OGL to OGL ES translation (we already have something like that that can be used, from a few sources) for Linux to Linux translation.
 
As so often, an interesting post from Exophase :)

As far as complex instructions to emulate go, I'd like to add x87 and its 80-bit format. I don't know how often x87 instructions were used by the programs people would like to see, but it can be a real problem. Of course, most of the time going to 64-bit might be enough, but I'm sure some programs will misbehave due to the reduced precision.
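A trivial illustration of the kind of difference I mean: on x86, where long double is the 80-bit x87 format, this prints 1, while with plain 64-bit doubles (which is all ARM gives you) it prints 0.

```c
#include <stdio.h>

int main(void)
{
    /* 1e16 + 1 is exactly representable with the x87 64-bit mantissa, but in
     * IEEE 64-bit double it rounds back down to 1e16. Code that silently
     * relies on the extra x87 precision can therefore change behaviour when
     * emulated with 64-bit arithmetic. */
    long double a = 1e16L;
    long double b = a + 1.0L;
    printf("%d\n", (int)(b - a));
    return 0;
}
```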

Exophase said:
Memory emulation is one of the biggest drains in system emulation and being able to reduce it to native accesses improves things dramatically.
That is, from my point of view, the main benefit of high-level simulation of the OS.
 
Exophase said:
That would ultimately be the best course of action, but I wonder how much overhead the Wine translation costs in typical applications. I would suppose that you'd need Linux OGL to OGL ES translation (we already have something like that that can be used, from a few sources) for Linux to Linux translation.
Even for x86 on x86, OpenGL calls go through a wrapper in Wine because the calling conventions are different. That can become a problem, since many OpenGL calls do basically nothing by themselves, so you end up spending a measurable amount of time just wrapping calls.
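To give an idea of what that wrapping amounts to: the Windows GL entry points use stdcall while the host libGL is cdecl, so every single call goes through a tiny forwarding function, roughly like this (simplified sketch, not Wine's actual source):

```c
/* Simplified sketch, not Wine's actual source. */

/* Host-side (cdecl) prototypes, normally pulled in from the GL headers. */
extern void glClearColor(float r, float g, float b, float a);
extern void glEnd(void);

#ifdef __i386__
#define GL_THUNK __attribute__((stdcall)) /* the mismatch only exists on 32-bit x86 */
#else
#define GL_THUNK
#endif

/* What the Windows application calls: same signature, different convention. */
GL_THUNK void wine_glClearColor(float r, float g, float b, float a)
{
    glClearColor(r, g, b, a);   /* forward to the host implementation */
}

GL_THUNK void wine_glEnd(void)
{
    glEnd();                    /* even a do-nothing call pays the thunk cost */
}
```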
 
What no-one seems to be suggesting here is using LLVM for userspace recompilation. LLVM would be quite capable of recompiling a binary, optimizing it in the process, changing the calling convention, and shoving the resulting ARM code out in a very JIT-compiler fashion. WINE could be ported and used just like on a regular x86 machine, while the Windows binaries would run natively on ARM.
This whole idea of course assumes someone has written or is willing to write an x86 frontend and an ARM backend for LLVM. All it would really do is add a little to the start-up time. There will naturally be incompatibilities with applications that read their own or other applications' memory, and with applications depending on a certain memory layout at a higher level than assembly (like all Windows applications do, which is why we have the wine-preloader, which in this case could be mostly eliminated).

I'd like to know people's thoughts on this. It sounds feasible, but how many applications would be riddled with incompatibilities?

EDIT: There is an ARM backend, though it has a few known issues. Also, by applications reading their own/others' memory, I meant by calls such as ReadProcessMemory.
 
LLVM is brought up all the time whenever binary translation is mentioned. Unfortunately, I haven't seen any good numbers to show its relative strength.

LLVM-GCC produces good results, usually pretty narrowly within GCC (sometimes winning, sometimes losing). GCC's x86 output quality isn't really industry leading, although it's pretty good compared to what it does for other archs. LLVM-GCC being competitive is not a small feat, but it also doesn't mean that binary translation quality in this case will be stellar. We don't really know just what kind of overhead is added in the translation of x86 code to LLVM bytecode and how much can be recovered in the translation/optimization from LLVM to ARM. A lot of the heavier optimizations useful in something like LLVM-GCC won't apply here, and I don't think the ARM generation for LLVM is very mature if the quality of LLVM-GCC for ARM is any indication.

Incidentally, there was a Summer of Code project a couple of years ago that was trying to replace QEMU's code generation with LLVM's. You can see the results here:

http://code.google.com/p/llvm-qemu/

Nothing very good came of this; the end result was still slower than QEMU. Although this reflects an older LLVM, it should be noted that QEMU has changed a lot since then as well, although I haven't seen before/after benchmarks. If someone can find these or any others, I'd be interested in knowing.

Really, LLVM, QEMU, something else... it doesn't really matter, so long as good translated code is being produced and there's a layer for OS-level translation. LLVM probably doesn't have all of the facilities QEMU has for managing realtime emulation and code block management, so if you wanted to go with something existing you would probably still have to go with a hybrid.

EDIT: Please see this thread too:

http://markmail.org/message/ynub23sijfb ... te:results
 
Admittedly I don't know how ARM passes arguments etc, however from what I read, the calling conventions are different from x86. Now the goal of using LLVM would not be raw power, but more the ability to recompile to run the code natively. It doesn't have to be JIT; it could easily be a full recompilation + optimization. The idea is to eliminate the need for any layer between WINE and the application, or changes to WINE that aren't purely because of compilation/runtime issues - not cross-arch related.
In the case of qemu-llvm, I can imagine that a large performance hit would come from not having all the application code at start time, thus you can't optimize as aggressively.

The only remaining piece of the puzzle would be the x86 frontend for LLVM, which is a huge piece of work - then comes all the stability/optimization work of course.
I'm not at all saying this is a good idea, I'm just saying it's a solution. The idea of running x86 binaries on an ARM processor is ridiculous enough in itself, let alone binaries from Windows. All the good games have been ported anyway ;)
 
zhasha said:
Admittedly I don't know how ARM passes arguments etc, however from what I read, the calling conventions are different from x86.

Calling convention is the least of your concerns when performing binary translation. Except for user mode emulation layers it's a non-issue - how the program does it internally doesn't matter.

zhasha said:
Now the goal of using LLVM would not be raw power, but more the ability to recompile to run the code natively.

That's what QEMU does. What do you mean by raw power exactly? You want recompilation because it's faster, correct? Otherwise there should be no reason not to be content with interpretive emulation, which is probably better in all other ways.

zhasha said:
It doesn't have to be JIT; it could easily be a full recompilation + optimization.

Actually, static recompilation has many complications that make it less ideal than dynamic recompilation, and with poorer compatibility. Indirect branch target sets can't be directly known, and while heuristics exist, they may not resolve all targets and will most likely cause more code to be generated than is ever branched to; the more completely you cover the target set, the more false positives (and hence unused "code") you'll probably end up with. Dynamic loading and self-modifying code can't be handled at all, which is a more fundamental issue than may be evident. And if you intend to store translated executables on disk, you'd also be passing a large amount of the conversion bloat on to your filesystem.
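For contrast, this is roughly how a dynamic recompiler side-steps the indirect branch problem: the target is simply looked up at run time in a cache of already-translated blocks, and anything that hasn't been seen yet gets translated on demand. The structures here are illustrative, not QEMU's real ones:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only; not QEMU's real structures. */
struct tb {
    uint32_t guest_pc;   /* x86 address this block was translated from */
    void *host_code;     /* generated ARM code for the block */
    struct tb *next;     /* hash chain */
};

#define HASH_BITS 12
static struct tb *tb_hash[1u << HASH_BITS];

/* Stub standing in for the real translator, which would decode x86 starting
 * at guest_pc and emit ARM code. */
static struct tb *translate_block(uint32_t guest_pc)
{
    static struct tb pool[1024];
    static size_t used;
    struct tb *t = &pool[used++];
    t->guest_pc = guest_pc;
    t->host_code = NULL;
    return t;
}

/* Called for every indirect branch (and any block not yet directly linked):
 * resolve the guest PC at run time instead of having to predict it statically. */
static void *lookup_or_translate(uint32_t guest_pc)
{
    unsigned idx = guest_pc & ((1u << HASH_BITS) - 1);
    for (struct tb *t = tb_hash[idx]; t; t = t->next)
        if (t->guest_pc == guest_pc)
            return t->host_code;                /* hit: jump to translated code */

    struct tb *t = translate_block(guest_pc);   /* miss: translate on demand */
    t->next = tb_hash[idx];
    tb_hash[idx] = t;
    return t->host_code;
}
```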

zhasha said:
The idea is to eliminate the need for any layer between WINE and the application, or changes to WINE that aren't purely because of compilation/runtime issues - not cross-arch related.

Converting the code to LLVM and recompiling it does nothing to accomplish this. In fact, such a thing wouldn't really work, because WINE operates on x86 Windows executables and would almost certainly fail to understand a Windows executable that has been converted to ARM. To do things in this order (x86 Windows -> ARM Windows -> ARM Linux) you would need a completely different WINE implementation.

On the other hand, x86 Windows -> x86 Linux -> ARM Linux is a path that can work. By performing recompilation dynamically you can allow for this. WINE converts the application in memory to x86 Linux, and as this program is executed it is converted to ARM Linux.

zhasha said:
In the case of qemu-llvm, I can imagine that a large performance hit would come from not having all the application code at start time, thus you can't optimize as aggressively.

This is a naive assumption and is most certainly not the root cause. You can't compare binary translation to backend compilation. LLVM is a language designed to facilitate compiler output, not provide an intermediary between different machine codes - it just doesn't model certain low level details well enough. Typical compiler output is actually a lot simpler than machine code and contains more useful information. You won't have this information when converting from x86 to LLVM, nor will you have something that nicely models flags and other machine behavior in a way that LLVM's ARM code generator can use. Full program optimization is nice at a high level, but when you're dealing with machine code most of the important context is sitting in registers that will usually have a limited liveness window for any particular allocation. Besides that, you can recompile adaptively to achieve a lot of "whole program" like optimizations, and you can recompile greedily to achieve things like inlining.

Actually, the big hit from llvm-qemu comes from using the intermediate macros to generate LLVM code from, rather than going straight from x86 (which would have been much, much more work and you can see why no one has done this yet). But what's telling is that it couldn't even do better than standard QEMU. Think about what's happening: normally QEMU (as of then) would compile blocks of code by pasting GCC generated function bodies together. So each code block is compiled in isolation. llvm-qemu would paste llvm-gcc generated function bodies together, then run the llvm optimizer on the entire block. This should have provided some level of register allocation, liveness analysis, propagation, and so on, and yet it was still worse than the original version. It could have been llvm-gcc's fault, but I have to wonder.

zhasha said:
The only remaining piece of the puzzle would be the x86 frontend for LLVM, which is a huge piece of work - then comes all the stability/optimization work of course.
I'm not at all saying this is a good idea, I'm just saying it's a solution.

But so is using QEMU/WINE, and it's a solution where much more of the work is already done. In fact, it might be possible to run it in this manner right now, without any coding being necessary. Someone should try it.

zhasha said:
The idea of running x86 binaries on an ARM processor is ridiculous enough in itself, let alone binaries from Windows. All the good games have been ported anyway ;)

Hm, I wouldn't count on most people agreeing with you. Maybe in recent years some big-name games have been ported, but that isn't relevant. We're talking about games that ran on mid-to-late-'90s PC hardware; how many of those were ported to Linux? Or do you think all of them are bad..?
 