x86 Linux & ARM Linux Apps


Exophase said:
It's a common misconception that a program's code space can be exhausted statically by following the execution tree.
Ah, quite. I see what you mean now, yes. Thank you.
It's an interesting problem though, because I've been debugging and reverse engineering with IDA Pro for a few years now, and it almost always gets it right, even when dealing with indirect branches.
 
WizardStan said:
Exophase said:
It's a common misconception that a program's code space can be exhausted statically by following the execution tree.
Ah, quite. I see what you mean now, yes. Thank you.
It's an interesting problem though, because I've been debugging and reverse engineering with IDA Pro for a few years now, and it almost always gets it right, even when dealing with indirect branches.

Debugging and reverse engineering what? If you have source then it'll probably know the branch targets. If you have symbols then it'll probably be able to get the branch targets. Even with heuristics it'll probably get the branch targets a good deal of the time. But without "running" the result how can you know it got everything?

That aside, dynamically modified, loaded, and generated code really does exist and static recompilers really are incapable of handling it.
 
Exophase said:
Debugging and reverse engineering what?
Old DOS games, mostly. No idea how it does it, the executables don't have any debugging information. My most recent project I was working with someone on was the World Of Xeen engine. I "know" it's complete because I traversed the generated output to see if there were any orphaned blocks. There were a few: between the two of us, we fixed them. It didn't take more than a couple hours.
Don't get me wrong, I wasn't trying to suggest it could be done entirely automatically in all situations (although that may have been an unintended result of the debate), but it can get close enough that a human could step in and fill in the gaps, I think, resulting in something superior to what a dynamic recompiler can achieve.
 
WizardStan said:
Old DOS games, mostly. No idea how it does it, the executables don't have any debugging information. My most recent project I was working with someone on was the World Of Xeen engine. I "know" it's complete because I traversed the generated output to see if there were any orphaned blocks. There were a few: between the two of us, we fixed them. It didn't take more than a couple hours.
Don't get me wrong, I wasn't trying to suggest it could be done entirely automatically in all situations (although that may have been an unintended result of the debate), but it can get close enough that a human could step in and fill in the gaps, I think, resulting in something superior to what a dynamic recompiler can achieve.

Sure, and I hope you don't get the idea that I was saying static recompilation isn't useful for this. If you're working semi-manually on a case-by-case basis then static recompilation is a good way to go. M-HT has proven this several times now. But if you want to write an emulator that can work fully automatically with any kind of program then that's a different story.
 
Exophase said:
Sure, and I hope you don't get the idea that I was saying static recompilation isn't useful for this. If you're working semi-manually on a case-by-case basis then static recompilation is a good way to go. M-HT has proven this several times now. But if you want to write an emulator that can work fully automatically with any kind of program then that's a different story.
Bringing the topic full circle then, I wonder about the difference between a general-purpose x86 emulator (whether it uses static or dynamic recompilation) vs. something tuned specifically to handle Linux binaries. For the most part this is irrelevant and you should just compile from source, but I'm thinking about the few commercial games without source code that have released both a Windows and a Linux version. Could the Linux version be emulated in a way sufficiently superior to the Windows version to make it worthwhile, I wonder.
 
Exophase said:
You're thinking of Harvard address spaces, not Harvard internal buses (ie, icache/dcache, they both go to the same bus at some level up). Few architectures that are that interesting to recompile actually employ split address spaces. 68k certainly doesn't.
Well, it sort of does with its function code pins, which indicate whether it's fetching code or reading/writing data. So separate address spaces are possible; I think this was used by some arcade boards for protection.

WizardStan said:
It's an interesting problem though, because I've been debugging and reverse engineering with IDA Pro for a few years now, and it almost always gets it right, even when dealing with indirect branches.
IDA is good at certain things: it knows how certain compilers generate code and can detect various libraries. But try giving it a Mega Drive game ROM, for example; it gets lost pretty quickly.
 
If this thread was indeed hijacked, then I take full responsibility because I wanted to talk about this.

Exophase said:
I've written dynamic recompilers. People bring up static recompilers a lot, but if they're so viable then why do so few real static recompilers exist, much less ones that are completely automatic and run a significant amount of real software? Sometimes one will come along that calls itself a static recompiler but it's actually just a deeply recursive dynamic one. It still generates code at runtime - as much as it can, but it'll stop when it hits a dead end and has to be capable of picking up again later.

...

Of course, this still only works if your code is resident at load time. Anything dynamically loading, generating, or modifying code will fail. So if your program, say, uses overlays or loads DLLs in a way that's not determined by the executable structure at compile time, then it'll fail. Like for instance, plugins in a media player or emulator.

...
Maybe the reason there are so few static recompilers is A) because it's hard to get right, and B) because people like you discourage it. I did get the idea that you were saying static recompilation isn't useful on the whole. I also think that static recompilation can be completely automatic, if all of the information is available, meaning all usable plugins and whatnot are present. Perhaps the solution has just not appeared yet, likely because dynamic recompilation is a proven, viable solution. Mixing the two techniques might be useful as well.

dflemstr said:
Provided that rand() has been seeded properly, can you tell what this program will output and which functions will be called under which conditions? Can a good recompiler even detect the branch and all of its alternatives without executing the code?
Yes it can. By performing the bitwise 'and', you limited the possibilities to two. I know that was a simple example, but that is similar to what useful code will do. You never want your program to execute from random memory locations, but you might want it to execute randomly from a selection of memory locations.
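The sort of code in question looks roughly like this (a reconstruction in C, not the exact example from earlier in the thread; the names a, b and the table are invented for illustration):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* the two possible targets of the indirect call */
static void a(void) { puts("called a"); }
static void b(void) { puts("called b"); }

/* array of function pointers; the index is only known at runtime */
static void (*const table[2])(void) = { a, b };

int main(void)
{
    srand((unsigned)time(NULL));
    /* the AND limits the index to 0 or 1, so only a or b can ever be
       called, but in the compiled binary this is just a load from a
       table followed by an indirect call */
    table[rand() & 1]();
    return 0;
}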

BTW, it appears you have gone to the dark side. It was the cookies, wasn't it.
 
dflemstr said:
But a static recompiler does not know what those memory locations point at, that the array even holds functions, or that the array is only of length 2.

Mr.Confuzed said:
BTW, it appears you have gone to the dark side.
How so? :p
By switching sides on this particular topic.

I don't understand your first comment. But the recompiler does know that the array of functions holds functions because the code will make a call using one of the two pointers, while passing a return address (on the stack I would assume). Also, it doesn't matter if the array is longer or not, because any extra elements cannot be used by the segment of code in question. So if there were fifty function pointers, and the code only used two, then the recompiler could actually weed out the useless ones. Not that that's going to happen.
 
Mr.Confuzed said:
I don't understand your first comment. But the recompiler does know that the array of functions holds functions because the code will make a call using one of the two pointers, while passing a return address (on the stack I would assume). Also, it doesn't matter if the array is longer or not, because any extra elements cannot be used by the segment of code in question. So if there were fifty function pointers, and the code only used two, then the recompiler could actually weed out the useless ones. Not that that's going to happen.
You're talking about a very smart compiler, you know... However, it all goes back to my previous point. The evaluation of "rand() & 1" will probably take place in a register when using an optimizing compiler, so a recompiler has to keep track of what specific registers contain and what limits the code puts on them (in this case, that the value is limited to 0 or 1). That essentially forces it to execute/interpret the code, which makes a dynamic recompiler equally suited for the job, so you don't gain anything by statically recompiling. A dynamic recompilation would probably yield a much better result anyway, since a static recompiler has to *predict* code execution while a dynamic recompiler just has to *watch* it.

And if we exchange the quoted code above for "rand() & rand()", things become more interesting (as you kinda pointed out). rand() might not be a great example to use in this case, but imagine having some functions calculating something more sensible instead, preferably even located in a separate binary.
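As a rough illustration of the bookkeeping described above, a recompiler doing this kind of analysis might track a [min, max] interval per register; the structure and function below are invented for the sketch, not taken from any real recompiler:

Code:
#include <stdint.h>
#include <stdio.h>

/* hypothetical value-range tracking: what a recompiler might record
   about a register after seeing "reg = something & constant" */
typedef struct { uint32_t min, max; } range_t;

static range_t range_after_and_const(uint32_t mask)
{
    /* ANDing with a constant can only clear bits, so whatever the
       (unknown) input was, the result lies somewhere in [0, mask] */
    range_t r = { 0, mask };
    return r;
}

int main(void)
{
    range_t r = range_after_and_const(1);   /* models "rand() & 1" */
    printf("possible values: %u..%u\n", (unsigned)r.min, (unsigned)r.max);
    return 0;
}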
 
dflemstr said:
You're talking about a very smart compiler, you know... However, it all goes back to my previous point. The evaluation of "rand() & 1" will probably take place in a register when using an optimizing compiler, so a recompiler has to keep track of what specific registers contain and what limits the code puts on them (in this case, that the value is limited to 0 or 1). That essentially forces it to execute/interpret the code, which makes a dynamic recompiler equally suited for the job, so you don't gain anything by statically recompiling. A dynamic recompilation would probably yield a much better result anyway, since a static recompiler has to *predict* code execution while a dynamic recompiler just has to *watch* it.
...
I never said it would be easy, but there is a logical approach that should work most, if not all of the time. My main point here is that there is nothing wrong with static recompilation. Instead of telling people, "don't bother, it isn't worth it," maybe saying, "it's insanely hard and hasn't been properly done before," is better?

Hasn't M-HT, or whatever his name was, proven that static recompilation can produce better code due to not having time constraints?
 
Mr.Confuzed said:
I never said it would be easy, but there is a logical approach that should work most, if not all of the time. My main point here is that there is nothing wrong with static recompilation. Instead of telling people, "don't bother, it isn't worth it," maybe saying, "it's insanely hard and hasn't been properly done before," is better?

Hasn't M-HT, or whatever his name was, proven that static recompilation can produce better code due to not having time constraints?
Now look, the way you put it, it seems that you think it's OK for the compiler to do anything it wishes with the binary in order to statically recompile it. So, wouldn't, according to you, a dynamic recompiler that lets the user execute a program slowly but using a lot of optimizations, and that then saves the resulting binary to a cache file, be a very good static recompiler?

I think there's a definition problem here.

Also, due to the halting problem, it is impossible to fully understand what a binary does without executing it (it's mathematically proven) so even if there's a static recompiler that can do wonderful things, there'll always be a dynarec that does a better job.
 
Mr.Confuzed said:
Maybe the reason there are so few static recompilers is A) because it's hard to get right, and B) because people like you discourage it. I did get the idea that you were saying static recompilation isn't useful on the whole.

A) Why harder than dynamic? In fact, since there's no point trying to track self-modifying code you can do nothing about anyway, it may even be easier than dynamic, not counting the cases where it's simply not possible.
B) First of all, for every "person like me" there have got to be 20 people like you insisting that it'd be the best thing since sliced bread if someone just tried. Second, people saying something can't be done isn't going to deter someone with practical experience who knows better - if anything it would encourage them to prove the person wrong.

Mr.Confuzed said:
I also think that static recompilation can be completely automatic, if all of the information is available, meaning all usable plugins and whatnot are present. Perhaps the solution has just not appeared yet, likely because dynamic recompilation is a proven, viable solution. Mixing the two techniques might be useful as well.

If you're giving it information then it's not fully automatic you know.. but you'd have to tell it a hell of a lot more than "this DLL is going to be loaded by the program, okay?".. DLL offsets are resolved at runtime, you need to know where that DLL is going to be loaded into memory and reconcile that ahead of time. Somehow.
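To make that concrete, runtime loading on the Linux side looks something like the POSIX sketch below (the Windows equivalent would be LoadLibrary/GetProcAddress). "plugin.so" and "plugin_entry" are hypothetical names; the point is that nothing in the main executable's on-disk image contains the code that ends up being called:

Code:
#include <dlfcn.h>
#include <stdio.h>

/* on Linux this may need to be built with: cc demo.c -ldl */
int main(void)
{
    /* library and symbol are only resolved at runtime; the name could
       even come from a config file the recompiler has never seen */
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void (*entry)(void) = (void (*)(void))dlsym(handle, "plugin_entry");
    if (entry)
        entry();
    dlclose(handle);
    return 0;
}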

Mr.Confuzed said:
Yes it can. By performing the bitwise 'and', you limited the possibilities to two. I know that was a simple example, but that is similar to what useful code will do. You never want your program to execute from random memory locations, but you might want it to execute randomly from a selection of memory locations.

The rand was actually totally irrelevant. The recompiler won't be able to know the addresses of a and b in the first place, if they're not pointed to directly. Heuristics might determine this, but then again they might not.

Mr.Confuzed said:
I don't understand your first comment. But the recompiler does know that the array of functions holds functions because the code will make a call using one of the two pointers, while passing a return address (on the stack I would assume). Also, it doesn't matter if the array is longer or not, because any extra elements cannot be used by the segment of code in question. So if there were fifty function pointers, and the code only used two, then the recompiler could actually weed out the useless ones. Not that that's going to happen.

All of these things happen at runtime, dude. A STATIC recompiler can't know any of this. Even if it can tell something is a pointer it won't be able to know what it's pointing to. Sure, jump tables are often bounded, but it's naive to think that it'll be so obvious to determine what the possible targets can be because there'll have been an AND nearby.

Even then, that kind of range analysis is pretty heavy stuff for a compiler. Look at GCC. If you do switch(value & 0x3) where you've defined cases 0 through 3 it'll still check to see if the switch input is >= 4.
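For reference, the construct in question is just the following (the surrounding function is scaffolding invented for the example; whether the emitted code keeps the range check depends on the compiler version and flags):

Code:
#include <stdio.h>

static int dispatch(unsigned value)
{
    /* only 0..3 are possible because of the mask, yet (per the post
       above, and depending on compiler version and flags) the emitted
       code may still include an "is it >= 4?" check before the jump
       table */
    switch (value & 0x3) {
    case 0: return 10;
    case 1: return 11;
    case 2: return 12;
    case 3: return 13;
    }
    return -1; /* unreachable, but keeps warnings away */
}

int main(void)
{
    printf("%d\n", dispatch(7)); /* 7 & 3 == 3, so this prints 13 */
    return 0;
}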

Mr.Confuzed said:
I never said it would be easy, but there is a logical approach that should work most, if not all of the time. My main point here is that there is nothing wrong with static recompilation. Instead of telling people, "don't bother, it isn't worth it," maybe saying, "it's insanely hard and hasn't been properly done before," is better?

You're not making your point very well. I've given legitimate weaknesses with static recompilation, that's all. What you've failed to do is give any legitimate strengths. If someone wants to go try one that's their prerogative. But assuming that it hasn't been done well just because it's hard is naive.

Mr.Confuzed said:
Hasn't M-HT, or whatever his name was, proven that static recompilation can produce better code due to not having time constraints?

How do you think he "proved" such a thing? By starting with a dynamic recompiler first and showing that he was able to do better optimizations this way? Because that's not what happened. Your assumptions are way off, because M-HT's static recompilers employ very little optimization. There's nothing that they do that can't be done quickly in a dynamic recompiler.
 
You need a huge amount of 'proof' even for very simple programs to guarantee that execution is gonna be right. And of course, the moment that -any- dynamic code appears you'll need to include a fully ISA-compliant runtime model (interpreter and/or dynarec) to handle those cases. Static (or static-assisted) analysis of a program can be very useful to help a dynamic recompiler (or, alternately, a block graph that is built at runtime and stored to flash/hdd for faster execution next time). The problem with 'full' system emulators (like consoles and stuff) is that various external factors make it MUCH MORE complex (a write to memory can generate an exception/interrupt and lead to a change of the program flow), etc. For a user-mode-only, limited case (no interrupts or memory-mapped registers) a static pre-pass can help a lot -- but you will need a full dynarec for runtime generated/loaded code as well ;p
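A toy sketch of that "static pre-pass plus runtime fallback" arrangement, in C; everything here (the names, the two-block 'guest program', the table) is invented for illustration rather than taken from any real emulator:

Code:
#include <stdint.h>
#include <stdio.h>

/* a translated block takes the guest PC, updates emulated state, and
   returns the next guest PC (0xFFFFFFFF here means "halt") */
typedef uint32_t (*block_fn)(uint32_t pc);

static int guest_counter;  /* stand-in for emulated machine state */

/* blocks produced by the ahead-of-time (static) pass */
static uint32_t block_0000(uint32_t pc) { guest_counter += 1; return pc + 4; }
static uint32_t block_0004(uint32_t pc) { guest_counter += 2; return pc + 4; }

/* result of the static pre-pass: guest PC -> translated block (or NULL) */
static block_fn static_table[2] = { block_0000, block_0004 };

static block_fn lookup_static(uint32_t pc)
{
    uint32_t idx = pc / 4;
    return (idx < 2) ? static_table[idx] : NULL;
}

/* fallback for code the static pass never saw (overlays, generated code);
   a real emulator would interpret or dynamically recompile here */
static uint32_t fallback(uint32_t pc)
{
    printf("no static block for pc=0x%08x, falling back to dynarec\n", pc);
    return 0xFFFFFFFF;
}

int main(void)
{
    uint32_t pc = 0;
    while (pc != 0xFFFFFFFF) {
        block_fn f = lookup_static(pc);
        pc = f ? f(pc) : fallback(pc);
    }
    printf("guest_counter = %d\n", guest_counter);
    return 0;
}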
 
Oh god, what have I gotten myself into. :lol:

@dflemstr: I don't think there is a definition problem. A static recompiler shouldn't be executing code with data from the user. It might have to execute small portions of code for all possible inputs. Not sure about that though.

Regarding the halting problem, maybe you can prove me wrong, but it doesn't seem to be the same problem. It describes a function that determines if a process ends. The goal here is to translate the process into another language. It's probably just that my brain is fried...

Exophase said:
A) Why harder than dynamic? In fact, since there's no point trying to track self-modifying code you can do nothing about anyway, it may even be easier than dynamic, not counting the cases where it's simply not possible.
...
Even then, that kind of range analysis is pretty heavy stuff for a compiler. Look at GCC. If you do switch(value & 0x3) where you've defined cases 0 through 3 it'll still check to see if the switch input is >= 4.
This heavy stuff is why it's harder.

Exophase said:
First of all, for every "person like me" there have got to be 20 people like you insisting that it'd be the best thing since sliced bread if someone just tried. Second, people saying something can't be done isn't going to deter someone with practical experience who knows better - if anything it would encourage them to prove the person wrong.
You could be right. I'm not into psychology right now, and you encouraged me to speak up. Way to go! :D

Exophase said:
If you're giving it information then it's not fully automatic you know.. but you'd have to tell it a hell of a lot more than "this DLL is going to be loaded by the program, okay?".. DLL offsets are resolved at runtime, you need to know where that DLL is going to be loaded into memory and reconcile that ahead of time. Somehow.
So if DLL offsets are resolved at runtime, can't you do the same in the recompiled version?

Exophase said:
The rand was actually totally irrelevant. The recompiler won't be able to know the addresses of a and b in the first place, if they're not pointed to directly. Heuristics might determine this, but then again they might not.
I would think that the heuristics would work most of the time. If not, fall back on a dynamic recompiler, or fix it manually, or try to write better heuristics.

Exophase said:
You're not making your point very well. I've given legitimate weaknesses with static recompilation, that's all. What you've failed to do is give any legitimate strengths. If someone wants to go try one that's their prerogative. But assuming that it hasn't been done well just because it's hard is naive.
...
How do you think he "proved" such a thing? By starting with a dynamic recompiler first and showing that he was able to do better optimizations this way? Because that's not what happened. Your assumptions are way off, because M-HT's static recompilers employ very little optimization. There's nothing that they do that can't be done quickly in a dynamic recompiler.
I'm just going to assume you're right because it doesn't matter to my argument. Here's the obvious legitimate strength: Statically compiled code will run faster than equally efficient dynamically compiled code, because dynamically compiled code has to be compiled before it is run. Maybe M-HT's code wasn't any better, but I think it could have been, if some good code analysis was done.

@drkllRaziel: Does anyone have a good idea how often self-modifying code springs up? I thought it was a discouraged practice. Keep in mind, I'm not referring to dynamically loaded code, where the code to be loaded is available. I'm also not talking about code that modifies the data portion of other code.
 
Mr.Confuzed said:
This heavy stuff is why it's harder.

The point I was trying to make is that those kinds of optimizations often don't work out. Here you can't afford to have it not work.

Mr.Confuzed said:
So if DLL offsets are resolved at runtime, can't you do the same in the recompiled version?

So now you want the general purpose recompiler to know things about the OS? That aside, you have no idea what DLLs are going to be loaded, and any combination of them would change the address resolution. So no, you can't.

Mr.Confuzed said:
I would think that the heuristics would work most of the time. If not, fall back on a dynamic recompiler, or fix it manually, or try to write better heuristics.

We were talking about an automatic solution; "fix it manually" is not an option. "Try to write better heuristics" is a silly answer to a hypothetical question ;P Usually when someone has tried a static recompiler it's been more because they didn't want to do a dynamic one than because there was a real benefit.

Mr.Confuzed said:
I'm just going to assume you're right because it doesn't matter to my argument. Here's the obvious legitimate strength: Statically compiled code will run faster than equally efficient dynamically compiled code, because dynamically compiled code has to be compiled before it is run. Maybe M-HT's code wasn't any better, but I think it could have been, if some good code analysis was done.

Your argument has two flaws. One, recompiled code takes time to load from disk or flash or whatever that could very well exceed the time it'd take to dynamically recompile it. Two, performance measurements should be asymptotic, which completely weeds out any startup costs. So it might start faster (or not..) but it wouldn't "run" faster, which I guarantee you is what the user cares more about.

You go on about analysis and improvements but it's all just vague. It's easy to talk about something being better when you're not pressed to give any real examples.
 
Exophase said:
Your argument has two flaws. One, recompiled code takes time to load from disk or flash or whatever that could very well exceed the time it'd take to dynamically recompile it. Two, performance measurements should be asymptotic, which completely weeds out any startup costs. So it might start faster (or not..) but it wouldn't "run" faster, which I guarantee you is what the user cares more about.

You go on about analysis and improvements but it's all just vague. It's easy to talk about something being better when you're not pressed to give any real examples.
I'm pretty sure the code that needs to be recompiled has to load too. Well, you're the dynarec expert here, so I can't argue with the second flaw.

Alright, fair enough, I can't give you an example of faster statically recompiled code, because I haven't written a static recompiler. We have no way of knowing if it could or couldn't be faster. Regardless, it is still viable in semi-manual form.

I concede. Dynamic recompilation is the better method all around, as of right now.

Thanks everyone; it's been fun, except for that halting problem part. Now the next time someone mentions this you can point to this thread :)
 
I just wanted to add (without going into details) that a lot of the speed in my static recompiler comes from the fact that it's targeted at a single executable.
If I wrote a dynamic recompiler targeting a single executable, then it would be slower than the static recompiler, but still much faster than a dynamic recompiler targeting a lot of executables. The advantage would be that it requires less manual input than the static recompiler.
 
GuchaRU said:
The question is whether or not it will be possible to run regular Linux apps on the Pandora. I know both the PC and the Pandora can run Linux, but what about Linux apps?
I'm not asking about games or apps that require high performance, I'm asking in general.
Or is the only way to run a Linux app on the Pandora to compile it specifically for the ARM version of Linux?

Sorry if this was answered before, but I couldn't find it.
WizardStan said:
-Tj- said:
:blink: Eh? Wai-wait... how easy is it? Like for a beginner who's never once compiled an app? If I could learn how to do it, I think I'd gladly do it quite often.
The vast majority of stuff I've had to install from source has gone like this:
1) download the source (as a tar.gz file)
2) uncompress the tar.gz file to a temporary directory
3) start a terminal, and "cd" to that temporary directory
4) type "./configure"
5) type "make"
6) type "make install" (may need root privileges for this)
7) start the program by typing its command, and/or adding a shortcut to the menu or desktop
An absolute beginner would need a step-by-step guide (how exactly to start a terminal, or uncompress a tar.gz file, for example), but if you've got at least a basic understanding, it should be pretty simple.
This assumes that you have all the right packages already installed. Making sure that you have everything you need first is what will cause the greatest challenge.
Aninhumer said:
Poem58, there are two different conversations going on here unfortunately. <_<
A lot of devs have hijacked this topic, which was asking a simple question about regular compiling of source code (which is easy), to talk about recompiling binaries (which isn't). Guys, go make a topic in the dev board or something, and stop confusing people! ;)

If you find an app you want to run on the pandora, but it doesn't work, usually it will fail at the configure stage with an error, and you might be able to guess what package is needed from the output.

If you think the app would be useful/desirable/cool for other people, mention it on the forums and someone can probably package up a PND file and put it on the archive.


Overall, although it's likely some apps will compile fine, it's probably better for a less experienced user to wait for a properly packaged version, as what might be hours of confusion for a newbie could be a few minutes for someone else who doesn't mind doing it for the community. :)

Aninhumer: yeah, I noticed the topic change. It went from the original post asking whether you need to compile x86 to ARM (which, as you see, was answered) to the second quote above, where WizardStan gave the steps. However, my question was about those steps. I appreciate that you responded; however, asking someone else to do it was kinda against the point. Some of us would like to learn how to do it, and my questions were asked to determine how likely it is that someone like myself would be able to do this. The idea of many people learning to compile for the Pandora so as to increase the software base seems like a good one to me. I don't like it when people say "ask and someone will package it for you", because no, they probably won't. Let's assume 50% of the Pandora buyers don't know how to compile yet. Who is going to take the time necessary to compile 2000 requests!? However, if 2000 people learn to do it themselves, suddenly you have the possibility of 2000 programs being available for everyone (I know not everyone will upload, and there will be people with the same software, but you get my point).
I'd rather do it myself and share it if possible than spend weeks asking someone to do it for me.

So if someone could answer my two questions, then I'll know if it's going to be likely that I can be a contributor and avoid being a beggar. LOL

I don't need an entire manual or anything; I'm sure someone will post a guide. But I was curious whether this really was going to be something I could do. I just don't fully understand yet the process that was outlined by WizardStan when it comes to "having the right packages installed" to compile. How does one figure that out when something doesn't compile? And once you compile it, is it fully ready to go, or does the downloader of that package also have to "have the right packages installed"?
Keep in mind I have never compiled anything and have barely used Linux (an Ubuntu live CD for an hour), so saying it "fails at the configuration stage" (???) and "guess by the error" (how?) doesn't really help much. Could someone just give me a simple example of what the error might say? Does it just say "compile failed, missing ABCD.XXX" or something?
Or is there some guide somewhere I could read that explains all this? Anything I can learn would be helpful.

Sorry, devs, for interrupting the static vs. dynamic stuff I don't understand... lol. Just trying to learn to be 0.01% as good as you guys!
 
Poem58 said:
I just don't fully understand yet the process that was outlined by WizardStan when it comes to "having the right packages installed" to compile.
I know it doesn't apply in all cases (or answer all your questions, unfortunately), but quite often the readme file that will most likely accompany any source code you download will tell you what else you need to have installed before you can compile it (or perhaps the place you get it from might have a note to this effect published there). For example, it might say that it needs "libsuchandsuch", so you should make sure you have that before trying to compile the code. You'll likely see these things referred to as "dependencies", since they're what the program you're trying to compile depends upon to work. :p
 