Segfault With Optimization


If I set optimization to "Default" everything runs fine, but once I compile with "Optimize for speed" I get segfaults. Under what circumstances can this happen? Could it be that the compiler ignores "__attribute__ ((packed))" with this setting? Or are there other possible causes?
 
A segfault generally means reading or writing code/data at an invalid memory address (or trying to write to a read-only memory location).
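A couple of made-up, minimal examples of the two cases:
Code:
int main(void)
{
    char *bad = (char *)0;       /* NULL is not a valid address              */
    *bad = 'x';                  /* write to an invalid address -> SIGSEGV   */

    char *ro = "hello";          /* string literals usually live in a        */
    ro[0] = 'H';                 /* read-only segment -> SIGSEGV on write    */

    return 0;
}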

If it only happens when you enable optimisation, best thing to do is change the compiler version - try a later version and an older version. I've not had any problems with gcc 4.0.2 for example. 3.4 is also another good version to try.
 
It is also very likely that optimization has uncovered a pre-existing bug. There is less margin for error with optimization turned on, so any out-of-bounds array accesses are more likely to cause segfaults rather than just returning uninitialized gibberish.
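A contrived example (not from your code) of the kind of bug that stays hidden until the optimiser changes the stack layout:
Code:
#include <stdio.h>

int main(void)
{
    int important = 42;
    int buf[4];
    int i;

    /* Off-by-one: the last iteration writes buf[4], one element past the
       end of the array.  Unoptimised, the stray write often lands in unused
       stack space and goes unnoticed; optimised, the frame is laid out
       differently, so the same write can clobber a live variable, a saved
       pointer or the return address, and then you get a segfault. */
    for (i = 0; i <= 4; i++)
        buf[i] = 0;

    printf("%d\n", important);   /* may print 0 instead of 42 */
    return 0;
}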
My advice is to try and find where it is segfaulting by putting printfs near where you think it might be going wrong and using a script to redirect the output to a file.
The file is called launch.gpe; with it there's also no need to have your program re-launch the menu program itself, which is nice for debugging on your dev platform if you have that option. Here's the script I normally use:
Code:
#!/bin/sh

./program.gpe > output 2>&1   # send stdout and stderr to a file (works in plain /bin/sh, unlike &>)

# return to the menu screen
cd /usr/gp2x
sync
exec /usr/gp2x/gp2xmenu

 
I had a similar problem where a program compiled with -O1 or -O2 optimization on gcc 4.0.2 would randomly set a certain local variable to 0. No stack corruption or anything, even after an extensive search. Now that I have gcc 3.4.6 running, I should try compiling with that.
 
If you have a working USB cable, then you can always telnet into the gp2x and run the application directly rather than redirecting to a file. Saves wear and tear on the SD card, and makes it a lot easier.

Also, don't forget you can use GDB, which is a full debugger. It can tell you exactly where your program falls over. The output isn't that good on heavily optimised code, but it can still give you clues and let you examine variables, set breakpoints, etc. to track down the problem.
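A rough idea of a session (binary, file and variable names made up):
Code:
gdb ./program.gpe        # or the cross gdb / gdbserver combo for the GP2X
(gdb) run                # run until it segfaults
(gdb) bt                 # backtrace: which function it died in, and its callers
(gdb) frame 1            # switch to the caller's stack frame
(gdb) print some_ptr     # inspect a variable (may show "value optimized out")
(gdb) break file.c:123   # set a breakpoint for the next run
(gdb) watch some_ptr     # stop as soon as the value changes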
 
Out of curiosity, how do you run it directly without writing to SD? I got nfs working, but it is unbearably slow... and I haven't been able to get a samba client working yet. I've just been transferring stuff with FTP and running through telnet.
 
Thanks for your input! I know how to use the debugger and telnet, but this doesn't make it much easier. The problem is that I get a wrong pointer. The program crashes when the pointer is used, but I need to know when it gets the wrong value. Normally I can trace this by single-stepping and watching the variables, but with optimized code I only get "value optimized out" for certain variables. The strange thing is that the pointer is valid without optimization, and I really have no idea why the optimization could screw everything up. Mudi's experience could be an explanation, but if this is true, what could be the reason for it? A bug in the gcc compiler?
 
As others said, usually a crash at a higher optimization level uncovers a bug in the program itself rather than in the compiler.

You could try to run your program on a Linux x86 PC with valgrind.
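Something like this (assuming the code also builds natively on the PC, program name made up):
Code:
# memcheck (the default tool) reports invalid reads/writes and use of
# uninitialised values as they happen, with a stack trace for each one
valgrind --leak-check=full ./program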
 
Yup, it could be a bug in the compiler, and the only way of finding that is going through the generated assembler and finding the fault yourself, then altering your code until it produces decent assembler. The easiest way is to just try another version of GCC.

It could also be that the memory layout is different under different optimisation levels, so everything is fine with no optimisation, but with it turned up high it crashes due to an unaligned access.
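To make that concrete, and since the original question mentioned __attribute__((packed)): accessing a packed member through the struct is safe, but taking its address and using it as a normal pointer is not (struct and names made up):
Code:
struct __attribute__((packed)) header {
    char tag;      /* offset 0 */
    int  length;   /* offset 1: misaligned for a 4-byte int */
};

int read_length(const struct header *h)
{
    /* Safe: gcc knows the member is packed and emits byte-wise loads. */
    return h->length;
}

int read_length_unsafe(const struct header *h)
{
    /* Risky: the cast throws away the alignment information, so the compiler
       is free to emit an ordinary word load from a misaligned address.  That
       is undefined behaviour: it may appear to work at one optimisation level
       and crash, or quietly return garbage on ARM, at another. */
    const int *p = (const int *)&h->length;
    return *p;
}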
 
Maybe I found something. Take a look at this, please:
Code:
// Line 159 in r_things.c
sprites = Z_Malloc(numsprites *sizeof(*sprites), PU_STATIC, NULL);
And now the debugger output:
Code:
Breakpoint 2, Z_Malloc (size=2456, tag=1, user=0x0) at .\z_zone.c:165
165	 .\z_zone.c: No such file or directory.
		in .\z_zone.c
(gdb) bt
#0  Z_Malloc (size=2456, tag=1, user=0x0) at .\z_zone.c:165
#1  0x0006d9ac in R_InitSpriteDefs (namelist=0x15d3b0) at .\r_things.c:159
#2  0x00056f70 in $a () at .\p_setup.c:1588
#3  0x000159d0 in H2_Main () at .\h2_main.c:234
#4  0x00018424 in main (argc=<value optimized out>,
	argv=<value optimized out>) at .\i_linux.c:1704
(gdb) finish
Run till exit from #0  Z_Malloc (size=2456, tag=1, user=0x0) at .\z_zone.c:165
R_InitSpriteDefs (namelist=0x15d3b0) at .\r_things.c:167
167	 .\r_things.c: No such file or directory.
		in .\r_things.c
Value returned is $2 = (void *) 0x405c7310
(gdb) p sprites
$3 = (spritedef_t *) 0x0
 
I found the bug. You were right that it was in my program, but I couldn't find it because the debugger does not work properly with optimized code. It was giving wrong line numbers and wrong data, which led me to think it was some mysterious compiler bug. Anyway, thanks again for your suggestions!

PS: One last question: What is faster: "Optimize for speed" or "Maximum optimization"?
 
Again, how do you have yours set up to run directly off your computer, Squidge? I haven't been able to get nfs or samba working at a decent speed.
 
I got samba working by just using the version of samba in the archive. It's not a brilliant speed, but it's comparable to writing to the SD card via samba, and my executables are always < 1MB, so it doesn't bother me much.
 
I got samba and telnetd working, but I had to do the module fix thing (I think it's in the wiki for USB networking).

Now I can compile to the shared smb drive and run it via telnet. W00t.
 
Aha, got samba working at a decent speed finally :D

The hardest part was getting it configured on my desktop side; Kubuntu Edgy's configuration applet is buggy... well, that's what editing raw configuration files is for, amirite? :D
 