Question about static and dynamic linking regarding symbol versioning


(My question doesn't concern handhelds or ARM; am I still in the correct sub-forum?)

In a project at work I want to use Fast-DDS, linked statically, to communicate with ROS2 nodes on a remote device.
To learn how to use the library properly to that effect, in isolation, I made a small project called ddstest.
I compiled Fast-DDS and its direct dependencies with flags to build them as static libraries (they all came with CMake files prepared, so that's what I used). Some of them have further dependencies of their own - OpenSSL is the one I'm aware of.
Then, in the Code::Blocks linker options for ddstest, I explicitly listed those four lib*.a plus /usr/lib/librt.a.
I build on Artix, and when I tried to run the fruits of my labour on Ubuntu ff in a VM, where some ROS2 tutorial nodes are also running, I learned that there are different versions of glibc out in the wild.

So I found this: https://stackoverflow.com/a/26241702
I made such a file,
Code:
linker_symver.h
and did
Code:
export CFLAGS="-include ~/Documents/3rdParty/helpers/linker_symver.h"
before redoing the CMake builds, and added the same include flag to my project's options. The g++ invocations C::B then produces look like this:
Code:
g++ -pedantic -Wextra -Wall -std=c++17 -m64 -fexceptions -include ~/Documents/3rdParty/helpers/linker_symver.h -O2 -I../../3rdParty/Fast-DDS/install/include -I/usr/bin/include -c /mnt/int/data/micha/Documents/Projekte/ddstest/main.cpp -o obj/defaultRelease/main.o
g++ -pedantic -Wextra -Wall -std=c++17 -m64 -fexceptions -include ~/Documents/3rdParty/helpers/linker_symver.h -O2 -I../../3rdParty/Fast-DDS/install/include -I/usr/bin/include -c /mnt/int/data/micha/Documents/Projekte/ddstest/TestSubscriber.cpp -o obj/defaultRelease/TestSubscriber.o
g++ -L../../3rdParty/Fast-DDS/install/lib -o bin/defaultRelease/ddstest obj/defaultRelease/main.o obj/defaultRelease/TestSubscriber.o  -m64 -Bshared /usr/lib/libpthread.so /usr/lib/libdl.so /usr/lib/libssl.so /usr/lib/libcrypto.so /usr/lib/librt.so -Bstatic -s  ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a ../../3rdParty/Fast-DDS/install/lib/libfastcdr.a ../../3rdParty/Fast-DDS/install/lib/libfoonathan_memory-0.6.2.a ../../3rdParty/Fast-DDS/install/lib/libtinyxml2.a
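For reference, that header is nothing more than a list of .symver directives, one per symbol to be pinned to an older version. A minimal sketch of what it can look like (not my exact file; memcpy is just the classic example, and pinning only works for symbols that libc actually exports under the older version):
Code:
/* linker_symver.h - bind selected glibc symbols to an explicitly chosen
   (older) symbol version instead of the newest one on the build host.
   Only works if that version actually exists in libc.so.6; check with:
   readelf -sW /usr/lib/libc.so.6 | grep '<symbol>@'                     */
#ifndef LINKER_SYMVER_H
#define LINKER_SYMVER_H

/* memcpy gained a new version in glibc 2.14; force references back to 2.2.5 */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#endif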

Still, my binary ddstest ends up with glibc dependencies that ask for version 2.33 (I'm trying to get it down to 2.30 for now). Like this:
Code:
> readelf -aC ./ddstest | grep GLIBC_2.33
00000067db88  017a00000001 R_X86_64_64       0000000000000000 stat64@GLIBC_2.33 + 0
00000067dba0  00ed00000001 R_X86_64_64       0000000000000000 fstat64@GLIBC_2.33 + 0
00000067ddb0  00df00000001 R_X86_64_64       0000000000000000 lstat64@GLIBC_2.33 + 0
00000067a898  013a00000007 R_X86_64_JUMP_SLO 0000000000000000 fstat@GLIBC_2.33 + 0
   223: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND [...]@GLIBC_2.33 (36)
   237: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND [...]@GLIBC_2.33 (36)
   314: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND fstat@GLIBC_2.33 (36)
   378: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND [...]@GLIBC_2.33 (36)
  0dc:   3 (GLIBCXX_3.4)   9 (GLIBC_2.2.5)  12 (GLIBC_2.2.5)  24 (GLIBC_2.33)
  0ec:   5 (GLIBCXX_3.4.21)   24 (GLIBC_2.33)    3 (GLIBCXX_3.4)  21 (GLIBC_2.3.2)
  138:   9 (GLIBC_2.2.5)  21 (GLIBC_2.3.2)  24 (GLIBC_2.33)    c (CXXABI_1.3)
  178:   3 (GLIBCXX_3.4)   c (CXXABI_1.3)   24 (GLIBC_2.33)    5 (GLIBCXX_3.4.21)
  0x0080:   Name: GLIBC_2.33  Flags: none  Version: 36
I'm trying to find out where those high-version deps come in. To narrow it down, my question(s) here:

When I compile something into a static library without providing all of its dependencies as statics, can the result carry a dependency on specific symbol versions from shared libs?
If so, how can I check? readelf and nm show me no such thing for the static libs I built (or I don't know the right buttons to push; see the sketch below for what I mean).

Or must those unwanted version deps come from the shared libs I involve in my test project?
Or do those version choices all get decided only when I link my stuff into the final binary?
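For what it's worth, this is the kind of check I mean - a sketch (paths from my setup); it lists the undefined symbols of every archive member, but at least for me it never shows a version attached to them:
Code:
# -A prefixes each line with archive:member; 'U' marks undefined symbols
nm -A ../../3rdParty/Fast-DDS/install/lib/*.a | grep -E ' U (f|l)?stat(64)?'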

Pertinent clues would be very much appreciated.
 
Yes, as far as I know, readelf won't report that for static libs, as once they're static they're effectively part of your project.

I can't comment on much else in your report, as I don't know what Fast-DDS or ROS2 is.
 
I can't comment on much else in your report, as I don't know what Fast-DDS or ROS2 is.
I don't think it's relevant to the problem, but since you asked :)
DDS is the Data Distribution Service spec from the Object Management Group (I think; OMG they call themselves), Fast-DDS is the implementation by eProsima. It's some pub-sub comm protocol stack with type system, (de)serialisation and participants bearing the same domain id finding each other automagically. It's gotta be the real deal, the static lib is 26MB! A third of what the HDD in my first PC could contain.
I'm using that because ROS2 builds upon DDS, and it seems I cannot make my app talk to a ROS node without it being a ROS app run within the ROS environment. ROS2 is the Robot Operating System in its second incarnation. (It's not an OS though. It's a framework and a middleware - well, the latter at least to the extent that it has an abstraction layer to go on top of a DDS implementation.)
 
Of the dynamic libs I have to link against, libcrypto.so of OpenSSL is the only one with undefined symbols matching
Code:
(f|l)?stat(64)?
which are exactly the symbols in my binary that are versioned GLIBC_2.33. So I built the OpenSSL libs myself as statics, rebuilt Fast-DDS with those, and then rebuilt my testee. Didn't help.
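The check was roughly this (a sketch; it greps each shared lib's dynamic symbol table for undefined, versioned stat-family references):
Code:
for so in /usr/lib/lib{ssl,crypto,rt,dl,pthread}.so; do
  echo "== $so"
  readelf -sW "$so" | grep -E 'UND (f|l)?stat(64)?@'
done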
To determine whether the static libs do/can carry the pathogen, I copied the 3rdParty directory with the static lib builds done on Artix, plus my project directory, over to Ubuntu in the VM, to build my project there.

Now g++ throws an error, but it doesn't make any sense to me. It doesn't mention versions at all and complains about undefined references that should be satisfied by one of the shared libs given on the command line:
Code:
g++ -L../../3rdParty/Fast-DDS/install/lib -o bin/defaultRelease/ddstest \
obj/defaultRelease/main.o obj/defaultRelease/TestSubscriber.o  -m64 -Bshared \
/usr/lib/x86_64-linux-gnu/libpthread.so /usr/lib/x86_64-linux-gnu/libdl.so -Bstatic -s \
../../3rdParty/Fast-DDS/install/lib/libfastrtps.a \
../../3rdParty/Fast-DDS/install/lib/libfastcdr.a \
../../3rdParty/Fast-DDS/install/lib/libfoonathan_memory-0.6.2.a \
../../3rdParty/Fast-DDS/install/lib/libtinyxml2.a /usr/lib/x86_64-linux-gnu/librt.a \
../../3rdParty/Fast-DDS/install/lib/libssl.a ../../3rdParty/Fast-DDS/install/lib/libcrypto.a
/usr/bin/ld: ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a(sqlite3.c.o): in function `unixDlError':
sqlite3.c:(.text+0x3c97f): undefined reference to `dlerror'
/usr/bin/ld: ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a(sqlite3.c.o): in function `unixDlClose':
sqlite3.c:(.text+0x4c24): undefined reference to `dlclose'
/usr/bin/ld: ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a(sqlite3.c.o): in function `unixDlSym':
sqlite3.c:(.text+0x4c37): undefined reference to `dlsym'
/usr/bin/ld: ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a(sqlite3.c.o): in function `unixDlOpen':
sqlite3.c:(.text+0x4c49): undefined reference to `dlopen'
collect2: error: ld returned 1 exit status

Those functions are in libdl.so. On Ubuntu:
Code:
/u/l/x86_64-linux-gnu readelf -sW ./libdl.so | grep -P "dl(error|open|close|sym)"
    28: 0000000000001440    77 FUNC    GLOBAL DEFAULT   17 dlclose@@GLIBC_2.2.5
    32: 00000000000014b0   193 FUNC    GLOBAL DEFAULT   17 dlsym@@GLIBC_2.2.5
    34: 0000000000001390   133 FUNC    GLOBAL DEFAULT   17 dlopen@@GLIBC_2.2.5
    39: 0000000000001840   674 FUNC    GLOBAL DEFAULT   17 dlerror@@GLIBC_2.2.5

(On Artix I find the same versions.)
And looking at the lib that wants to use those:
Code:
~/Doc…/3rd…/Fas…/ins…/lib> readelf -aCW ./libfastrtps.a | grep -P "dl(close|error|open|sym)"
000000000003c97f  0000082b00000004 R_X86_64_PLT32         0000000000000000 dlerror - 4
0000000000004c24  000007c000000004 R_X86_64_PLT32         0000000000000000 dlclose - 4
0000000000004c37  000007c100000004 R_X86_64_PLT32         0000000000000000 dlsym - 4
0000000000004c49  000007c200000004 R_X86_64_PLT32         0000000000000000 dlopen - 4
  1984: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND dlclose
  1985: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND dlsym
  1986: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND dlopen
  2091: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND dlerror

Does anyone have an idea why the linker fails?
 
I think the Ndx of UND means it's not included in the file, so it needs to be satisfied by some dynamic lib or other. I'm not a heavy user of readelf though, so I can't be sure, but it might explain why the linker fails.
 
Yes, the UND means undefined (within this unit/file/object). But those symbols are defined in the shared library libdl.so, which is given to the linker as '/usr/lib/x86_64-linux-gnu/libdl.so' on the command line.

In the meantime I rebuilt the 3rd-party libs on Ubuntu, but I still get that linker error.
 
I just almost bit my desk.

Searching for answers, I came across "the order of libs given on gcc's linker command line matters" a lot. That's why I meticulously compared the g++ invocations on Artix and Ubuntu, but they were the same. Still, I got Code::Blocks to let me put libdl.so at the end - C::B fought me on this. And yeah, then it worked.
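In other words, the shared libs now come after everything that needs them instead of before. A sketch of the working order (and my guess for the Artix/Ubuntu difference: Ubuntu's gcc passes -Wl,--as-needed by default, which drops a shared lib that nothing needs at the point the linker sees it - so a libdl.so listed before the archives that use it effectively vanishes):
Code:
g++ -L../../3rdParty/Fast-DDS/install/lib -o bin/defaultRelease/ddstest \
  obj/defaultRelease/main.o obj/defaultRelease/TestSubscriber.o -m64 -s \
  ../../3rdParty/Fast-DDS/install/lib/libfastrtps.a \
  ../../3rdParty/Fast-DDS/install/lib/libfastcdr.a \
  ../../3rdParty/Fast-DDS/install/lib/libfoonathan_memory-0.6.2.a \
  ../../3rdParty/Fast-DDS/install/lib/libtinyxml2.a \
  ../../3rdParty/Fast-DDS/install/lib/libssl.a \
  ../../3rdParty/Fast-DDS/install/lib/libcrypto.a \
  /usr/lib/x86_64-linux-gnu/librt.a \
  /usr/lib/x86_64-linux-gnu/libpthread.so /usr/lib/x86_64-linux-gnu/libdl.so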

I then compared g++ versions
- 9.3.0 on Ubuntu
- 10.2.0 on Artix

I mean change for the better is good, but ... WHY ME?

*** off to ubuntu *** back from ubuntu ***

I just noticed that C::B had resolved the symlink that my project directory there was, and that I had been working in the shared directory (host <-> VM) all the time. So compiling the 3rd-party stuff on Ubuntu was an utter waste of time.

Nonetheless, I could use the static libs built on Artix to build my project on Ubuntu, and I could run the resulting binary. So the statics don't make my shit depend on the current (Artix) version of glibc. It must be happening when building my project itself, in spite of my export CFLAGS="-include ~/Documents/3rdParty/helpers/linker_symver.h" attempt.

At least now I know better where the problem lies.
 
On a side note, I now got to start ROS2's turtlesim in the VM and then my stupid ddstest, which reported "Participant discovered". Finally I can get back to working on the actual problem.
I expect to have a good night's sleep tonight!

I gotta say though, I'm glad that at this job I got around having to develop on Windows with .NET, but developing natively on Linux is harder. I just hope that steep learning curve pays off and I end up with higher degrees of freedom (in the technical sense). That hope is all I'm running on for now.


The initial question still stands though: how do I get the linker to not link against the freshest symbol versions on the build host? (Preferably via an explicit argument, not by changing the build host virtually, physically, or logically.)
 
The initial question still stands though: how do I get the linker to not link against the freshest symbol versions on the build host? (Preferably via an explicit argument, not by changing the build host virtually, physically, or logically.)
I only followed the thread superficially, sorry, and I'm not sure I understand your question. Do you mean something like --sysroot, -B, or -Wl,-L?
 
Do you mean something like --sysroot, -B, or -Wl,-L?
I don't know [those, yet].
I mean: on the system I build my C++ project on, glibc is at version 2.33, so the binary I get will have references to e.g. stat64@GLIBC_2.33.
I'd like to tell the linker (via g++ in my case) to only use GLIBC symbol versions <= 2.30 - a way to make the linker version-cap, you could say. I mean, there's gotta be another way than letting the linker only see an older version of glibc, no?
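(What I can at least do is list which versions a given symbol is exported under on the build host - a sketch; a version cap could only ever pick from those:)
Code:
readelf -sW /usr/lib/libc.so.6 | grep -E ' (f|l)?stat(64)?@'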
 
Don't trust me on this because I don't know much about what I'm talking about, but I'd guess that in order to link against glibc 2.30 you'd need a version 2.30 of glibc-dev somewhere (not necessarily the one the build system uses, but one the build system also has), and then point your toolchain to use that glibc instead of the system one. I'm not sure that simply having glibc 2.33 on your system gives all the information about the symbols that were in each previous version.
I assume that if you build against 2.30 you can run the resulting executable against 2.33, because the glibc developers intended that to be possible and didn't change things so much that it would break programs compiled for 2.30. But if you build against 2.33, it's not very intuitive that it should work with 2.30, because the 2.30 developers didn't know what would be in 2.33 yet. I don't know glibc policy, or the linker's inner workings; it's just my intuition.
 
Yeah, glibc instances are backwards compatible. So when your executable asks for a symbol versioned 2.2.5, it'll find it in glibc 2.33. I guess they drop old shit every now and then, but there's at least a window back in time. A glibc instance at 2.30 won't have symbols versioned 2.33 though.
So to me it would be only natural for there to be a linker option to set a version cap when one wants to be more portable. I just couldn't find such a thing yet.
 
You know more about this than I do, so I won't be able to help. I've just found someone with a maybe similar problem. There seems to be a .symver directive to put in your sources to require an older version of a function you call, but it's going to be a pain to use that for every libc call.
They also speak of building cross-compilers, etc. It looks like it's easier to just build in a chroot, or use a sysroot with the oldest version you want to support. But I've never done anything like this, so I can't advise.
 
Only the bits and pieces I've come across so far.
Such .symver directives are what I put into the dedicated linker_symver.h that I direct the compiler to include. Didn't help (fully?). I don't know how or if this is supposed to show up in the object files, or how I could check for that. I used those directives for each static build of the 3rd-party stuff and for building my project, but maybe it just didn't take, for reasons I don't understand.
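(My guess for a check: if the directive takes effect, the undefined symbol in the object file should already carry the requested version, so something like this ought to show it - a sketch:)
Code:
readelf -sW obj/defaultRelease/main.o | grep -E 'UND .*@GLIBC'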

For now I'll build on Ubuntu whenever I need my stuff to run somewhere other than Artix.

I might later end up setting up some dedicated build environment. I just think that's unnecessary complexity when something like a linker directive would do. And it's in my nature to do everything I can to avoid unnecessary complexity whenever I identify it, more often than not to my own detriment. My windmills, man, my windmills. :)
 