ARSC HPC Users' Newsletter 364, June 22, 2007
Resolving Linker Errors - Part II
We're going to switch the order of this series and cover some ways to resolve runtime shared library errors. In a few weeks, we'll take a look at ways to handle name mangling issues.
On Linux systems the dynamic linker/loader is aware of shared libraries contained in the directories listed in /etc/ld.so.conf. When a systems administrator runs the ldconfig command, /etc/ld.so.cache is updated. This cache file maps shared library names to their full paths for fast lookup at runtime, so any shared library recorded in it will be found by the dynamic linker/loader. There are cases, however, when a directory may not be included in /etc/ld.so.conf (e.g. when there are multiple versions of the same library on the system).
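As a quick sanity check, you can inspect what the cache currently contains. Here's a minimal sketch, assuming a Linux system where ldconfig is in your PATH (on some systems it lives in /sbin); it greps the cache listing for libc.so only because that library exists on every Linux system -- substitute the library you are actually chasing (e.g. libgsl):

```shell
# Print the contents of /etc/ld.so.cache and look for a particular
# library.  No root privileges are needed just to *read* the cache.
# libc.so is used here only because it is present everywhere.
ldconfig -p | grep 'libc.so'
```

If the library you need does not appear in this listing, the dynamic linker/loader will not find it through the cache, and one of the methods below is required.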
When a shared library cannot be resolved at runtime, you might see an error like this:
pedro 1% ./gsl_example
./gsl_example: error while loading shared libraries: libgsl.so.0: ...
To get around errors like this, there are a few options.

The first option is to set the LD_LIBRARY_PATH environment variable prior to running an executable which uses shared libraries. LD_LIBRARY_PATH specifies a colon-delimited set of search directories where an executable should look for unresolved shared libraries.
NOTE: the "ldd" command lists the dynamic dependencies of an executable. E.g.,
# Here's an example program which has unresolved shared libraries:
pedro 2% ldd gsl_example
        libgsl.so.0 => not found
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaaabc6000)
        /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
        libgslcblas.so.0 => not found

# The libraries required are in /usr/local/lib:
pedro 3% export LD_LIBRARY_PATH=/usr/local/lib

pedro 4% ldd gsl_example
        libgsl.so.0 => /usr/local/lib/libgsl.so.0 (0x00002aaaaabc6000)
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaaae99000)
        libgslcblas.so.0 => /usr/local/lib/libgslcblas.so.0 (0x00002aaaab0d0000)
        libm.so.6 => /lib64/libm.so.6 (0x00002aaaab1fd000)
        /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
Here we see that when LD_LIBRARY_PATH is set, all of the shared libraries are found.
The second option is to set the LD_PRELOAD environment variable prior to running an executable which uses shared libraries.
This environment variable is a whitespace-delimited set of shared libraries to be loaded before all others. E.g.,
# Here we use the same example as before:
pedro 5% ldd gsl_example
        libgsl.so.0 => not found
        libgslcblas.so.0 => not found
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaaabc6000)
        /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)

# Then selectively preload particular shared libraries:
pedro 6% export LD_PRELOAD="/usr/local/lib/libgslcblas.so /usr/local/lib/libgsl.so"

# As before we now see that all shared libraries are resolved:
pedro 7% ldd gsl_example
        /usr/local/lib/libgslcblas.so (0x00002aaaaabc6000)
        /usr/local/lib/libgsl.so (0x00002aaaaacf3000)
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaaafc6000)
        libm.so.6 => /lib64/libm.so.6 (0x00002aaaab1fd000)
        /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
The third option is to use the "-R" or "-rpath" option during linking.
When the "-R" linker option is used, the directories it specifies are embedded in the executable as "RPATH" attributes in the Dynamic Section of the ELF header. The benefit of this method is that no runtime environment variables need to be set for the dynamic libraries to be found.
It's fairly uncommon to run the linker directly during compilation; most build systems I've seen use the compiler to call the linker. If you are using the Portland Group compilers, the GNU Compiler Collection, or the PathScale compilers, a linker option can be passed through the compiler using the "-Wl,<linker_flag>" flag. Here's an example which uses "-Wl,-R" to set the "RPATH".
# compile the code to build an object file
pedro 8% gcc gsl_example.c -I/usr/local/pkg/gsl/gsl-1.8/include -c

# run the linker with the "-Wl,-R" option.
pedro 9% gcc gsl_example.o -L/usr/local/pkg/gsl/gsl-1.8/lib \
         -lgsl -lgslcblas \
         -Wl,-R /usr/local/pkg/gsl/gsl-1.8/lib \
         -o gsl_example

# unset the variables we used previously
pedro 10% unset LD_LIBRARY_PATH
pedro 11% unset LD_PRELOAD

# We now see that the shared libraries are resolved without using
# either of the environment variables.
pedro 12% ldd gsl_example
        libgsl.so.0 => /usr/local/pkg/gsl/gsl-1.8/lib/libgsl.so.0 (0x00002aaaaabc6000)
        libgslcblas.so.0 => /usr/local/pkg/gsl/gsl-1.8/lib/libgslcblas.so.0 (0x00002aaaaae6e000)
        libc.so.6 => /lib64/libc.so.6 (0x00002aaaaafc6000)
        libm.so.6 => /lib64/libm.so.6 (0x00002aaaab1fd000)
        /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
If you would like to see the "RPATH" settings in an executable, you can use the objdump command. The "-p" option to objdump displays the private headers (which happen to contain the Dynamic Section):
pedro 13% objdump -p gsl_example | less

gsl_example:     file format elf64-x86-64

Program Header:
...
Dynamic Section:
  NEEDED       libgsl.so.0
  NEEDED       libgslcblas.so.0
  NEEDED       libc.so.6
  RPATH        /usr/local/pkg/gsl/gsl-1.8/lib
...
Even if you've never seen the -R linker option, chances are that you've used it if you've ever used MPICH or MVAPICH on a Linux system. Both of these MPI implementations use the RPATH option to embed the location of the MPI libraries in the executable. Here's an example from Nelchina:
# Notice the -rpath in the output of "mpicc -show"
c613n6:~> mpicc -show
pgcc -DUSE_STDARG -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 \
     -DHAVE_UNISTD_H=1 -DHAVE_STDARG_H=1 -DUSE_STDARG=1 \
     -DMALLOC_RET_VOID=1 \
     -Wl,-rpath -Wl,/usr/mpich/mpich-1.2.6-pgi611/lib/shared \
     -L/usr/mpich/mpich-1.2.6-pgi611/lib/shared \
     -L/usr/mpich/mpich-1.2.6-pgi611/lib -lmpich \
     -L/work/usr/local/lib64 -lrapl -lpthread
If you use more than one of these methods, here are some things to keep in mind. The LD_PRELOAD environment variable has precedence over RPATH settings, so you can override the RPATH settings in an executable by explicitly specifying libraries to preload. The LD_PRELOAD environment variable also has precedence over the LD_LIBRARY_PATH environment variable. In cases where the directories specified by the RPATH attribute do not exist at runtime, the executable will fall back to the directories specified in LD_LIBRARY_PATH and /etc/ld.so.conf to look for shared libraries.
For more information on this topic see "man ld.so" on a Linux system.
The ARSC Summer Science Seminar Series Continues
A free seminar introducing the features and capabilities of Google Earth to the general public will be presented Tuesday, June 26, at 1 p.m. in room 010 of the West Ridge Research Building by John Bailey, a postdoctoral fellow with the Arctic Region Supercomputing Center. Bailey will focus on demonstrating interesting real-world uses of Google Earth, which has helped bring capabilities for geographic information systems (GIS) to mainstream computer users.
The Google Earth markup language, KML, has emerged as a standard for creating visual, navigable environments for geospatial data. Attendees will see examples of state-of-the-art use of Google Earth, and learn how to make their own data appear in Google Earth.

--

Title: How Supercomputing is Revolutionizing Our View of the Earth's Interior
When: Tuesday, July 10th from 1-2 p.m.
Where: 010 West Ridge Research Building (WRRB)

Seminar Description:
Michael Thorne, postdoctoral fellow with the Arctic Region Supercomputing Center at UAF, will give a presentation on "How Supercomputing is Revolutionizing Our View of the Earth's Interior," Tuesday, July 10 at 1 p.m., in room 010 of the West Ridge Research Building as part of an ongoing series to demonstrate how computer and information-based technologies are applied to solving real world problems.
According to Thorne, the propagation of seismic waves as a result of earthquakes provides a direct method of probing the Earth's interior structure. The most detailed picture of this structure is obtained by modeling the seismic waveforms. Because of computational limitations, most modeling efforts have been confined to 1-D, layer cake models. Yet, the recent widespread availability of large-scale computing has made it possible to model waveforms for complex 2- or 3-D whole earth models.
Thorne's presentation will focus on current knowledge of the structure of Earth's deep interior and how supercomputing is being used to sharpen the view. In addition, he will show what some of these newer high-resolution studies are telling scientists about features in Earth's lower mantle, or Core-Mantle Boundary region, and how they relate to the dynamic processes that shape the surface.
Quick-Tip Q & A
A: [[ I share a directory with my group. It contains files which are
   [[ all in the same Unix group, but with several different owners
   [[ (including my girlfriend AND ex-girlfriend!).
   [[
   [[ Now I need to copy the entire directory to a new host, but
   [[ "scp -rp" changes the ownership of all the files to **ME** and
   [[ so does tar/untar. Is there any way to preserve file ownership
   [[ across this move!?
   [[
   [[ Everyone's mad at me!

#
# Thanks to Lorin Hochstein for sharing this solution.
#
The rsync program can preserve file ownerships with the "-o" flag, but
you can only use that flag from the root account.

#
# Greg Newby and Brad Chamberlain provided solutions using tar.
#
Probably, "tar" is the right answer (at least, GNU's tar). But you'll
need to be superuser on the system you are sending the files *to*, and
that system will need to have the same UIDs as on the sending system.

The default for tar is to store information about the user and group
in the tar file ("man tar", "info tar" or "tar --help" for more
details). So, creating the tar file with the multiple directories is
easy, and does what you want: it stores information about the owning
user for each file & directory.

The problem is that when you untar (e.g. "tar xf" or somesuch), as a
regular user you do not have the privileges to create files &
directories with the ownership that was specified in the tar file. So,
the result is that everything is created by you.

If you are superuser, though, then the untar will recreate the files
with the same ownerships based on UIDs. (There are some tar options
concerning UIDs versus usernames, ditto for GIDs.) The result is that
the UID of the untarred files will match that of the input files, from
the other system. This is not at all useful if the usernames or UIDs
(depending on which tar option you use) don't match on both systems.

Of course, if you are superuser on the other system, you can also do a
"chown" to change ownership after the fact.
In that case, as long as the file ownership patterns are not too
diverse, it might not matter about UID matching across systems.

Finally, in case this isn't obvious: only the superuser can create
files owned by other users. There aren't any practical ways of
creating files owned by other users that don't require some sort of
privileged operation.

#
# An anonymous reader provided this bit of advice.
#
Talk to your system admin with root permissions. Also, consider
branching out and stop dating the girls in your CS class!

Q: I have a binary formatted file with floating point numbers in it,
   but I can't remember if I compiled with REAL*4 or REAL*8 when I
   created the output. I know I should really be using NetCDF or HDF
   for output files, but I didn't know about those file formats when I
   created the file! Is there a simple way for me to see the values in
   the file without writing a new program? There ought to be a way to
   do this on the command line!
[[ Answers, Questions, and Tips Graciously Accepted ]]
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Subscribe to (or unsubscribe from) the e-mail edition of the ARSC HPC Users' Newsletter.
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.