
How To Write Shared Libraries

Ulrich Drepper

    December 10, 2011

Abstract

Today, shared libraries are ubiquitous. Developers use them for multiple reasons and create them just as they would create application code. This is a problem, though, since on many platforms some additional techniques must be applied even to generate decent code. Even more knowledge is needed to generate optimized code. This paper introduces the required rules and techniques. In addition, it introduces the concept of ABI (Application Binary Interface) stability and shows how to manage it.

    1 Preface

For a long time, programmers collected commonly used code in libraries so that code could be reused. This saves development time and reduces errors since reused code only has to be debugged once. With systems running dozens or hundreds of processes at the same time, reuse of the code at link-time solves only part of the problem. Many processes will use the same pieces of code which they import from libraries. With the memory management systems in modern operating systems it is also possible to share the code at run-time. This is done by loading the code into physical memory only once and reusing it in multiple processes via virtual memory. Libraries of this kind are called shared libraries.

The concept is not very new. Operating system designers implemented extensions to their systems using the infrastructure they used before. The extension to the OS could be done transparently for the user. But the parts the user directly has to deal with initially created problems.

The main aspect is the binary format. This is the format which is used to describe the application code. Long gone are the days when it was sufficient to provide a memory dump. Multi-process systems need to identify different parts of the file containing the program, such as the text, data, and debug information parts. For this, binary formats were introduced early on. Commonly used in the early Unix days were formats such as a.out or COFF. These binary formats were not designed with shared libraries in mind and this clearly shows.

Copyright © 2002-2010, 2011 Ulrich Drepper. All rights reserved. No redistribution allowed.

    1.1 A Little Bit of History

The binary format used initially for Linux was an a.out variant. When introducing shared libraries certain design decisions had to be made to work within the limitations of a.out. The main accepted limitation was that no relocations are performed at the time of loading and afterward. The shared libraries have to exist on disk in the form in which they are used at run-time. This imposes a major restriction on the way shared libraries are built and used: every shared library must have a fixed load address; otherwise it would not be possible to generate shared libraries which do not have to be relocated.

The fixed load addresses had to be assigned and this has to happen without overlaps and conflicts and with some future safety by allowing growth of the shared library. It is therefore necessary to have a central authority for the assignment of address ranges, which in itself is a major problem. But it gets worse: given a Linux system of today with many hundreds of DSOs (Dynamic Shared Objects), the address space and the virtual memory available to the application gets severely fragmented. This would limit the size of memory blocks which can be dynamically allocated, which would create insurmountable problems for some applications. By today the assignment authority would even have run out of address ranges to assign, at least on 32-bit machines.

We still have not covered all the drawbacks of the a.out shared libraries. Since the applications using shared libraries should not have to be relinked after changing a shared library they use, the entry points, i.e., the function and variable addresses, must not change. This can only be guaranteed if the entry points are kept separate from the actual code since otherwise limits on the size of a function would be hard-coded. A table of function stubs which call the actual implementation was the solution used on Linux. The static linker got the address of each function stub from a special file (with the filename extension .sa). At run-time a file ending in .so.X.Y.Z was used and it had to correspond to the used .sa file. This in turn requires that an allocated entry in the stub table always had to be used for the same function. The allocation of the table had to be carefully taken care of. Introducing a new interface meant appending to the table. It was never possible to retire a table entry. To avoid using an old shared library with a program linked with a newer version, some record had to be kept in the application: the X and Y parts of the name of the .so.X.Y.Z suffix were recorded and the dynamic linker made sure minimum requirements were met.

The benefit of the scheme is that the resulting program runs very fast. Calling a function in such a shared library is very efficient even for the first call. It can be implemented with only two absolute jumps: the first from the user code to the stub, and the second from the stub to the actual code of the function. This is probably faster than any other shared library implementation, but its speed comes at too high a price:

    1. a central assignment of address ranges is needed;

2. collisions are possible (likely) with catastrophic results;

    3. the address space gets severely fragmented.

For all these reasons and more, Linux converted early on to using ELF (Executable and Linkable Format) as the binary format. The ELF format is defined by the generic specification (gABI) to which processor-specific extensions (psABI) are added. As it turns out, the amortized cost of function calls is almost the same as for a.out but the restrictions are gone.

    1.2 The Move To ELF

For programmers the main advantage of the switch to ELF was that creating ELF shared libraries, or in ELF-speak DSOs, becomes very easy. The only difference between generating an application and a DSO is in the final link command line. One additional option (--shared in the case of GNU ld) tells the linker to generate a DSO instead of an application, the latter being the default. In fact, DSOs are little more than a special kind of binary; the difference is that they have no fixed load address and hence require the dynamic linker in order to actually become executable. With Position Independent Executables (PIEs) the difference shrinks even more.

This, together with the introduction of GNU Libtool which will be described later, has led to the wide adoption of DSOs by programmers. Proper use of DSOs can help save large amounts of resources. But some rules must be followed to get any benefits, and some more rules have to be followed to get optimal results. Explaining these rules will be the topic of a large portion of this paper.

Not all uses of DSOs are for the purpose of saving resources. DSOs are today also often used as a way to structure programs. Different parts of the program are put into separate DSOs. This can be a very powerful tool, especially in the development phase. Instead of relinking the entire program it is only necessary to relink the DSO(s) which changed. This is often much faster.

Some projects decide to keep many separate DSOs even in the deployment phase, even though the DSOs are not reused in other programs. In many situations it is certainly a useful thing to do: DSOs can be updated individually, reducing the amount of data which has to be transported. But the number of DSOs must be kept to a reasonable level. Not all programs do this, though, and we will see later on why this can be a problem.

Before we can start discussing all this, some understanding of ELF and its implementation is needed.

    1.3 How Is ELF Implemented?

The handling of a statically linked application is very simple. Such an application has a fixed load address which the kernel knows. The load process consists simply of making the binary available in the appropriate address space of a newly created process and transferring control to the entry point of the application. Everything else was done by the static linker when creating the executable.

Dynamically linked binaries, in contrast, are not complete when they are loaded from disk. It is therefore not possible for the kernel to immediately transfer control to the application. Instead some other helper program, which obviously has to be complete, is loaded as well. This helper program is the dynamic linker. The task of the dynamic linker is to complete the dynamically linked application by loading the DSOs it needs (the dependencies) and to perform the relocations. Then finally control can be transferred to the program.

This is not the last task for the dynamic linker in most cases, though. ELF allows the relocations associated with a symbol to be delayed until the symbol is needed. This lazy relocation scheme is optional, and the optimizations discussed below for relocations performed at startup immediately affect the lazy relocations as well. So in the following we ignore everything after the startup is finished.

    1.4 Startup: In The Kernel

Starting execution of a program begins in the kernel, normally in the execve system call. The currently executed code is replaced with a new program.


    typedef struct
    {
      Elf32_Word p_type;
      Elf32_Off  p_offset;
      Elf32_Addr p_vaddr;
      Elf32_Addr p_paddr;
      Elf32_Word p_filesz;
      Elf32_Word p_memsz;
      Elf32_Word p_flags;
      Elf32_Word p_align;
    } Elf32_Phdr;

    typedef struct
    {
      Elf64_Word  p_type;
      Elf64_Word  p_flags;
      Elf64_Off   p_offset;
      Elf64_Addr  p_vaddr;
      Elf64_Addr  p_paddr;
      Elf64_Xword p_filesz;
      Elf64_Xword p_memsz;
      Elf64_Xword p_align;
    } Elf64_Phdr;

Figure 1: ELF Program Header C Data Structure

This means the address space content is replaced by the content of the file containing the program. This does not happen by simply mapping (using mmap) the content of the file. ELF files are structured and there are normally at least three different kinds of regions in the file:

Code which is executed; this region is normally not writable;

Data which is modified; this region is normally not executable;

Data which is not used at run-time; since it is not needed it should not be loaded at startup.

Modern operating systems and processors can protect memory regions to allow and disallow reading, writing, and executing separately for each page of memory.1 It is preferable to mark as many pages as possible not writable since this means that the pages can be shared between processes which use the same application or DSO the page is from. Write protection also helps to detect and prevent unintentional or malicious modifications of data or even code.

For the kernel to find the different regions, or segments in ELF-speak, and their access permissions, the ELF file format defines a table which contains just this information, among other things. The ELF Program Header table, as it is called, must be present in every executable and DSO. It is represented by the C types Elf32_Phdr and Elf64_Phdr which are defined as can be seen in figure 1.

To locate the program header data structure another data structure is needed, the ELF Header. The ELF header is the only data structure which has a fixed place in the file, starting at offset zero. Its C data structure can be seen in figure 2. The e_phoff field specifies where, counting from the beginning of the file, the program header table starts. The e_phnum field contains the number of entries in the program header table and the e_phentsize field contains the size of each entry. This last value is useful only as a run-time consistency check for the binary.

1 A memory page is the smallest entity the memory subsystem of the OS operates on. The size of a page can vary between different architectures and even within systems using the same architecture.

The different segments are represented by the program header entries with the PT_LOAD value in the p_type field. The p_offset and p_filesz fields specify where in the file the segment starts and how long it is. The p_vaddr and p_memsz fields specify where the segment is located in the process' virtual address space and how large the memory region is. The value of the p_vaddr field itself is not necessarily required to be the final load address. DSOs can be loaded at arbitrary addresses in the virtual address space. But the relative position of the segments is important. For pre-linked DSOs the actual value of the p_vaddr field is meaningful: it specifies the address for which the DSO was pre-linked. But even this does not mean the dynamic linker cannot ignore this information if necessary.

The size in the file can be smaller than the address space it takes up in memory. The first p_filesz bytes of the memory region are initialized from the data of the segment in the file; the difference is initialized with zero. This can be used to handle BSS sections2, sections for uninitialized variables which are, according to the C standard, initialized with zero. Handling uninitialized variables this way has the advantage that the file size can be reduced since no initialization value has to be stored, no data has to be copied from disk to memory, and the memory provided by the OS via the mmap interface is already initialized with zero.
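As a small illustration of how these header fields fit together, the following sketch (not taken from the original text) maps an ELF file, follows e_phoff and e_phnum to the program header table, and prints the PT_LOAD segments. It assumes a native ELF64 binary whose e_phentsize equals sizeof(Elf64_Phdr) and keeps error handling to a minimum.

    /* Minimal sketch: walk the program header table of a native ELF64 file
       and print the loadable segments.  A robust reader would also check
       e_ident and honor e_phentsize as the entry stride.  */
    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main (int argc, char *argv[])
    {
      if (argc != 2)
        return 1;
      int fd = open (argv[1], O_RDONLY);
      if (fd == -1)
        return 1;
      struct stat st;
      if (fstat (fd, &st) != 0)
        return 1;
      /* Map the whole file; the ELF header sits at offset zero.  */
      void *map = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (map == MAP_FAILED)
        return 1;
      Elf64_Ehdr *ehdr = map;
      /* e_phoff locates the program header table, e_phnum its length.  */
      Elf64_Phdr *phdr = (Elf64_Phdr *) ((char *) map + ehdr->e_phoff);
      for (int i = 0; i < ehdr->e_phnum; ++i)
        if (phdr[i].p_type == PT_LOAD)
          printf ("load: vaddr=%#llx filesz=%#llx memsz=%#llx flags=%#x\n",
                  (unsigned long long) phdr[i].p_vaddr,
                  (unsigned long long) phdr[i].p_filesz,
                  (unsigned long long) phdr[i].p_memsz,
                  (unsigned int) phdr[i].p_flags);
      munmap (map, st.st_size);
      close (fd);
      return 0;
    }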

The p_flags field finally tells the kernel what permissions to use for the memory pages. This field is a bitmap with the bits given in the following table being defined. The flags are directly mapped to the flags mmap understands.

2 A BSS section contains only NUL bytes. Therefore they do not have to be represented in the file on the storage medium. The loader just has to know the size so that it can allocate memory large enough and fill it with NUL bytes.


    typedef struct
    {
      unsigned char e_ident[EI_NIDENT];
      Elf32_Half e_type;
      Elf32_Half e_machine;
      Elf32_Word e_version;
      Elf32_Addr e_entry;
      Elf32_Off  e_phoff;
      Elf32_Off  e_shoff;
      Elf32_Word e_flags;
      Elf32_Half e_ehsize;
      Elf32_Half e_phentsize;
      Elf32_Half e_phnum;
      Elf32_Half e_shentsize;
      Elf32_Half e_shnum;
      Elf32_Half e_shstrndx;
    } Elf32_Ehdr;

    typedef struct
    {
      unsigned char e_ident[EI_NIDENT];
      Elf64_Half e_type;
      Elf64_Half e_machine;
      Elf64_Word e_version;
      Elf64_Addr e_entry;
      Elf64_Off  e_phoff;
      Elf64_Off  e_shoff;
      Elf64_Word e_flags;
      Elf64_Half e_ehsize;
      Elf64_Half e_phentsize;
      Elf64_Half e_phnum;
      Elf64_Half e_shentsize;
      Elf64_Half e_shnum;
      Elf64_Half e_shstrndx;
    } Elf64_Ehdr;

Figure 2: ELF Header C Data Structure

    p_flags   Value   mmap flag    Description
    PF_X      1       PROT_EXEC    Execute Permission
    PF_W      2       PROT_WRITE   Write Permission
    PF_R      4       PROT_READ    Read Permission
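A tiny helper makes the mapping explicit; it simply translates the p_flags bits into the corresponding PROT_* values. This is a sketch of the idea, not the kernel's actual code.

    /* Illustrative helper: translate ELF segment permission bits (p_flags)
       into the corresponding mmap/mprotect protection flags.  */
    #include <elf.h>
    #include <sys/mman.h>

    static int
    phdr_flags_to_prot (Elf64_Word p_flags)
    {
      int prot = PROT_NONE;
      if (p_flags & PF_R)
        prot |= PROT_READ;    /* PF_R (4) -> PROT_READ  */
      if (p_flags & PF_W)
        prot |= PROT_WRITE;   /* PF_W (2) -> PROT_WRITE */
      if (p_flags & PF_X)
        prot |= PROT_EXEC;    /* PF_X (1) -> PROT_EXEC  */
      return prot;
    }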

After mapping all the PT_LOAD segments using the appropriate permissions and the specified address, or after freely allocating an address for dynamic objects which have no fixed load address, the next phase can start. The virtual address space of the dynamically linked executable itself is set up. But the binary is not complete. The kernel has to get the dynamic linker to do the rest and for this the dynamic linker has to be loaded in the same way as the executable itself (i.e., look for the loadable segments in the program header). The difference is that the dynamic linker itself must be complete and should be freely relocatable.

Which binary implements the dynamic linker is not hard-coded in the kernel. Instead the program header of the application contains an entry with the tag PT_INTERP. The p_offset field of this entry contains the offset of a NUL-terminated string which specifies the file name of the interpreter. The only requirement on the named file is that its load address does not conflict with the load address of any possible executable it might be used with. In general this means that the dynamic linker has no fixed load address and can be loaded anywhere; this is just what dynamic binaries allow.

Once the dynamic linker has also been mapped into the memory of the to-be-started process we can start the dynamic linker. Note that control is not transferred to the entry point of the application. Only the dynamic linker is ready to run. Instead of calling the dynamic linker right away, one more step is performed. The dynamic linker somehow has to be told where the application can be found and where control has to be transferred to once the application is complete. For this a structured way exists. The kernel puts an array of tag-value pairs on the stack of the new process. This auxiliary vector contains, besides the two aforementioned values, several more values which allow the dynamic linker to avoid several system calls. The elf.h header file defines a number of constants with an AT_ prefix. These are the tags for the entries in the auxiliary vector.

After setting up the auxiliary vector the kernel is finally ready to transfer control to the dynamic linker in user mode. The entry point is defined in the e_entry field of the ELF header of the dynamic linker.
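On glibc-based systems the same auxiliary vector can also be inspected from user code with getauxval, which gives a feel for the values the kernel passes down; a minimal sketch, assuming glibc:

    /* Minimal sketch (glibc): read a few auxiliary vector entries which
       the kernel placed on the stack of the new process.  */
    #include <stdio.h>
    #include <sys/auxv.h>

    int
    main (void)
    {
      /* AT_ENTRY: entry point of the executable; AT_PHDR/AT_PHNUM: location
         and count of its program headers; AT_PAGESZ: the system page size.  */
      printf ("entry point:     %#lx\n", getauxval (AT_ENTRY));
      printf ("program headers: %#lx (%lu entries)\n",
              getauxval (AT_PHDR), getauxval (AT_PHNUM));
      printf ("page size:       %lu\n", getauxval (AT_PAGESZ));
      return 0;
    }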

    1.5 Startup in the Dynamic Linker

The second phase of the program startup happens in the dynamic linker. Its tasks include:

Determine and load dependencies;

Relocate the application and all dependencies;

Initialize the application and dependencies in the correct order.

In the following we will discuss in more detail only the relocation handling. For the other two points the way to better performance is clear: have fewer dependencies. Each participating object is initialized exactly once, but some topological sorting has to happen. The identify-and-load process also scales with the number of dependencies; in most (all?) implementations this does not scale linearly.


The relocation process is normally3 the most expensive part of the dynamic linker's work. It is a process which is asymptotically at least O(R + nr) where R is the number of relative relocations, r is the number of named relocations, and n is the number of participating DSOs (plus the main executable). Deficiencies in the ELF hash table function and various ELF extensions modifying the symbol lookup functionality may well increase the factor to O(R + rn log s) where s is the number of symbols. This should make clear that for improved performance it is significant to reduce the number of relocations and symbols as much as possible. After explaining the relocation process we will do some estimates for actual numbers.

    1.5.1 The Relocation Process

Relocation in this context means adjusting the application and the DSOs, which are loaded as the dependencies, to their own and all other load addresses. There are two kinds of dependencies:

Dependencies to locations which are known to be in the own object. These are not associated with a specific symbol since the linker knows the relative position of the location in the object.

Note that applications do not have relative relocations since the load address of the code is known at link-time and therefore the static linker is able to perform the relocation.

Dependencies based on symbols. The reference is generally, but not necessarily, in a different object than the definition.

The implementation of relative relocations is easy. The linker can compute the offset of the target destination in the object file at link-time. To this value the dynamic linker only has to add the load address of the object and store the result in the place indicated by the relocation. At run-time the dynamic linker has to spend only a very small and constant amount of time which does not increase if more DSOs are used.
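Conceptually, the work for a relative relocation boils down to one addition and one store per entry. The following sketch shows the idea for RELA-style relocations on x86-64; the relocation type name is architecture-specific and real dynamic linkers handle many more details, so treat this as an illustration only.

    /* Conceptual sketch: applying relative relocations.  For RELA-style
       relocations the link-time offset is in r_addend; the dynamic linker
       just adds the load address and stores the result at r_offset.  */
    #include <elf.h>
    #include <stddef.h>
    #include <stdint.h>

    static void
    apply_relative_relocs (uintptr_t load_addr,
                           const Elf64_Rela *rela, size_t count)
    {
      for (size_t i = 0; i < count; ++i)
        {
          /* Only relative relocations are handled here; the type check is
             architecture-specific (R_X86_64_RELATIVE on x86-64).  */
          if (ELF64_R_TYPE (rela[i].r_info) != R_X86_64_RELATIVE)
            continue;
          uint64_t *where = (uint64_t *) (load_addr + rela[i].r_offset);
          *where = load_addr + rela[i].r_addend;
        }
    }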

Relocation based on a symbol is much more complicated. The ELF symbol resolution process was designed to be very powerful so that it can handle many different problems. All this powerful functionality adds to the complexity and run-time costs, though. Readers of the following description might question the decisions which led to this process. We cannot argue about this here; readers are referred to discussions of ELF. The fact is that symbol relocation is a costly process and the more DSOs participate or the more symbols are defined in the DSOs, the longer the symbol lookup takes.

3 We ignore the pre-linking support here which in many cases can reduce significantly or even eliminate the relocation costs.

The result of any relocation will be stored somewhere in the object with the reference. Ideally and generally the target location is in the data segment. If code is incorrectly generated by the user, compiler, or linker, relocations might modify text or read-only segments. The dynamic linker will handle this correctly if the object is marked, as required by the ELF specification, with DF_TEXTREL set in the DT_FLAGS entry of the dynamic section (or the existence of the DT_TEXTREL flag in old binaries). But the result is that the modified page cannot be shared with other processes using the same object. The modification process itself is also quite slow since the kernel has to reorganize the memory handling data structures quite a bit.

    1.5.2 Symbol Relocations

The dynamic linker has to perform a relocation for all symbols which are used at run-time and which are not known at link-time to be defined in the same object as the reference. Due to the way code is generated on some architectures it is possible to delay the processing of some relocations until the references in question are actually used. On many architectures this is the case for calls to functions. All other kinds of relocations always have to be processed before the object can be used. We will ignore the lazy relocation processing since this is just a method to delay the work. It eventually has to be done and so we will include it in our cost analysis. Performing all the relocations before the object is used is enabled by setting the environment variable LD_BIND_NOW to a non-empty value. Lazy relocation can be disabled for an individual object by adding the -z now option to the linker command line. The linker will then set the DF_BIND_NOW flag in the DT_FLAGS entry of the dynamic section to mark the DSO. This setting cannot be undone without relinking the DSOs or editing the binary, though, so this option should only be used if it is really wanted.

The actual lookup process is repeated from the start for each symbol relocation in each loaded object. Note that there can be many references to the same symbol in different objects. The result of the lookup can be different for each of the objects so there can be no shortcuts except for caching results for a symbol in each object in case more than one relocation references the same symbol. The lookup scope mentioned in the steps below is an ordered list of a subset of the loaded objects which can be different for each object itself. The way the scope is computed is quite complex and not really relevant here so we refer the interested reader to the ELF specification and section 1.5.4. What is important is that the length of the scope is normally directly dependent on the number of loaded objects. This is another factor where reducing the number of loaded objects improves performance.

There are today two different methods for the lookup process for a symbol. The traditional ELF method proceeds in the following steps:


    Histogram for bucket list length in section [ 2] '.hash' (total of 1023 buckets):
     Addr: 0x42000114  Offset: 0x000114  Link to section: [ 3] '.dynsym'
      Length  Number  % of total  Coverage
           0     132       12.9%
           1     310       30.3%      15.3%
           2     256       25.0%      40.6%
           3     172       16.8%      66.0%
           4      92        9.0%      84.2%
           5      46        4.5%      95.5%
           6      14        1.4%      99.7%
           7       1        0.1%     100.0%
     Average number of tests:   successful lookup: 1.994080
                              unsuccessful lookup: 1.981427

Figure 3: Example Output for eu-readelf -I libc.so

    Histogram for bucket list length in section [ 2] '.hash' (total of 191 buckets):
     Addr: 0x00000114  Offset: 0x000114  Link to section: [ 3] '.dynsym'
      Length  Number  % of total  Coverage
           0     103       53.9%
           1      71       37.2%      67.0%
           2      16        8.4%      97.2%
           3       1        0.5%     100.0%
     Average number of tests:   successful lookup: 1.179245
                              unsuccessful lookup: 0.554974

Figure 4: Example Output for eu-readelf -I libnss_files.so

    1. Determine the hash value for the symbol name.

    2. In the first/next object in the lookup scope:

2.a Determine the hash bucket for the symbol using the hash value and the hash table size in the object.

2.b Get the name offset of the symbol and use it to obtain the NUL-terminated name.

2.c Compare the symbol name with the relocation name.

2.d If the names match, compare the version names as well. This only has to happen if both the reference and the definition are versioned. It requires a string comparison, too. If the version name matches or no such comparison is performed, we found the definition we are looking for.

2.e If the definition does not match, retry with the next element in the chain for the hash bucket.

2.f If the chain does not contain any further element there is no definition in the current object and we proceed with the next object in the lookup scope.

3. If there is no further object in the lookup scope the lookup failed.
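The hash value computed in step 1 uses the function fixed by the ELF gABI, which every linker and dynamic linker must use; it is essentially the following code:

    /* The symbol hash function as given by the ELF gABI.  */
    unsigned long
    elf_hash (const unsigned char *name)
    {
      unsigned long h = 0, g;
      while (*name != '\0')
        {
          h = (h << 4) + *name++;
          if ((g = h & 0xf0000000) != 0)
            h ^= g >> 24;
          h &= ~g;
        }
      return h;
    }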

Note that there is no problem if the scope contains more than one definition of the same symbol. The symbol lookup algorithm simply picks up the first definition it finds. Note also that a definition in a DSO being weak has no effect here; weak definitions only play a role in static linking. Having multiple definitions has some perhaps surprising consequences. Assume DSO A defines and references an interface and DSO B defines the same interface. If now B precedes A in the scope, the reference in A will be satisfied by the definition in B. It is said that the definition in B interposes the definition in A. This concept is very powerful since it allows a more specialized implementation of an interface to be used without replacing the general code. One example of this mechanism is the use of the LD_PRELOAD functionality of the dynamic linker where additional DSOs which were not present at link-time are introduced at run-time. But interposition can also lead to severe problems in ill-designed code. More on this in section 1.5.4.

Looking at the algorithm it can be seen that the performance of each lookup depends, among other factors, on the length of the hash chains and the number of objects in the lookup scope. These are the two loops described above. The lengths of the hash chains depend on the number of symbols and the choice of the hash table size. Since the hash function used in the initial step of the algorithm must never change, these are the only two remaining variables. Many linkers do not put special emphasis on selecting an appropriate table size. The GNU linker tries to optimize the hash table size for minimal lengths of the chains if it gets passed the -O option (note: the linker, not the compiler, needs to get this option).

A note on the current implementation of the hash table optimization. The GNU binutils linker has a simple-minded heuristic which often favors small table sizes over short chain lengths. For large projects this might very well increase the startup costs. The overall memory consumption will sometimes be significantly reduced, which might compensate sooner or later, but it is still advised to check the effectiveness of the optimization. A new linker implementation is going to be developed and it contains a better algorithm.

To measure the effectiveness of the hashing two numbers are important:

The average chain length for a successful lookup.

The average chain length for an unsuccessful lookup.

It might be surprising to talk about unsuccessful lookups here but in fact they are the rule. Note that unsuccessful means only unsuccessful in the current object. Only for objects which implement almost everything they get looked in for is the successful lookup number more important. In this category there are basically only two objects on a Linux system: the C library and the dynamic linker itself.

Some versions of the readelf program compute the value directly and the output is similar to figures 3 and 4. The data in these examples shows us a number of things. Based on the number of symbols (2027 versus 106) the chosen table size is radically different. For the smaller table the linker can afford to waste 53.9% of the hash table entries which contain no data. That's only 412 bytes on a gABI-compliant system. If the same amount of overhead were allowed for the libc.so binary the table would be 4 kilobytes or more larger. That is a big difference. The linker has a fixed cost function integrated which takes the table size into account.

The increased relative table size means we have significantly shorter hash chains. This is especially true for the average chain length for an unsuccessful lookup. The average for the small table is only 28% of that of the large table.

What these numbers should show is the effect of reducing the number of symbols in the dynamic symbol table. With significantly fewer symbols the linker has a much better chance to counter the effects of the suboptimal hashing function.

Another factor in the cost of the lookup algorithm is connected with the strings themselves. Simple string comparison is used on the symbol names which are stored in a string table associated with the symbol table data structures. Strings are stored in the C format; they are terminated by a NUL byte and no initial length field is used. This means string comparisons have to proceed until a non-matching character is found or until the end of the string. This approach is susceptible to long strings with common prefixes. Unfortunately this is not uncommon.

    namespace some_namespace {
      class some_longer_class_name {
        int member_variable;
      public:
        some_longer_class_name (int p);
        int the_getter_function (void);
      };
    }

The name mangling scheme used by the GNU C++ compiler before version 3.0 put the name of a class member first, along with a description of the parameter list, and following it the other parts of the name such as namespaces and nested class names. The result is a name which is distinguishable at the beginning if the member names are different. For the example above, the mangled names for the two member functions look as shown in figure 5.

In the new mangling scheme used in today's gcc versions and all other compilers which are compatible with the common C++ ABI, the names start with the namespaces and class names and end with the member names. Figure 6 shows the result for the little example. The mangled names for the two member functions differ only after the 43rd character. This is really bad performance-wise if the two symbols should fall into the same hash bucket.4

Ada has similar problems. The standard Ada library for gcc has all symbols prefixed with ada__, then the package and sub-package names, followed by the function name. Figure 7 shows a short excerpt of the list of symbols from the library. The first 23 characters are the same for all the names.

The length of the strings in both mangling schemes is worrisome since each string has to be compared completely when the symbol itself is searched for. The names in the example are not extraordinarily long either. Looking through the standard C++ library one can find many names longer than 120 characters and even this is not the longest. Other system libraries feature names longer than 200 characters and complicated, well-designed C++ projects with many namespaces, templates, and nested classes can feature names with more than 1,000 characters.

4 Some people suggested "Why not search from the back?" Think about it, these are C strings, not PASCAL strings. We do not know the length and therefore would have to read every single character of the string to determine the length. The result would be worse.


    __Q214some_namespace22some_longer_class_namei
    the_getter_function__Q214some_namespace22some_longer_class_name

Figure 5: Mangled names using pre-gcc 3 scheme

    _ZN14some_namespace22some_longer_class_nameC1Ei
    _ZN14some_namespace22some_longer_class_name19the_getter_functionEv

Figure 6: Mangled names using the common C++ ABI scheme

    ada__calendar__delays___elabb
    ada__calendar__delays__timed_delay_nt
    ada__calendar__delays__to_duration

Figure 7: Names from the standard Ada library

One plus point for design, but minus 100 points for performance.

With the knowledge of the hashing function and the details of the string lookup let us look at a real-world example: OpenOffice.org. The package contains 144 separate DSOs. During startup about 20,000 relocations are performed. Many of the relocations are performed as the result of dlopen calls and therefore cannot be optimized away by using prelink [7]. The number of string comparisons needed during the symbol resolution can be used as a fair value for the startup overhead. We compute an approximation of this value now.

The average chain length for an unsuccessful lookup in all DSOs of the OpenOffice.org 1.0 release on IA-32 is 1.1931. This means for each symbol lookup the dynamic linker has to perform on average 72 × 1.1931 = 85.9032 string comparisons. For 20,000 symbols the total is 1,718,064 string comparisons. The average length of an exported symbol defined in the DSOs of OpenOffice.org is 54.13. Even if we assume that only 20% of the string is searched before finding a mismatch (which is an optimistic guess since every symbol name is compared completely at least once to match itself) this would mean a total of more than 18.5 million characters have to be loaded from memory and compared. No wonder that the startup is so slow, especially since we ignored other costs.

To compute the number of lookups the dynamic linker performs one can use the help of the dynamic linker. If the environment variable LD_DEBUG is set to symbols one only has to count the number of lines which start with symbol=. It is best to redirect the dynamic linker's output into a file with LD_DEBUG_OUTPUT. The number of string comparisons can then be estimated by multiplying the count with the average hash chain length. Since the collected output contains the name of the file which is looked at it would even be possible to get more accurate results by multiplying with the exact hash chain length for the object.

Changing any of the factors (number of exported symbols, length of the symbol strings, number and length of common prefixes, number of DSOs, and hash table size optimization) can reduce the costs dramatically. In general, the percentage of the time the dynamic linker spends on relocations during startup is around 50-70% if the binary is already in the file system cache, and about 20-30% if the file has to be loaded from disk.5 It is therefore worth spending time on these issues and in the remainder of the text we will introduce methods to do just that. So far to remember: pass -O1 to the linker to generate the final product.

    1.5.3 The GNU-style Hash Table

All the optimizations proposed in the previous section still leave symbol lookup as a significant factor. A lot of data has to be examined and loading all this data into the CPU cache is expensive. As mentioned above, the original ELF hash table handling has no more flexibility so any solution would have to replace it. This is what the GNU-style hash table handling does. It can peacefully coexist with the old-style hash table handling by having its own dynamic section entry (DT_GNU_HASH). Updated dynamic linkers will use the new hash table instead of the old, thereby providing completely transparent backward compatibility support. The new hash table implementation, like the old, is self-contained in each executable and DSO so it is no problem to have some binaries with the new and some with only the old format in the same process.

5 These numbers assume pre-linking is not used.

The main cost for the lookup, especially for certain binaries, is the comparison of the symbol names. If the number of comparisons which actually have to be performed can be reduced we can gain big. A second possible optimization is the layout of the data. The old-style hash table with its linked list of symbol table entries is not necessarily good for the CPU cache. CPU caches work particularly well when the used memory locations are consecutive. A linked list can jump wildly around and render CPU cache loading and prefetching less effective.

The GNU-style hash tables set out to solve these problems and more. Since compatibility with existing runtime environments could be maintained by providing the old-style hash tables in parallel, no restrictions on the changes were needed. The new lookup process is therefore slightly different:

    1. Determine the hash value for the symbol name.

    2. In the first/next object in the lookup scope:

2.a The hash value is used to determine whether an entry with the given hash value is present at all. This is done with a 2-bit Bloom filter.6 If the filter indicates there is no such definition the next object in the lookup scope is searched.

2.b Determine the hash bucket for the symbol using the hash value and the hash table size in the object. The value is a symbol index.

2.c Get the entry from the chain array corresponding to the symbol index. Compare the value with the hash value of the symbol we are trying to locate. Ignore bit 0.

2.d If the hash value matches, get the name offset of the symbol and use it to obtain the NUL-terminated name.

2.e Compare the symbol name with the relocation name.

2.f If the names match, compare the version names as well. This only has to happen if both the reference and the definition are versioned. It requires a string comparison, too. If the version name matches or no such comparison is performed, we found the definition we are looking for.

2.g If the definition does not match and the value loaded from the hash bucket does not have bit 0 set, continue with the next entry in the hash bucket array.

2.h If bit 0 is set there is no further entry in the hash chain and we proceed with the next object in the lookup scope.

3. If there is no further object in the lookup scope the lookup failed.

6 http://en.wikipedia.org/wiki/Bloom_filter

This new process seems more complicated. Not only is this not really the case, it is also much faster. The number of times we actually have to compare strings is reduced significantly. The Bloom filter alone usually filters out 80% or more (in many cases 90+%) of all lookups. I.e., even in the case the hash chains are long no work is done since the Bloom filter helps to determine that there will be no match. This is done with a single memory access.

Second, comparing the hash value with that of the symbol table entry prevents yet more string comparisons. Each hash chain can contain entries with different hash values and this simple word comparison can filter out a lot of duplicates. There are rarely two entries with the same hash value in a hash chain, which means that an unsuccessful string comparison is rare. The probability for this is also increased by using a different hash function than the original ELF specification dictates. The new function is much better at spreading the values out over the value range of 32-bit values.
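The different hash function used for the GNU-style tables is the well-known Bernstein (DJB) hash, which spreads symbol names much more evenly over the 32-bit range; a sketch of it:

    /* The hash function used with the GNU-style hash table (DT_GNU_HASH):
       start at 5381, then multiply by 33 and add each character.  */
    #include <stdint.h>

    uint32_t
    gnu_hash (const char *name)
    {
      uint32_t h = 5381;
      for (const unsigned char *p = (const unsigned char *) name; *p != '\0'; ++p)
        h = h * 33 + *p;
      return h;
    }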

The hash chain array is organized to have all entries for the same hash bucket follow each other. There is no linked list and therefore the cache utilization is much better.

Only if the Bloom filter and the hash function test succeed do we access the symbol table itself. All symbol table entries for a hash chain are consecutive, too, so in case we need to access more than one entry the CPU cache prefetching will help here, too.

One last change over the old format is that the hash table only contains the few necessary records for undefined symbols. The majority of undefined symbols do not have to appear in the hash table. This in some cases significantly reduces the possibility of hash collisions and it certainly increases the Bloom filter success rate and reduces the average hash chain length. The result is significant speed-ups of 50% or more in code which cannot depend on pre-linking [7] (pre-linking is always faster).

This does not mean, though, that the optimization techniques described in the previous section are irrelevant. They still should very much be applied. Using the new hash table implementation just means that not optimizing the exported and referenced symbols will not have as big an effect on performance as it used to have.

The new hash table format was introduced in Fedora Core 6. The entire OS, with a few deliberate exceptions, is created without the compatibility hash table by using --hash-style=gnu. This means the binaries cannot be used on systems without support for the new hash table format in the dynamic linker. Since this is never a goal for any of the OS releases, making this decision was a no-brainer. The result is that all binaries are smaller than they would be with the second set of hash tables and in many cases even smaller than binaries using only the old format.


Going back to the OpenOffice.org example, we can make some estimates about the speedup. If the Bloom filter is able to filter out a low 80% of all lookups and the probability of duplicate hash values is a high 15%, we only have to actually compare on average 72 × 0.2 × 0.15 × 1.1931 = 2.58 strings. This is an improvement of a factor of 33. Adding to this the improved memory handling and respect for the CPU cache we have even higher gains. In real world examples we can reduce the lookup costs so that programs start up 50% faster or more.

    1.5.4 Lookup Scope

The lookup scope has so far been described as an ordered list of most of the loaded objects. While this is correct it has also been intentionally vague. It is now time to explain the lookup scope in more detail.

The lookup scope consists in fact of up to three parts. The main part is the global lookup scope. It initially consists of the executable itself and all its dependencies. The dependencies are added in breadth-first order. That means first the dependencies of the executable are added in the order of their DT_NEEDED entries in the executable's dynamic section. Then the dependencies of the first dependency are added in the same fashion. DSOs already loaded are skipped; they do not appear more than once on the list. The process continues recursively and it will stop at some point since there are only a limited number of DSOs available. The exact number of DSOs loaded this way can vary widely. Some executables depend on only two DSOs, others on 200.

If an executable has the DF_SYMBOLIC flag set (see section 2.2.7) the object with the reference is added in front of the global lookup scope. Note that only the object with the reference itself is added in front, not its dependencies. The effects and reasons for this will be explained later.

A more complicated modification of the lookup scope happens when DSOs are loaded dynamically using dlopen. If a DSO is dynamically loaded it brings in its own set of dependencies which might have to be searched. These objects, starting with the one which was requested in the dlopen call, are appended to the lookup scope if the object with the reference is among those objects which have been loaded by dlopen. That means those objects are not added to the global lookup scope and they are not searched for normal lookups. This third part of the lookup scope, which we will call the local lookup scope, is therefore dependent on the object which has the reference.

The behavior of dlopen can be changed, though. If the function gets passed the RTLD_GLOBAL flag, the loaded object and all the dependencies are added to the global scope. This is usually a very bad idea. The dynamically added objects can be removed and when this happens the lookups of all other objects are influenced. The entire global lookup scope is searched before the dynamically loaded object and its dependencies so that definitions would be found first in the global lookup scope objects before definitions in the local lookup scope. If the dynamic linker does the lookup as part of a relocation this additional dependency is usually taken care of automatically, but this cannot be arranged if the user looks up symbols in the lookup scope with dlsym.

And usually there is no reason to use RTLD_GLOBAL. For reasons explained later it is always highly advised to create dependencies with all the DSOs necessary to resolve all references. RTLD_GLOBAL is often used to provide implementations which are not available at the link time of a DSO. Since this should be avoided the need for this flag should be minimal. Even if the programmer has to jump through some hoops to work around the issues which are solved by RTLD_GLOBAL it is worth it. The pain of debugging and working around problems introduced by adding objects to the global lookup scope is much bigger.
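As an illustration of the preferred pattern, the following sketch loads a plug-in with the default RTLD_LOCAL visibility and resolves its entry point explicitly with dlsym instead of relying on RTLD_GLOBAL; the file name libplugin.so and the symbol plugin_init are made up for this example.

    /* Minimal sketch: load a plug-in without polluting the global lookup
       scope.  RTLD_LOCAL (the default) keeps its symbols out of the global
       scope; entry points are obtained explicitly with dlsym.  The file
       name and symbol name are hypothetical.  */
    #include <dlfcn.h>
    #include <stdio.h>

    int
    main (void)
    {
      void *handle = dlopen ("libplugin.so", RTLD_NOW | RTLD_LOCAL);
      if (handle == NULL)
        {
          fprintf (stderr, "dlopen: %s\n", dlerror ());
          return 1;
        }
      /* Look the entry point up explicitly in the loaded object.  */
      int (*plugin_init) (void) = (int (*)(void)) dlsym (handle, "plugin_init");
      if (plugin_init != NULL)
        plugin_init ();
      dlclose (handle);
      return 0;
    }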

The dynamic linker in the GNU C library has known one more extension since September 2004. This extension helps to deal with situations where multiple definitions of symbols with the same name are not compatible and therefore cannot be interposed and expected to work. This is usually a sign of design failures on the side of the people who wrote the DSOs with the conflicting definitions and also a failure on the side of the application writer who depends on these incompatible DSOs. We assume here that an application app is linked with a DSO libone.so which defines a symbol duplicate and that it dynamically loads a DSO libdynamic.so which depends on another DSO libtwo.so which also defines a symbol duplicate. When the application starts it might have a global scope like this:

    app libone.so libdl.so libc.so

If now libtwo.so is loaded, the additional local scope could be like this:

    libdynamic.so libtwo.so libc.so

This local scope is searched after the global scope, possibly with the exception of libdynamic.so which is searched first for lookups in this very same DSO if the DF_SYMBOLIC flag is used. But what happens if the symbol duplicate is required in libdynamic.so? After all we have said so far the result is always: the definition in libone.so is found since libtwo.so is only in the local scope which is searched after the global scope. If the two definitions are incompatible the program is in trouble.

This can be changed with a recent enough GNU C library by ORing RTLD_DEEPBIND into the flag word passed as the second parameter to dlopen. If this happens, the dynamic linker will search the local scope before the global scope for all objects which have been loaded by the call to dlopen. For our example this means the search order changes for all lookups in the newly loaded DSOs libdynamic.so and libtwo.so, but not for libc.so since this DSO has already been loaded. For the two affected DSOs a reference to duplicate will now find the definition in libtwo.so. In all other DSOs the definition in libone.so would be found.

While this might sound like a good solution for handling compatibility problems this feature should only be used if it cannot be avoided. There are several reasons for this:

The change in the scope affects all symbols and all the DSOs which are loaded. Some symbols might have to be interposed by definitions in the global scope which now will not happen.

Already loaded DSOs are not affected, which could cause inconsistent results depending on whether the DSO is already loaded (it might be dynamically loaded, so there is even a race condition).

LD_PRELOAD is ineffective for lookups in the dynamically loaded objects since the preloaded objects are part of the global scope, having been added right after the executable. Therefore they are looked at only after the local scope.

Applications might expect that local definitions are always preferred over other definitions. This (and the previous point) is already partly a problem with the use of DF_SYMBOLIC, but since this flag should not be used either, the arguments are still valid.

If any of the implicitly loaded DSOs is loaded explicitly afterward, its lookup scope will change.

Lastly, the flag is not portable.

The RTLD_DEEPBIND flag should really only be used as a last resort. Fixing the application to not depend on the flag's functionality is the much better solution.

    1.5.5 GOT and PLT

The Global Offset Table (GOT) and Procedure Linkage Table (PLT) are the two data structures central to the ELF run-time. We will now introduce the reasons why they are used and what consequences arise from that.

    Relocations are created for source constructs like

    extern int foo;
    extern int bar (int);

    int
    call_bar (void)
    {
      return bar (foo);
    }

The call to bar requires two relocations: one to load the value of foo and another one to find the address of bar. If the code were generated knowing the addresses of the variable and the function, the assembler instructions would directly load from or jump to the address. For IA-32 the code would look like this:

    pushl foo
    call bar

This would encode the addresses of foo and bar as part of the instruction in the text segment. If the address is only known to the dynamic linker the text segment would have to be modified at run-time. According to what we learned above this must be avoided.

Therefore the code generated for DSOs, i.e., when using -fpic or -fPIC, looks like this:

    movl foo@GOT(%ebx), %eax
    pushl (%eax)
    call bar@PLT

The address of the variable foo is now not part of the instruction. Instead it is loaded from the GOT. The address of the location in the GOT relative to the PIC register value (%ebx) is known at link-time. Therefore the text segment does not have to be changed, only the GOT.7

The situation for the function call is similar. The function bar is not called directly. Instead control is transferred to a stub for bar in the PLT (indicated by bar@PLT). For IA-32 the PLT itself does not have to be modified and can be placed in a read-only segment; each entry is 16 bytes in size. Only the GOT is modified and each entry consists of 4 bytes. The structure of the PLT in an IA-32 DSO looks like this:

    .PLT0: pushl 4(%ebx)
           jmp *8(%ebx)
           nop; nop
           nop; nop
    .PLT1: jmp *name1@GOT(%ebx)
           pushl $offset1
           jmp .PLT0@PC
    .PLT2: jmp *name2@GOT(%ebx)
           pushl $offset2
           jmp .PLT0@PC

7 There is one more advantage of using this scheme. If the instruction would be modified we would need one relocation per load/store instruction. By storing the address in the GOT only one relocation is needed.

This shows three entries; there are as many as needed, all having the same size. The first entry, labeled with .PLT0, is special. It is used internally as we will see. All the following entries belong to exactly one function symbol. The first instruction is an indirect jump where the address is taken from a slot in the GOT. Each PLT entry has one GOT slot. At startup time the dynamic linker fills the GOT slot with the address pointing to the second instruction of the appropriate PLT entry. I.e., when the PLT entry is used for the first time the jump ends at the following pushl instruction. The value pushed on the stack is also specific to the PLT slot and it is the offset of the relocation entry for the function which should be called. Then control is transferred to the special first PLT entry which pushes some more values on the stack and finally jumps into the dynamic linker. The dynamic linker has to make sure that the third GOT slot (offset 8) contains the address of the entry point in the dynamic linker. Once the dynamic linker has determined the address of the function it stores the result in the GOT entry which was used in the jmp instruction at the beginning of the PLT entry before jumping to the found function. This has the effect that all future uses of the PLT entry will not go through the dynamic linker, but will instead directly transfer to the function. The overhead for all but the first call is therefore only one indirect jump.

The PLT stub is always used if the function is not guaranteed to be defined in the object which references it. Please note that a simple definition in the object with the reference is not enough to avoid the PLT entry. Looking at the symbol lookup process it should be clear that the definition could be found in another object (interposition) in which case the PLT is needed. We will later explain exactly when and how to avoid PLT entries.

How exactly the GOT and PLT are structured is architecture-specific, specified in the respective psABI. What was said here about IA-32 is in some form applicable to some other architectures but not for all. For instance, while the PLT on IA-32 is read-only it must be writable for other architectures since instead of indirect jumps using GOT values the PLT entries are modified directly. A reader might think that the designers of the IA-32 ABI made a mistake by requiring an indirect, and therefore slower, call instead of a direct call. This is no mistake, though. Having a writable and executable segment is a huge security problem since attackers can simply write arbitrary code into the PLT and take over the program. We can anyhow summarize the costs of using GOT and PLT like this:

every use of a global variable which is exported uses a GOT entry and loads the variable values indirectly;

each function which is called (as opposed to referenced as a variable) which is not guaranteed to be defined in the calling object requires a PLT entry. The function call is performed indirectly by transferring control first to the code in the PLT entry which in turn calls the function.

for some architectures each PLT entry requires at least one GOT entry.

Avoiding a jump through the PLT therefore removes on IA-32 16 bytes of text and 4 bytes of data. Avoiding the GOT when accessing a global variable saves 4 bytes of data and one load instruction (i.e., at least 3 bytes of code and cycles during the execution). In addition each GOT entry has a relocation associated with it, with the costs described above.

    1.5.6 Running the Constructors

Once the relocations are performed the DSOs and the application code can actually be used. But there is one more thing to do: optionally the DSOs and the application must be initialized. The author of the code can define for each object a number of initialization functions which are run before the DSO is used by other code. To perform the initialization the functions can use code from their own object and all the dependencies. To make this work the dynamic linker must make sure the objects are initialized in the correct order, i.e., the dependencies of an object must be initialized before the object.

To guarantee this, the dynamic linker has to perform a topological sort on the list of objects. This sorting is not a linear process. Like all sorting algorithms the run-time is at least O(n log n) and since this is actually a topological sort the value is even higher. And what is more: since the order at startup need not be the same as the order at shutdown (when finalizers have to be run) the whole process has to be repeated.

So we have again a cost factor which is directly dependent on the number of objects involved. Reducing the number helps a bit even though the actual costs are normally much less than those of the relocation process.

At this point it is useful to look at the way to correctly write constructors and destructors for DSOs. Some systems had the convention that exported functions named _init and _fini are automatically picked as constructor and destructor respectively. This convention is still followed by GNU ld and using functions with these names on a Linux system will indeed cause the functions to be used in these capacities. But this is totally, 100% wrong!


  • By using these functions the programmer overwrites what-ever initialization and destruction functionality the sys-tem itself is using. The result is a DSO which is notfully initialized and this sooner or later leads to a catas-trophy. The correct way of adding constructors and de-structors is by marking functions with the constructorand destructor function attribute respectively.

void
__attribute__ ((constructor))
init_function (void)
{
  ...
}

void
__attribute__ ((destructor))
fini_function (void)
{
  ...
}

These functions should not be exported either (see sections 2.2.2 and 2.2.3) but this is just an optimization. With the functions defined like this the runtime will arrange that they are called at the right time, after performing whatever initialization is necessary beforehand.
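A small sketch combining both recommendations, i.e., using the attributes while keeping the functions out of the dynamic symbol table (here simply by making them static; the names and messages are made up):

#include <stdio.h>

static void
__attribute__ ((constructor))
init_function (void)
{
  /* Runs after the DSO and its dependencies have been relocated and
     initialized, before other code in this DSO is used.  */
  fputs ("libfoo: initializing\n", stderr);
}

static void
__attribute__ ((destructor))
fini_function (void)
{
  /* Runs when the DSO is unloaded or the process exits normally.  */
  fputs ("libfoo: finalizing\n", stderr);
}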

    1.6 Summary of the Costs of ELF

We have now discussed the startup process and how it is affected by the form of the binaries. We will now summarize the various factors so that we later on can determine the benefits of an optimization more easily.

Code Size As everywhere, a reduced size for code with the same semantics often means higher efficiency and performance. Smaller ELF binaries need less memory at run-time.

In general the compiler will always generate the best code possible and we do not cover this further. But it must be known that every DSO includes a certain overhead in data and code. Therefore fewer DSOs means smaller text.

Number of Objects The fact that a smaller number of objects containing the same functionality is beneficial has been mentioned in several places:

Fewer objects are loaded at run-time. This directly translates to fewer system calls. In the GNU dynamic linker implementation loading a DSO requires at least 8 system calls, all of which can be potentially quite expensive.

Related, the application and the dependencies with additional dependencies must record the names of the dependencies. This is not a terribly high cost but it certainly can add up if there are many dozens of dependencies.

The lookup scope grows. This is one of the dominating factors in the cost equation for the relocations.

More objects mean more symbol tables which in turn normally means more duplication. Undefined references are not collapsed into one and the handling of multiple definitions has to be sorted out by the dynamic linker. Moreover, symbols are often exported from a DSO to be used in another one. This would not have to happen if the DSOs were merged.

The sorting of initializers/finalizers is more complicated.

In general the dynamic linker has some overhead for each loaded DSO per process. Every time a new DSO is requested the list of already loaded DSOs must be searched which can be quite time consuming since DSOs can have many aliases.

Number of Symbols The number of exported and undefined symbols determines the size of the dynamic symbol table, the hash table, and the average hash table chain length (the classic ELF hash function is sketched after this list). The normal symbol table is not used at run-time and it is therefore not necessary to strip a binary of it. It has no impact on performance.

Additionally, fewer exported symbols means fewer chances for conflicts when using pre-linking (not covered further).

Length of Symbol Strings Long symbol names often cause unnecessary costs. A successful lookup of a symbol must match the whole string and comparing dozens or hundreds of characters takes time. Unsuccessful lookups suffer if common prefixes are long as in the new C++ mangling scheme. In any case long symbol names cause large string tables which must be present at run-time and thereby add costs in load time and in use of address space, which is an issue for 32-bit machines.

Number of Relocations Processing relocations constitutes the majority of the work during startup and therefore any reduction is directly noticeable.

Kind of Relocations The kind of relocations which are needed is important, too, since processing a relative relocation is much less expensive than a normal relocation. Also, relocations against text segments must be avoided.

Placement of Code and Data All executable code should be placed in read-only memory and the compiler normally makes sure this is done correctly. When creating data objects it is mostly up to the user to make sure they are placed in the correct segment. Ideally data is also read-only but this works only for constants. The second best choice is a zero-initialized variable which does not have to be initialized from file content. The rest has to go into the data segment.

$ env LD_DEBUG=statistics /bin/echo +++ some text +++
...:
...: run-time linker statistics:
...:   total startup time in dynamic loader: 748696 clock cycles
...:             time needed for relocation: 378004 clock cycles (50.4%)
...:                  number of relocations: 133
...:       number of relocations from cache: 5
...:            time needed to load objects: 193372 clock cycles (25.8%)
+++ some text +++
...:
...: run-time linker statistics:
...:            final number of relocations: 188
...:  final number of relocations from cache: 5

Figure 8: Gather Startup Statistics
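Returning to the Number of Symbols point above: the classic SysV ELF hash function, as given in the generic ABI, is sketched below. It touches every character of a symbol name, and an unsuccessful lookup additionally ends in string comparisons along the hash chain, which is why both the number of symbols and the length of their names matter.

#include <stdint.h>

/* The SysV ELF hash function from the generic ABI (sketch).  */
uint32_t
elf_hash (const char *name)
{
  uint32_t h = 0;
  while (*name != '\0')
    {
      h = (h << 4) + (unsigned char) *name++;
      uint32_t g = h & 0xf0000000;
      if (g != 0)
        h ^= g >> 24;
      h &= ~g;
    }
  return h;
}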

In the following we will not cover the first two points given here. It is up to the developer of the DSO to decide about this. There are no small additional changes to make the DSO behave better; these are fundamental design decisions. We have voiced an opinion here; whether it has any effect remains to be seen.

    1.7 Measuring ld.so Performance

To perform the optimizations it is useful to quantify the effect of the optimizations. Fortunately it is very easy to do this with glibc's dynamic linker. Using the LD_DEBUG environment variable it can be instructed to dump information related to the startup performance. Figure 8 shows an example invocation, for the echo program in this case.

The output of the dynamic linker is divided into two parts. The part before the program's output is printed right before the dynamic linker turns over control to the application after having performed all the work we described in this section. The second part, a summary, is printed after the application terminated (normally). The actual format might vary for different architectures. It includes the timing information only on architectures which provide easy access to a CPU cycle counter register (modern IA-32, IA-64, x86-64, Alpha at the moment). For other architectures these lines are simply missing.

The timing information provides absolute values for the total time spent during startup in the dynamic linker, the time needed to perform relocations, and the time spent in the kernel to load/map binaries. In this example the relocation processing dominates the startup costs with more than 50%. There is a lot of potential for optimizations here. The unit used to measure the time is CPU cycles. This means that the values cannot even be compared across different implementations of the same architecture. E.g., the measurements for a Pentium III and a Pentium 4 machine will be quite different. But the measurements are perfectly suitable to measure improvements on one machine, which is what we are interested in here.

Since relocations play such a vital part in the startup performance some information on the number of relocations is printed. In the example a total of 133 relocations are performed, from the dynamic linker, the C library, and the executable itself. Of these, 5 relocations could be served from the relocation cache. This is an optimization implemented in the dynamic linker to handle the case of multiple relocations against the same symbol more efficiently. After the program itself terminated the same information is printed again. The total number of relocations here is higher since the execution of the application code caused a number, 55 to be exact, of run-time relocations to be performed.

The number of relocations which are processed is stable across successive runs of the program. The time measurements are not. Even in single-user mode with no other programs running there would be differences since the cache and main memory have to be accessed. It is therefore necessary to average the run-time over multiple runs.

It is obviously also possible to count the relocations without running the program. Running readelf -d on the binary shows the dynamic section, in which the DT_RELSZ, DT_RELENT, DT_RELCOUNT, and DT_PLTRELSZ entries are interesting. They allow computing the number of normal and relative relocations as well as the number of PLT entries. If one does not want to do this by hand the relinfo script in appendix A can be used.


2 Optimizations for DSOs

In this section we describe various optimizations based on C or C++ variables or functions. The choice of variable or function, unless explicitly said, is made deliberately since many of the implementations apply to the one or the other. But there are some architectures where functions are handled like variables. This is mainly the case for embedded RISC architectures like SH-3 and SH-4 which have limitations in the addressing modes they provide which make it impossible to implement the function handling as for other architectures. In most cases it is no problem to apply the optimizations for variables and functions at the same time. This is what in fact should be done all the time to achieve best performance across all architectures.

The most important recommendation is to always use -fpic or -fPIC when generating code which ends up in DSOs. This applies to data as well as code. Code which is not compiled this way almost certainly will contain text relocations. For these there is no excuse. Text relocations require extra work to apply in the dynamic linker. And the argument that the code is not shared because no other process uses the DSO is invalid. In this case it is not useful to use a DSO in the first place; the code should just be added to the application code.

Some people try to argue that the use of -fpic/-fPIC on some architectures has too many disadvantages. This is mainly brought forward in arguments about IA-32. Here the use of %ebx as the PIC register deprives the compiler of one of the precious registers it could use for optimization. But this is really not that much of a problem. First, not having %ebx available was never a big penalty. Second, in modern compilers (e.g., gcc after release 3.1) the handling of the PIC register is much more flexible. It is not always necessary to use %ebx, which can help eliminate unnecessary copy operations. And third, by providing the compiler with more information as explained later in this section a lot of the overhead in PIC can be removed. This all combined will lead to overhead which is in most situations not noticeable.

When gcc is used, the options -fpic/-fPIC also tell the compiler that a number of optimizations which are possible for the executable cannot be performed. This has to do with symbol lookups and cutting them short. Since the compiler can assume the executable to be the first object in the lookup scope it knows that all references to global symbols known to be defined in the executable are resolved locally. Access to locally defined variables could be done directly, without using indirect access through the GOT. This is not true for DSOs: a DSO can appear later in the lookup scope and its definitions might be interposed by earlier objects. It is therefore mandatory to compile all code which can potentially end up in a DSO with -fpic/-fPIC since otherwise the DSO might not work correctly. There is no compiler option to separate this optimization from the generation of position-independent code.
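A tiny sketch of the difference (the file and variable names are made up); assume the following code is compiled with -fpic and linked into a DSO:

int counter;        /* exported: an object earlier in the lookup scope,
                       e.g. the executable, may define its own counter
                       and thereby interpose this definition             */

int
bump (void)
{
  /* In a DSO the compiler must load the address of counter from the
     GOT because the definition actually used is only known at run
     time.  In the main executable the same access could be resolved
     directly at link time.  */
  return ++counter;
}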

Which of the two options, -fpic or -fPIC, has to be used must be decided on a case-by-case basis. For some architectures there is no difference at all and people tend to be careless about the use. For most RISC architectures there is a big difference. As an example, this is the code gcc generates for SPARC to read a global variable global when using -fpic:

sethi %hi(_GLOBAL_OFFSET_TABLE_-4),%l7
call  .LLGETPC0
add   %l7,%lo(_GLOBAL_OFFSET_TABLE_+4),%l7
ld    [%l7+global],%g1
ld    [%g1],%g1

    And this is the code sequence if -fPIC is used:

sethi %hi(_GLOBAL_OFFSET_TABLE_-4),%l7
call  .LLGETPC0
add   %l7,%lo(_GLOBAL_OFFSET_TABLE_+4),%l7
sethi %hi(global),%g1
or    %g1,%lo(global),%g1
ld    [%l7+%g1],%g1
ld    [%g1],%g1

In both cases %l7 is loaded with the address of the GOT first. Then the GOT is accessed to get the address of global. While in the -fpic case one instruction is sufficient, three instructions are needed in the -fPIC case. The -fpic option tells the compiler that the size of the GOT does not exceed an architecture-specific value (8kB in case of SPARC). If only that many GOT entries can be present the offset from the base of the GOT can be encoded in the instruction itself, i.e., in the ld instruction of the first code sequence above. If -fPIC is used no such limit exists and so the compiler has to be pessimistic and generate code which can deal with offsets of any size. The difference in the number of instructions in this example correctly suggests that -fpic should be used at all times unless it is absolutely necessary to use -fPIC. The linker will fail and write out a message when this point is reached and one only has to recompile the code.

When writing assembler code by hand it is easy to miss cases where position-independent code sequences must be used. The non-PIC sequences look, and actually are, simpler and more natural. Therefore it is extremely important in these cases to check whether the DSO is marked to contain text relocations. This is easy enough to do:

    readelf -d binary | grep TEXTREL


If this produces any output, text relocations are present and one had better start looking for what causes them.

    2.1 Data Definitions

Variables can be defined in C and C++ in several different ways. Basically there are three kinds of definitions:

Common Common variables are more widely used in FORTRAN but they got used in C and C++ as well to work around mistakes of programmers. Since in the early days people used to drop the extern keyword from variable definitions, in the same way it is possible to drop it from function declarations, the compiler often has multiple definitions of the same variable in different files. To help the poor and clueless programmer the C/C++ compiler normally generates common variables for uninitialized definitions such as

    int foo;

For common variables there can be more than one definition and they all get unified into one location in the output file. Common variables are always initialized with zero. This means their value does not have to be stored in an ELF file. Instead the file size of a segment is chosen smaller than the memory size as described in 1.4.

Uninitialized If the programmer uses the compiler command line option -fno-common the generated code will contain uninitialized variables instead of common variables if a variable definition has no initializer. Alternatively, individual variables can be marked like this:

int foo __attribute__ ((nocommon));

The result at run-time is the same as for common variables: no value is stored in the file. But the representation in the object file is different and it allows the linker to find multiple definitions and flag them as errors. Another difference is that it is possible to define aliases, i.e., alternative names, for uninitialized variables while this is not possible for common variables.

With recent gcc versions there is another method to create uninitialized variables. Variables initialized with zero are stored this way. Earlier gcc versions stored them as initialized variables which took up space in the file. This is a bit cumbersome for variables with structured types. So, sticking with the per-variable attribute is probably the best way.

Initialized The variable is defined and initialized to a programmer-defined value. In C:

    int foo = 42;

In this case the initialization value is stored in the file. As described in the previous case, initializations with zero are treated specially by some compilers. (A small sketch contrasting the three kinds of definitions follows below.)
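A small sketch contrasting the three kinds of definitions (assuming gcc; note that since gcc 10 the -fno-common behavior is the default, so the first line then behaves like the second):

int common_var;                             /* common: merged by the linker,
                                               no file-size cost              */
int uninit_var __attribute__ ((nocommon));  /* uninitialized: placed in .bss,
                                               duplicate definitions are
                                               flagged as errors              */
int zero_var = 0;                           /* recent gcc also places this in
                                               .bss, no initializer stored    */
int init_var = 42;                          /* initialized: the value 42 is
                                               stored in the file's data
                                               segment                        */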

Normally there is not much the user has to do to create optimal ELF files. The compiler will take care of avoiding the initializers. To achieve the best results even with old compilers it is desirable to avoid explicit initializations with zero if possible. This normally creates common variables, but if combined with gcc's -fno-common flag the same reports about multiple definitions one would get for initialized variables can be seen.

There is one thing the programmer is responsible for. As an example look at the following code:

bool is_empty = true;
char s[10];

const char *get_s (void) {
  return is_empty ? NULL : s;
}

The function get_s uses the boolean variable is_empty to decide what to do. If the variable has its initial value the variable s is not used. The initialization value of is_empty is stored in the file since the initializer is non-zero. But the semantics of is_empty is chosen arbitrarily. There is no requirement for that. The code could instead be rewritten as:

bool not_empty = false;
char s[10];

const char *get_s (void) {
  return not_empty ? s : NULL;
}

Now the semantics of the control variable is reversed. It is initialized with false which is guaranteed to have the numeric value zero. The test in the function get_s has to be changed as well but the resulting code is neither less nor more efficient than the old code.

By simple transformations like that it is often possible to avoid creating initialized variables and instead use common or uninitialized variables. This saves disk space and can improve startup times. The transformation is not limited to boolean values. It is sometimes possible to do it for variables which can take on more than two values, especially enumeration values. When defining enums one should always put the value which is most often used as initializer first in the enum definition. I.e.,


enum { val1, val2, val3 };

    should be rewritten as

    enum { val3, val1, val2 };

if val3 is the value most often used for initializations. To summarize, it is always preferable to add variables as uninitialized or initialized with zero as opposed to initialized with a value other than zero.
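A short sketch of the effect of such a reordering (the enum and variable names are made up): with the frequent initial state placed first it has the numeric value zero, so the variable can live in .bss instead of being stored in the file.

enum state { closed, opening, open };   /* closed is the usual initial state
                                           and now has the value 0            */

enum state conn_state = closed;         /* zero-initialized: no data stored
                                           in the ELF file                    */

/* With  enum state { open, opening, closed };  the same definition would
   have the value 2 and would need an initializer in the data segment.  */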

    2.2 Export Control

When creating a DSO from a collection of object files the dynamic symbol table will by default contain all the symbols which are globally visible in the object files. In most cases this set is far too large. Only the symbols which are actually part of the ABI should be exported. Failing to restrict the set of exported symbols has numerous drawbacks:

Users of the DSO could use interfaces which they are not supposed to. This is problematic in revisions of the DSO which are meant to be binary compatible. The correct assumption of the DSO developer is that interfaces which are not part of the ABI can be changed arbitrarily. But there are always users who claim to know better or do not care about rules.

According to the ELF lookup rules all symbols in the dynamic symbol table can be interposed (unless the visibility of the symbol is restricted). I.e., definitions from other objects can be used. This means that local references cannot be bound at link time. If it is known or intended that the local definition should always be used the symbol in the reference must not be exported or the visibility must be restricted.

The dynamic symbol table and its string table are available at run-time and therefore must be loaded. This can require a significant amount of memory, even though it is read-only. One might think that the size is not much of an issue but if one examines the length of the mangled names of C++ variables or functions, it becomes obvious that this is not the case. In addition we have the run-time costs of larger symbol tables which we discussed in the previous section.

We will now present a number of possible solutions for the problem of exported interfaces. Some of them solve the same problem in slightly different ways. We will say which method should be preferred. The programmer has to make sure that whatever is used is available on the target system.

In the discussions of the various methods we will use one example:

int last;

int next (void) {
  return ++last;
}

int index (int scale) {
  return next ()

static int last;

static int next (void) {
  return ++last;
}

int index (int scale) {
  return next ()

can prevent bad surprises.8

    2.2.3 Define Per-Symbol Visibility

Instead of changing the default visibility the programmer can choose to hide individual symbols. Or, if the default visibility is hidden, make specific symbols exportable by setting the visibility to default.

Since the C language does not provide mechanisms to define the visibility of a function or variable gcc resorts once more to using attributes:

int last
__attribute__ ((visibility ("hidden")));

int
__attribute__ ((visibility ("hidden")))
next (void) {
  return ++last;
}

int index (int scale) {
  return next ()

compared for equality. This rule would be violated with a fast and simple-minded implementation of the protected visibility. Assume an application which references a protected function in a DSO. Also in the DSO is another function which references said function. The pointer in the application points to the PLT entry for the function in the application's PLT. If a protected symbol lookup simply returned the address of the function inside the DSO the addresses would differ.

In programming environments without this requirement on function pointers the use of the protected visibility would be useful and fast. But since there usually is only one implementation of the dynamic linker on the system and this implementation has to handle C programs as well, the use of protected is highly discouraged.

There are some exceptions to these rules. It is possible to create ELF binaries with non-standard lookup scopes. The simplest example is the use of DF_SYMBOLIC (or of DT_SYMBOLIC in old-style ELF binaries, see page 25). In these cases the programmer decided to create a non-standard binary and therefore accepts the fact that the rules of the ISO C standard do not apply.

    2.2.4 Define Visibility for C++ Classes

For C++ code we can use the attributes as well but they have to be used very carefully. Normal function or variable definitions can be handled as in C. The extra name mangling performed has no influence on the visibility. The story is different when it comes to classes. The symbols and code created for class definitions are member functions and static data or function members. These variables and functions can easily be declared as hidden but one has to be careful. First an example of the syntax.

class foo {
  static int u __attribute__
    ((visibility ("hidden")));
  int a;
public:
  foo (int b = 1);
  void offset (int n);
  int val () const __attribute__
    ((visibility ("hidden")));
};

int foo::u __attribute__
  ((visibility ("hidden")));

foo::foo (int b) : a (b) { }
void foo::offset (int n) { u = n; }
int
__attribute__ ((visibility ("hidden")))
foo::val () const { return a + u; }

In this example code the static data member u and the member function val are defined as hidden. The symbols cannot be accessed outside the DSO the definitions appear in. Please note that this is an additional restriction on top of the C++ access rules. For the member functions one way around the problem is to instantiate the class in more than one DSO. This usually causes no problems and only adds to code bloat.

Things are getting more interesting when static data members or static local variables in member functions are used. In this case there must be exactly one definition used (please note: used, not present). To obey this rule it is either necessary to not restrict the export of the static data member or member function from the DSO or to make sure all accesses of the data or function are made in the DSO with the definitions. If multiple definitions are present it is very easy to make mistakes when hiding static data members or the member functions with static variables since the generated code has no way of knowing that there are multiple definitions of the variables. This leads to very hard to debug bugs.

In the example code above the static data member u is declared hidden. All users of the member must be defined in the same DSO. C++ access rules restrict access only to member functions, regardless of where they are defined. To make sure all users are defined in the DSO with the definition of u it is usually necessary to avoid inline functions which access the hidden data since the inline-generated code can be placed in any DSO which contains code using the class definition. The member function offset is a prime example of a function which should be inlined, but since it accesses u this cannot be done. Instead offset is exported as an interface from the DSO which contains the definition of u.

If a member function is marked as hidden, as val is in the example, it cannot be called from outside the DSO. Note that in the example the compiler allows global access to the member function since it is defined as a public member. The linker, not the compiler, will complain if this member function is used outside the DSO with the instantiation. Inexperienced or not fully informed users might interpret this problem as a lack of instantiation which then leads to problems due to multiple definitions.

Because these problems are so hard to debug it is essential to get the compiler involved in making sure the user follows the necessary rules. The C++ type system is rich enough to help if the implementor puts some additional effort in it. The key is to mimic the actual symbol access as closely as possible with the class definition. For this reason the class definitions of the example above should actually look like this:

class foo {
  static int u __attribute__
    ((visibility ("hidden")));
  int a;
public:
  foo (int b = 1);
  int val () const __attribute__
    ((visibility ("hidden")));
  void offset (int n);
};

class foo_ext : protected foo {
public:
  foo_ext (int b = 1) : foo (b) { }
  void offset (int n)
  { return foo::offset (n); }
};

The class foo is regarded as a private class, not to be used outside the DSO with the instantiation. The public interface would be the class foo_ext. It provides access to the two public interfaces of the underlying class. As long as the users of the DSO containing the definitions respect the requirement that only foo_ext can be used, there is no way for accesses to foo::u and foo::val outside the DSO containing the definitions to escape the compiler's notice.

Template classes and functions are no different. The syntax is the same. Non-inline function definitions get yet again less readable but that is something which can be mostly hidden with a few macros.

template<class T>
class a {
  T u;
public:
  a (T a = 0);
  T r () const __attribute__
    ((visibility ("hidden")));
};

template<class T> a<T>::a (T a)
{ u = a; }
template<class T> T
__attribute__ ((visibility ("hidden")))
a<T>::r () const { return u; }

For templatized classes the problem of making sure that, if necessary, only one definition is used is even harder to solve due to the various approaches to instantiation.

One sort of function which can safely be kept local and not exported are inline functions, either defined in the class definition or separately. Each compilation unit must have its own set of all the used inline functions. And all the functions from all the DSOs and the executable had better be the same and are therefore interchangeable. It is possible to mark all inline functions explicitly as hidden but this is a lot of work. Since version 4.0 gcc knows about the option -fvisibility-inlines-hidden which does just what is wanted. If this option is used a referenced inline function is assumed to be hidden and an out-of-line copy of the function is marked with STV_HIDDEN. I.e., if the function is not inlined the separate function created is not exported. This is quite a frequent situation since not all functions the programmer thinks should be inlined are eligible according to the compiler's analysis. This option is usable in almost all situations. Only if the functions in the different DSOs can be different or if the code depends on exactly one copy of the function ever being used (e.g., if the function address is expected to be the same) should this option be avoided.

If a C++ class is used only for the implementation and not used in any interface of a DSO using the code, then it would be possible to mark each member function and static data element as hidden. This is cumbersome, error-prone, and incomplete, though. There might be a large number of members which need to be marked, and when a new member is added it is easy to forget about adding the necessary attributes. The incompleteness stems from the fact that the C++ compiler automatically generates a few member functions such as constructors and destructors. These member functions would not be affected by the attributes.

The solution to these problems is to explicitly determine the visibility of the entire class. Since version 4.0 gcc has support for this. There are two ways to achieve the goal. First, the already mentioned pragma can be used.

#pragma GCC visibility push(hidden)
class foo {
  ...
};
#pragma GCC visibility pop

All member functions and static data members of foo are automatically defined as hidden. This extends even to implicitly generated functions and operators if necessary.

The second possibility is to use yet another extension in gcc 4.0. It is possible to mark a class as hidden when it is defined. The syntax is this:

class __attribute__ ((visibility ("hidden")))
foo {
  ...
};

Just as with the pragma, all defined functions are defined as hidden symbols. Explicitly using attributes should be preferred since the effect of the pragmas is not always obvious. If the push and pop lines are far enough from each other a programmer might accidentally add a new declaration in the range even though the visibility of this new declaration is not meant to be affected. Both the pragma and the class attribute should only be used in internal headers. In the headers which are used to expose the API of the DSO it makes no sense to have them since the whole point is to hide the implementation details. This means it is always a good idea to differentiate between internal and external header files.

Defining entire classes with hidden visibility has some problems which cannot be modeled with sophisticated class layout or by moving the definition into private headers. For exception handling the compiler generates data structures (typeinfo symbols) which are also marked according to the visibility attribute used. If an object of this type is thrown the catch operation has to look for the typeinfo information. If that information is in a different DSO the search will be unsuccessful and the program will terminate. All classes which are used in exception handling and where the throw and catch are not both guaranteed to reside in the DSO with the definition must be declared with default visibility. Individual members can still be marked with a visibility attribute but since the typeinfo data is synthesized by the compiler there is no way for the programmer to overwrite a hidden visibility attribute associated with the class.

The use of the most restrictive visibility possible can be of big benefit for C++ code. Each inline function which is (also) available as a stand-alone function, and every synthesized function or variable, has a symbol associated with it which is by default exported. For templatized classes this is even worse, since each instantiated class can bring with it many more symbols. It is best to design the code right away so that the visibility attributes can be applied whenever possible. Compatibility with older compilers can easily be achieved by using macros.
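One possible shape for such macros is sketched below (the macro name is made up); on compilers without the visibility attribute it simply expands to nothing.

#if defined __GNUC__ && __GNUC__ >= 4
# define HIDDEN __attribute__ ((visibility ("hidden")))
#else
# define HIDDEN
#endif

/* Usage: */
int internal_counter HIDDEN;
int internal_helper (int) HIDDEN;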

    2.2.5 Use Export Maps

If for one reason or another none of the previous two solutions is applicable the next best possibility is to instruct the linker to do something. Only the GNU and Solaris linkers are known to support this, at least with the syntax presented here. Using export maps is not only useful for the purpose discussed here. When discussing maintenance of APIs and ABIs in chapter 3 the same kind of input file is used. This does not mean the previous two methods should not be preferred. Instead, export (and symbol) maps can and should always be used in addition to the other methods described.