
GCC: --whole-archive recipe for static linking to pthread stopped working in recent gcc versions

Static linking against pthread is a difficult topic on Linux. For a long time, the working recipe was to wrap -lpthread as -Wl,--whole-archive -lpthread -Wl,--no-whole-archive (the details can be found in this answer).

The effect was that the pthread symbols ended up strong rather than weak. Since around Ubuntu 18.04 (between gcc 5.4.0 and gcc 7.4.0), that behavior seems to have changed: pthread symbols now always end up as weak symbols, regardless of the --whole-archive option.

Because of that, the --whole-archive recipe stopped working. The point of this question is to understand what changed in the toolchain (compiler, linker, standard library), and what can be done to get the old behavior back.

Example:

#include <mutex>

int main(int argc, char **argv) {
  std::mutex mutex;
  mutex.lock();
  mutex.unlock();
  return 0;
}

In all of the following examples, the same compile command was used:

g++ -std=c++11 -Wall -static simple.cpp  -Wl,--whole-archive -lpthread  -Wl,--no-whole-archive

Before, when compiling with -static, pthread symbols (e.g., pthread_mutex_lock) were strong (marked as T by nm), but now they are weak (W):

Ubuntu 14.04: docker run --rm -it ubuntu:14.04 bash

$ apt-get update
$ apt-get install g++

$ g++ --version
g++ (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4

$ nm a.out | grep pthread_mutex_lock
0000000000408160 T __pthread_mutex_lock
00000000004003e0 t __pthread_mutex_lock_full
0000000000408160 T pthread_mutex_lock

Ubuntu 16.04: docker run --rm -it ubuntu:16.04 bash

$ g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609

$ nm a.out | grep pthread_mutex_lock
00000000004077b0 T __pthread_mutex_lock
0000000000407170 t __pthread_mutex_lock_full
00000000004077b0 T pthread_mutex_lock

Ubuntu 18.04: docker run --rm -it ubuntu:18.04 bash

$ g++ --version
g++ (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0

$ nm ./a.out  | grep pthread_mutex_lock
0000000000407010 T __pthread_mutex_lock
00000000004069d0 t __pthread_mutex_lock_full
0000000000407010 W pthread_mutex_lock

To sum it up:

  • Ubuntu 14.04 & 16.04: T pthread_mutex_lock (strong symbol)
  • Ubuntu 18.04: W pthread_mutex_lock (weak symbol)
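
A quick way to check whether a given binary is affected is to filter the nm output for weak pthread symbols (a minimal check; the exact set of affected symbols depends on the glibc version):

$ nm a.out | grep ' W pthread_'

If this prints anything, the --whole-archive recipe did not take effect.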

In more complex programs, this can lead to segmentation faults, for example with the following code (the unmodified file can be found here):

#include <pthread.h>
#include <cstring>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mutex;

void myfunc(int i) {
    mutex.lock();
    std::cout << i << " " << std::this_thread::get_id() << std::endl << std::flush;
    mutex.unlock();
}

int main(int argc, char **argv) {
    std::cout << "main " << std::this_thread::get_id() << std::endl;
    std::vector<std::thread> threads;
    unsigned int nthreads;

    if (argc > 1) {
        nthreads = std::strtoll(argv[1], NULL, 0);
    } else {
        nthreads = 1;
    }

    for (unsigned int i = 0; i < nthreads; ++i) {
        threads.push_back(std::thread(myfunc, i));
    }
    for (auto& thread : threads) {
        thread.join();
    }
}

Attempts to produce a static binary failed, for example:

$ g++ thread_get_id.cpp -Wall -std=c++11 -O3 -static -pthread -Wl,--whole-archive -lpthread -Wl,--no-whole-archive
$ ./a.out
Segmentation fault (core dumped)

I tried dropping -O3, switching to clang++, switching to the gold linker, etc., but it always crashes. From my understanding, the reason for the crashes in the static binary is that essential functions (such as pthread_mutex_lock) do not end up as strong symbols, so their real implementations are missing from the final binary, leading to runtime errors.
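
One common explanation for such crashes is that glibc's libc.a ships weak no-op stubs for some pthread functions, and if the strong definitions from libpthread.a are not pulled in, the stubs win silently. A minimal sketch of that mechanism (hypothetical names, not glibc's actual code):

#include <cstdio>

// Weak definition: stands in for the kind of no-op stub that
// libc.a provides for some pthread functions.
extern "C" __attribute__((weak)) int do_lock() {
  return 0;  // "succeeds" without actually locking anything
}

int main() {
  // With no strong definition of do_lock linked in, the weak
  // no-op above is used -- analogous to a pthread stub that
  // never really locks, which breaks multi-threaded code.
  std::printf("do_lock() returned %d\n", do_lock());
  return 0;
}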

Apart from Ubuntu 18.04, I could also reproduce the same behavior on Arch Linux with gcc 10.0.0. However, on Ubuntu 14.04 and 16.04, the static binaries could be created and executed without any errors.

Questions:

  • What changed in the build toolchain (between gcc 5.4.0 and gcc 7.4.0)? (Wild guess: I saw a pthread cleanup for C11 that falls into that time frame. Maybe that is the reason?)
  • Is it a regression, or is the old workaround no longer correct?
  • If it is not a regression, what should be done instead to allow static linking against pthread?

1 Reply

New workaround: -Wl,--whole-archive -lrt -lpthread -Wl,--no-whole-archive


As pointed out by Federico, adding -lrt prevents the crash. The issue is almost certainly related to librt, the POSIX Realtime Extensions library: its timing functions (e.g., clock_gettime, clock_nanosleep) are used by the thread implementation.
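
To see where these timing symbols are defined on a given system, one can inspect the static archive directly (the path below is Ubuntu's; the exact split between libc.a and librt.a varies with the glibc version):

$ nm /usr/lib/x86_64-linux-gnu/librt.a 2>/dev/null | grep ' T clock_'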

Between Ubuntu 16.04 and 18.04, glibc also changed how these functions are provided. I could not figure out all the details, but there are hints in comments in the glibc sources:

/* clock_nanosleep moved to libc in version 2.17; old binaries may expect the symbol version it had in librt. */

There is also a newer commit message:

commit 79a547b162657b3fa34d31917cc29f0e7af19e4c
Author: Adhemerval Zanella
Date: Tue Nov 5 19:59:36 2019 +0000

nptl: Move nanosleep implementation to libc

Checked on x86_64-linux-gnu and powerpc64le-linux-gnu. I also checked the libpthread.so .gnu.version_d entries for every ABI affected and all of them contains the required versions (including for architectures which exports __nanosleep with a different version).
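
On a system with a recent glibc, one can verify that such a symbol is now exported by libc itself (the path varies by distribution):

$ nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep -w clock_nanosleep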

To sum it up, the workaround is to add -lrt. Note that in some cases (not here), the ordering matters. From the gcc tests and some other discussions, I got the impression that linking librt before pthread causes fewer problems than linking it after. (In one example, only -lpthread -lrt -lpthread seemed to work, but it is not clear why.)
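
Applied to the thread_get_id.cpp example from the question, the full command becomes:

$ g++ thread_get_id.cpp -Wall -std=c++11 -O3 -static -pthread -Wl,--whole-archive -lrt -lpthread -Wl,--no-whole-archive
$ ./a.out 4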

