
cuda - Using GPU from a docker container?

I'm searching for a way to use the GPU from inside a docker container.

The container will execute arbitrary code, so I don't want to use privileged mode.

Any tips?

From previous research I understood that run -v and/or LXC cgroup was the way to go, but I'm not sure how to pull that off exactly.


1 Reply


Regan's answer is great, but it's a bit out of date: Docker dropped LXC as the default execution driver as of Docker 0.9, so the correct approach now is to avoid the LXC execution context entirely.

Instead, it's better to tell Docker about the NVIDIA devices via the --device flag and just use the native execution driver rather than LXC.
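
In outline, that just means one --device flag per NVIDIA device node when you start the container, along these lines (the full, tested command is further down; <your-image> is a placeholder):

$ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl <your-image> /bin/bash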

Environment

These instructions were tested on the following environment:

  • Ubuntu 14.04
  • CUDA 6.5
  • AWS GPU instance

Install the NVIDIA driver and CUDA on your host

See CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04 to get your host machine set up.
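
If you just want the rough shape of the host setup, a minimal sketch is below. Treat it as a sketch only: the runfile name and the -silent/-driver/-toolkit options are assumptions based on the CUDA 6.5 runfile installer of that era, so grab the actual installer and the authoritative steps from NVIDIA's download page or the linked post.

$ sudo apt-get update && sudo apt-get install -y build-essential linux-headers-$(uname -r)
# Download the CUDA 6.5 runfile installer from NVIDIA (filename below is an assumption):
$ chmod +x cuda_6.5.14_linux_64.run
$ sudo ./cuda_6.5.14_linux_64.run -silent -driver -toolkit
# Confirm the driver is loaded and can see the GPU:
$ nvidia-smi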

Install Docker

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker
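
Optionally, sanity-check the install; on Docker of this era, docker info reports an Execution Driver line, which should say native rather than lxc:

$ sudo docker version
$ sudo docker info | grep -i driver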

Find your NVIDIA devices

$ ls -la /dev | grep nvidia

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0 
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm
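
If /dev/nvidia-uvm is missing, it usually just hasn't been created yet. Assuming the nvidia-modprobe utility was installed along with the driver, it can create the device node:

$ sudo nvidia-modprobe -u -c=0
$ ls -la /dev | grep nvidia-uvm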

Run a Docker container with the NVIDIA driver pre-installed

I've created a Docker image that has the CUDA drivers pre-installed. The Dockerfile is available on Docker Hub if you want to know how this image was built.

You'll want to customize this command to match your NVIDIA devices. Here's what worked for me:

 $ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash
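
As a convenience (not part of the original answer), you can also build the --device flags from whatever /dev/nvidia* nodes exist on the host instead of typing them out:

$ DEVICES=$(ls /dev/nvidia* | xargs -I{} echo "--device {}:{}")
$ sudo docker run -ti $DEVICES tleyden5iwx/ubuntu-cuda /bin/bash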

Verify CUDA is correctly installed

This should be run from inside the docker container you just launched.

Install CUDA samples:

$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/

Build deviceQuery sample:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery   

If everything worked, you should see the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs =    1, Device0 = GRID K520
Result = PASS
