WSL 2 GPU Support for Docker Desktop on NVIDIA GPUs

It’s been a year since Ben wrote about NVIDIA support on Docker Desktop. At that time, it was necessary to take part in the Windows Insider Program, use beta CUDA drivers, and run a Docker Desktop tech preview build. Today, everything has changed:

  • On the operating system side, Windows 11 users can now enable their GPU without participating in the Windows Insider Program. Windows 10 users still need to register.
  • NVIDIA has released production CUDA drivers for WSL.
  • Finally, GPU support is integrated into Docker Desktop (since version 3.1, in fact).

NVIDIA uses the term “native” to describe the expected level of performance.

Where to find Docker images

The base Docker images are hosted at https://hub.docker.com/r/nvidia/cuda. The upstream project lives at https://gitlab.com/nvidia/container-images/cuda.

What the images contain

The nvidia-smi tool lets users query information about the accessible GPU devices.

$ docker run -it --gpus=all --rm nvidia/cuda:11.4.2-base-ubuntu20.04 nvidia-smi
Tue Dec  7 13:25:19 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.00       Driver Version: 510.06       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A    0C    P0    13W /  N/A |    132MiB /  4096MiB |     N/A      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
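
The banner line of nvidia-smi’s output can be scraped for the driver and CUDA versions. A minimal Python sketch against the sample output above:

```python
import re

# Banner line taken from the nvidia-smi sample output above
banner = "| NVIDIA-SMI 510.00       Driver Version: 510.06       CUDA Version: 11.6     |"

match = re.search(r"Driver Version: (\S+)\s+CUDA Version: (\S+)", banner)
driver, cuda = match.groups()
print(driver, cuda)  # → 510.06 11.6
```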

The dmon subcommand of nvidia-smi lets you monitor GPU parameters in real time:

$ docker exec -ti $(docker ps -ql) bash
root@7d3f4cbdeabb:/src# nvidia-smi dmon
# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W     C     C     %     %     %     %   MHz   MHz
    0    29    69     -     -     -     0     0  4996  1845
    0    30    69     -     -     -     0     0  4995  1844
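
Because dmon emits plain whitespace-separated columns, its output is easy to post-process. A small Python sketch, with column names taken from the header above (a "-" means the metric is not reported on this GPU):

```python
# Parse the dmon sample rows shown above into dictionaries
sample = """\
    0    29    69     -     -     -     0     0  4996  1845
    0    30    69     -     -     -     0     0  4995  1844
"""
columns = ["gpu", "pwr", "gtemp", "mtemp", "sm", "mem", "enc", "dec", "mclk", "pclk"]

rows = []
for line in sample.splitlines():
    values = line.split()
    # Map "-" (metric unavailable) to None, everything else to int
    rows.append({c: (None if v == "-" else int(v)) for c, v in zip(columns, values)})

print(rows[0]["pwr"], rows[1]["pclk"])  # → 29 1844
```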

The nbody utility is a CUDA sample that provides a benchmarking mode.

$ docker run -it --gpus=all --rm nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -benchmark
...
> 1 Devices used for simulation
GPU Device 0: "Turing" with compute capability 7.5

> Compute 7.5 CUDA device: [NVIDIA GeForce GTX 1650 Ti]
16384 bodies, total time for 10 iterations: 25.958 ms
= 103.410 billion interactions per second
= 2068.205 single-precision GFLOP/s at 20 flops per interaction
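
The reported figures are internally consistent: one n-body step computes n² pairwise interactions, and nbody counts 20 floating-point operations per interaction. Checking the arithmetic in Python:

```python
bodies = 16384
iterations = 10
total_time_s = 25.958e-3       # 25.958 ms from the benchmark output
flops_per_interaction = 20

interactions_per_s = bodies**2 * iterations / total_time_s
gflops = interactions_per_s * flops_per_interaction / 1e9

print(round(interactions_per_s / 1e9, 1))  # → 103.4 (report: 103.410 billion)
print(round(gflops, 1))                    # → 2068.2 (report: 2068.205)
```

The tiny discrepancy in the last digits comes from the rounding of the reported run time.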

A quick CPU comparison reveals a performance gap of several orders of magnitude; the GPU is roughly 2,000 times faster:

> Simulation with CPU
4096 bodies, total time for 10 iterations: 3221.642 ms
= 0.052 billion interactions per second
= 1.042 single-precision GFLOP/s at 20 flops per interaction
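
The rough 2,000× figure comes straight from dividing the two GFLOP/s numbers (the body counts differ between the runs, so GFLOP/s is the comparable metric):

```python
gpu_gflops = 2068.205  # from the GPU benchmark above
cpu_gflops = 1.042     # from the CPU simulation above

speedup = gpu_gflops / cpu_gflops
print(round(speedup))  # → 1985, i.e. roughly 2,000×
```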

What can you do with a paravirtualized GPU?

Run hash-cracking tools

Using a GPU is of course most useful when operations can be massively parallelized. That is the case for hash computation. dizcza has published nvidia-docker-based hashcat images on Docker Hub, and this image works out of the box on Docker Desktop!

$ docker run -it --gpus=all --rm dizcza/docker-hashcat //bin/bash
root@a6752716788d:~# hashcat -I
hashcat (v6.2.3) starting in backend information mode

clGetPlatformIDs(): CL_PLATFORM_NOT_FOUND_KHR

CUDA Info:
==========

CUDA.Version.: 11.6

Backend Device ID #1
  Name...........: NVIDIA GeForce GTX 1650 Ti
  Processor(s)...: 16
  Clock..........: 1485
  Memory.Total...: 4095 MB
  Memory.Free....: 3325 MB
  PCI.Addr.BDF...: 0000:01:00.0

From there, the standard hashcat benchmark can be run:

hashcat -b
...
Hashmode: 0 - MD5
Speed.#1.........: 11800.8 MH/s (90.34ms) @ Accel:64 Loops:1024 Thr:1024 Vec:1
Hashmode: 100 - SHA1
Speed.#1.........:  4021.7 MH/s (66.13ms) @ Accel:32 Loops:512 Thr:1024 Vec:1
Hashmode: 1400 - SHA2-256
Speed.#1.........:  1710.1 MH/s (77.89ms) @ Accel:8 Loops:1024 Thr:1024 Vec:1
...
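
To put those MH/s numbers in perspective, a naive single-threaded loop with Python’s hashlib manages only a few MH/s at best. A rough, illustrative sketch, not a rigorous benchmark:

```python
import hashlib
import time

data = b"password123"
n = 200_000

start = time.perf_counter()
for _ in range(n):
    hashlib.md5(data).digest()
elapsed = time.perf_counter() - start

print(f"{n / elapsed / 1e6:.2f} MH/s on one CPU core")
# versus ~11,800 MH/s reported by hashcat on the GPU above
```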

Draw fractals

The project at https://github.com/jameswmccarty/CUDA-Fractal-Flames uses CUDA to generate fractals. There are two steps to build and run on Linux. Let’s see if we can run it on Docker Desktop. A simple Dockerfile without anything fancy helps with that.

# syntax = docker/dockerfile:1.3-labs
FROM nvidia/cuda:11.4.2-base-ubuntu20.04
RUN apt -y update
RUN DEBIAN_FRONTEND=noninteractive apt -yq install git nano libtiff-dev cuda-toolkit-11-4
RUN git clone --depth 1 https://github.com/jameswmccarty/CUDA-Fractal-Flames /src
WORKDIR /src
RUN sed 's/4736/1024/' -i fractal_cuda.cu # Make the generated image smaller
RUN make

And then we can build and run:

$ docker build . -t cudafractal
$ docker run --gpus=all -ti --rm -v ${PWD}:/tmp/ cudafractal ./fractal -n 15 -c test.coeff -m -15 -M 15 -l -15 -L 15

Note that --gpus=all is available only for the docker run command; GPU-intensive steps cannot be executed during a docker build.

Here is an example image:

Machine learning

Really, looking at GPU usage without looking at machine learning would be a waste. The tensorflow:latest-gpu image can take advantage of the GPU in Docker Desktop. I will simply point you to Anca’s blog post from earlier this year, which describes a TensorFlow example and deploys it to the cloud: https://www.docker.com/blog/deploy-gpu-accelerated-applications-on-amazon-ecs-with-docker-compose/

Conclusion: What are the benefits for developers?

At Docker, we want to provide a turnkey solution for developers, so that their workflows run seamlessly.

DockerCon 2022

Join us at DockerCon 2022 on Tuesday, May 10th. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams building the next generation of modern applications. If you want to learn how to move from code to cloud quickly and how to solve your development challenges, DockerCon 2022 offers live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/
