Use GPU Jupyter Images

This section will guide you through creating a new notebook server using one of our GPU-enabled images.

What You’ll Need

Procedure

  1. Create a new notebook server.

    1. Use the following Kale Docker image:

      gcr.io/arrikto/jupyter-kale-gpu-py38:<IMAGE_TAG>

      Note

      The <IMAGE_TAG> varies based on the MiniKF or EKF release.

      This image comes with the CUDA toolkit pre-installed.

    2. Enter the number of GPU devices you need in the GPUs section of the Jupyter Web App:

      ../../../_images/gpu1.png
  1. Create a new notebook server.

    1. Use the following Kale Docker image:

      gcr.io/arrikto/jupyter-kale-gpu-tf-py38:<IMAGE_TAG>

      Note

      The <IMAGE_TAG> varies based on the MiniKF or EKF release.

      This image uses the generic GPU image as its base and adds the following libraries:

      • cuDNN
      • cuBLAS
      • cuFFT
      • cuSPARSE
      • cuRAND
      • cuSOLVER
      • NVRTC
    2. Enter the number of GPU devices you need in the GPUs section of the Jupyter Web App:

      ../../../_images/gpu1.png
  1. Create a new notebook server.

    1. Use any of the Jupyter Kale images. For example:

      gcr.io/arrikto/jupyter-kale-py38:<IMAGE_TAG>

      Note

      The <IMAGE_TAG> varies based on the MiniKF or EKF release.

    2. Enter the number of GPU devices you need in the GPUs section of the Jupyter Web App:

      ../../../_images/gpu1.png
  2. Start a new terminal inside the notebook server.

  3. Install the CUDA version of PyTorch:

    jovyan@gpu-0:~$ pip3 install torch==1.10.0+cu113 \
        -f https://download.pytorch.org/whl/cu113/torch_stable.html

    PyTorch bundles all the CUDA libraries it needs as part of its PyPI package, so no system-level CUDA installation is required.

    Note

    Please head to the PyTorch website for the latest releases.
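After the installation finishes, you can quickly confirm from Python that pip pulled in a CUDA-enabled build rather than a CPU-only one. The following is a minimal sketch; `torch.version.cuda` reports the CUDA version the wheel was compiled against (for example `11.3` for the `+cu113` wheel) and is `None` for CPU-only builds:

```python
import torch

# Report the installed PyTorch build and the CUDA version it was compiled against.
print("PyTorch version:", torch.__version__)
print("CUDA build:", torch.version.cuda)

# True only when a CUDA-enabled build finds a visible GPU device.
print("GPU available:", torch.cuda.is_available())
```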

Verify

  1. Start a new terminal inside your notebook server.

  2. Verify that the notebook can consume the GPU devices you requested:

    jovyan@gpu-0:~$ nvidia-smi
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 460.73.01    Driver Version: 460.73.01    CUDA Version: 11.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
    | N/A   33C    P8    26W / 149W |      0MiB / 11441MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+

    You should get an output similar to the above.
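If you prefer a machine-readable check, `nvidia-smi` also supports query flags that emit CSV instead of the full table. The sketch below (a suggestion, not part of the official procedure) guards against the tool being absent, as on a CPU-only node:

```python
import shutil
import subprocess

# nvidia-smi's --query-gpu flag emits one CSV line per visible GPU;
# guard with shutil.which() in case no NVIDIA driver is installed.
if shutil.which("nvidia-smi"):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
else:
    print("nvidia-smi not found; no NVIDIA driver on this node")
```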

  1. Start a new notebook (IPYNB file) from inside your notebook server.

  2. Verify that TensorFlow can consume the GPU devices:

    import tensorflow as tf

    tf.config.list_physical_devices('GPU')

    You should see an output similar to:

    [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
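Beyond listing devices, you can confirm that TensorFlow actually executes work on the GPU. A minimal sketch, which falls back to the CPU when no GPU is visible so it also runs on CPU-only nodes:

```python
import tensorflow as tf

# An empty list means TensorFlow will fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

# Pin a small matmul to the first GPU when one is present.
device = '/GPU:0' if gpus else '/CPU:0'
with tf.device(device):
    a = tf.random.uniform((256, 256))
    b = tf.random.uniform((256, 256))
    c = tf.matmul(a, b)

# c.device reports where the op actually ran.
print("Computed on:", c.device)
```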
  1. Start a new notebook (IPYNB file) from inside your notebook server.

  2. Verify that PyTorch can consume the GPU devices:

    import torch

    torch.cuda.is_available()

    This should return True.
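To go one step further than the availability check, you can place a tensor on the GPU and run a small computation there. A minimal sketch that falls back to the CPU when no GPU is visible:

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Allocate a tensor on that device and run a reduction on it.
x = torch.ones(1000, device=device)
print("Device:", x.device, "| sum:", x.sum().item())
```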

Summary

You have successfully launched a notebook server using one of our GPU-enabled images.

What’s Next

Check out the rest of the Kale user guides.