
Using GPUs with IDEs

Conveyor IDEs can run on instances that have GPUs.

Once an IDE is running on a GPU-enabled node, you can check the status of the GPU using the nvidia-smi command. For example:

conveyor:~/workspace$ nvidia-smi
Mon Sep 22 13:34:45 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.181                Driver Version: 570.181        CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       Off |   00000000:00:1E.0 Off |                    0 |
| N/A   28C    P8             15W /   70W |       0MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

The output shows the detected GPU, its memory usage, and the current GPU utilization.
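
If you want these values in a script instead of the full table, nvidia-smi also has a query mode. The sketch below reads the GPU name, utilization, and memory from Python via the --query-gpu flags (a minimal example; the exact fields available depend on your driver version):

import subprocess

# Ask nvidia-smi for a compact CSV line instead of the full table.
result = subprocess.run(
    [
        "nvidia-smi",
        "--query-gpu=name,utilization.gpu,memory.used,memory.total",
        "--format=csv,noheader",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. Tesla T4, 0 %, 0 MiB, 15360 MiB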

To start using the GPU, you can run a PyTorch test program. First, install the PyTorch dependencies in the terminal:

pip install torch torchaudio torchvision
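
Once the installation finishes, it is worth confirming that PyTorch can actually see the GPU before running anything larger. A quick check (the reported device name will depend on the instance type):

import torch

# Verify that PyTorch detects the CUDA device provided by the instance.
print(torch.cuda.is_available())           # True when a GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. Tesla T4
    print(torch.version.cuda)              # CUDA version PyTorch was built with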

Next, download the MNIST example from the PyTorch examples repository:

curl https://raw.githubusercontent.com/pytorch/examples/refs/heads/main/mnist/main.py -o main.py

Then run it:

python main.py
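
The example picks the GPU up automatically when one is available. The pattern it relies on is the standard PyTorch device-selection idiom, sketched below in simplified form (this is not the exact code from main.py):

import torch
import torch.nn as nn

# Fall back to the CPU when no CUDA device is visible.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(784, 10).to(device)     # move the model parameters to the GPU
batch = torch.randn(64, 784).to(device)   # move the input batch to the same device
output = model(batch)                     # this forward pass now runs on the GPU
print(output.shape, output.device)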

While the example is running, you can check the GPU utilization with the nvidia-smi command in a second terminal. To have it refresh automatically, use the watch command:

watch nvidia-smi
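
If you prefer to monitor the GPU from inside Python rather than with watch, PyTorch exposes basic memory counters. A minimal sketch (all figures are in bytes):

import torch

# Memory held by this process's tensors and by PyTorch's caching allocator.
allocated = torch.cuda.memory_allocated(0)
reserved = torch.cuda.memory_reserved(0)

# Device-wide free and total memory, as also reported by nvidia-smi.
free, total = torch.cuda.mem_get_info(0)

print(f"allocated: {allocated / 1e6:.1f} MB, reserved: {reserved / 1e6:.1f} MB")
print(f"free/total on device: {free / 1e6:.1f} / {total / 1e6:.1f} MB")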