AssertionError: Torch not compiled with CUDA enabled – Code Example


In this article we will look at code solutions for the PyTorch error "AssertionError: Torch not compiled with CUDA enabled".

Why does this error occur?

CUDA is a toolkit that lets the GPU take over parts of an application to improve performance. To use it, you need a CUDA-capable Nvidia GPU installed in your system, and your PyTorch build must also support GPU acceleration.

This AssertionError occurs when we try to use CUDA with a PyTorch build that is CPU-only. So, you have two options to resolve this error –

  1. Use a PyTorch version that is compatible with CUDA. Download the right stable version from here.
  2. Disable CUDA in your code. This can be tricky because you might not be using CUDA directly, but one of the libraries in your project may be, so you will need to track it down.

Code Example

Error Code – Let’s first reproduce the error –

1. "cuda" passed as the device parameter

import torch

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
print(my_tensor)

The above code throws the error – AssertionError: Torch not compiled with CUDA enabled. Here is the complete output –

Traceback (most recent call last):
  File "C:/Users/aka/project/test.py", line 3, in <module>
    my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
  File "C:\Users\aka\anaconda3\envs\deeplearning\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

This is because we set device="cuda". If we change it to device="cpu", the error disappears.
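A common device-agnostic pattern is to fall back to the CPU when CUDA is unavailable, so the same script runs on both builds. A minimal sketch:

```python
import torch

# Pick the GPU only when this PyTorch build (and machine) actually supports CUDA;
# otherwise fall back to the CPU instead of raising the AssertionError
device = "cuda" if torch.cuda.is_available() else "cpu"

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device=device)
print(my_tensor.device)
```

On a CPU-only build this prints `cpu` and no error is raised.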


2. A dependency using a PyTorch function with CUDA enabled

Many PyTorch functions can copy data to CUDA memory for faster performance. These options are generally disabled by default, but a dependency of your project could be enabling them. In that case, you need to find that dependency and disable the option there.

For example, the torch.utils.data.DataLoader class has a pin_memory parameter which, according to the PyTorch documentation –

pin_memory (bool, optional) – If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them. 

If a function uses this class with pin_memory=True on a CPU-only build, we can get the "Torch not compiled with CUDA enabled" error.
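One way to keep such code portable is to tie pin_memory to CUDA availability rather than hard-coding True. A short sketch using a toy dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A tiny in-memory dataset just for illustration
dataset = TensorDataset(torch.arange(10, dtype=torch.float32))

# Only request pinned (page-locked) memory when CUDA is actually usable;
# on a CPU-only build this evaluates to pin_memory=False
loader = DataLoader(dataset, batch_size=4, pin_memory=torch.cuda.is_available())

for (batch,) in loader:
    print(batch)
```

The same guard works for any CUDA-related flag a dependency exposes.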


Solutions

1. Check the PyTorch version

First of all, check whether you have installed the right version. PyTorch is available with or without CUDA support.

PyTorch versions with and without CUDA

2. Check if CUDA is available in the installed PyTorch

Use this code to check whether CUDA is available in your installed PyTorch –

import torch

print(torch.cuda.is_available())
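A slightly fuller check can also tell you whether the installed build ships with CUDA at all, independent of whether a GPU is present:

```python
import torch

# torch.version.cuda is None on CPU-only builds,
# and a version string such as "11.8" on CUDA builds
print(torch.version.cuda)

# True only if the build has CUDA support *and* a usable GPU/driver is present
print(torch.cuda.is_available())

if torch.cuda.is_available():
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))
```

If torch.version.cuda prints None, no amount of code changes will enable the GPU – you need to reinstall a CUDA-enabled build.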

3. Create a new project environment

A lot of troubleshooting and bug fixing can leave a project environment in a broken state. Try creating a fresh environment and see if that resolves your CUDA error.

4. Using the .cuda() function

Some PyTorch objects can be moved to the GPU by calling .cuda() on them. For example, an nn.Sequential model can be run on CUDA. Append or remove the call according to your use case –

import torch.nn as nn
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1, 20, 5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20, 64, 5)),
          ('relu2', nn.ReLU())
        ])).cuda()
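On a CPU-only build the .cuda() call above raises the AssertionError. A device-agnostic variant uses .to(device) instead, so the same model definition (same layer sizes as above, with an assumed 28×28 input just for illustration) runs on either build:

```python
import torch
import torch.nn as nn
from collections import OrderedDict

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
])).to(device)  # .to("cpu") is a no-op; .to("cuda") moves the parameters to the GPU

# Inputs must live on the same device as the model
x = torch.randn(1, 1, 28, 28, device=device)
out = model(x)
print(out.shape)  # two 5x5 convs shrink 28 -> 24 -> 20 spatially
```

Using .to(device) everywhere, instead of sprinkling .cuda() calls, is the easiest way to keep a project free of this error.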

5. Provide the correct device parameter

If a function expects a device parameter, pass "cuda" or "cpu" according to your use case –

import torch

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cpu")

print(my_tensor)