FS#79019 - [python-pytorch] ROCm no longer available after 5.6.0 rebuild

Attached to Project: Arch Linux
Opened by hexchain (hexchain) - Sunday, 09 July 2023, 03:07 GMT
Last edited by Torsten Keßler (tpkessler) - Tuesday, 11 July 2023, 07:27 GMT
Task Type: Bug Report
Category: Packages: Testing
Status: Closed
Assigned To: Sven-Hendrik Haase (Svenstaro)
	Konstantin Gizdov (kgizdov)
	Torsten Keßler (tpkessler)
Architecture: All
Severity: Low
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 0
Private: No


The python-pytorch{,-opt}-rocm packages appear to have lost ROCm support after the rebuild against ROCm 5.6.0.

Additional info:

python-pytorch-rocm 2.0.1-6
python-pytorch-opt-rocm 2.0.1-6

Steps to reproduce:

In an IPython shell:

In [1]: import torch

In [2]: d = torch.device('cuda')

In [3]: torch.tensor([[False, False]], device=d)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[3], line 1
----> 1 torch.tensor([[False, False]], device=d)

File /usr/lib/python3.11/site-packages/torch/cuda/__init__.py:239, in _lazy_init()
    235     raise RuntimeError(
    236         "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
    237         "multiprocessing, you must use the 'spawn' start method")
    238 if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 239     raise AssertionError("Torch not compiled with CUDA enabled")
    240 if _cudart is None:
    241     raise AssertionError(
    242         "libcudart functions unavailable. It looks like you have a broken build?")

AssertionError: Torch not compiled with CUDA enabled
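For reference, the condition the traceback trips on can be checked without creating a tensor: a ROCm build of PyTorch reuses the `torch.cuda` namespace for the HIP backend, and `torch.version.hip` is a version string on such builds (and `None` on CPU-only or CUDA builds). A minimal diagnostic sketch (the helper name `rocm_build_info` is made up for illustration):

```python
def rocm_build_info():
    """Return the ROCm/HIP version string the installed torch was built
    against, or None if torch is missing or was built without ROCm."""
    try:
        import torch
    except ImportError:
        return None  # torch is not installed at all
    # On ROCm builds, torch.version.hip is a string like "5.6.x";
    # on CPU-only or CUDA builds it is None.
    return getattr(torch.version, "hip", None)


if __name__ == "__main__":
    hip = rocm_build_info()
    if hip is None:
        print("No ROCm support in this torch build (or torch not installed)")
    else:
        print("ROCm/HIP build:", hip)
```

On the broken 2.0.1-6 rebuild this would report no ROCm support, which is consistent with the `AssertionError` above.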

Closed by: Torsten Keßler (tpkessler)
Tuesday, 11 July 2023, 07:27 GMT
Reason for closing: Fixed
Comment by Torsten Keßler (tpkessler) - Sunday, 09 July 2023, 09:51 GMT
I fixed ROCm detection locally. The updated package will be added to [extra-testing] later today.
Comment by Torsten Keßler (tpkessler) - Sunday, 09 July 2023, 16:47 GMT
Can you confirm that this is fixed with python-pytorch{,-opt}-rocm 2.0.1-7?