FS#62916 - [python-tensorflow-opt-cuda] is missing "opt" and "cuda" functionality

Attached to Project: Community Packages
Opened by John S. Smith (potatoe) - Monday, 17 June 2019, 05:34 GMT
Last edited by Sven-Hendrik Haase (Svenstaro) - Wednesday, 19 June 2019, 04:17 GMT
Task Type Bug Report
Category Packages
Status Closed
Assigned To Sven-Hendrik Haase (Svenstaro)
Architecture All
Severity Medium
Priority Normal
Reported Version
Due in Version Undecided
Due Date Undecided
Percent Complete 100%
Votes 1
Private No

Details

Description:
python-tensorflow-opt-cuda-1.14.0rc1-1 doesn't seem to have CPU optimizations or CUDA GPU support built in. At runtime it logs an info message saying CPU optimizations were not compiled in, reports that it was not built with CUDA support, and can't find any GPUs. Downgrading to python-tensorflow-opt-cuda-1.13.1-5 resolves the problems.

I don't entirely follow the PKGBUILD, but does it place each variant's Python wheel in its own directory (e.g. "${srcdir}"/tmpcudaopt for "-opt-cuda"), while a hardcoded find command in _python_package() always locates the wheel under the plain, non-opt, non-CUDA "${srcdir}"/tmp? I also notice that the latest releases of all the python-tensorflow-* packages are identical in size, so the problem may be in the packaging step.
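The suspected failure mode can be sketched as a minimal shell script (the directory names and the _python_package helper follow the report's description; the actual PKGBUILD may differ):

```shell
#!/bin/sh
# Hypothetical reconstruction of the suspected packaging bug, not the
# actual PKGBUILD. Each variant's wheel is staged in its own directory,
# but a shared helper searches a hardcoded path.
srcdir=$(mktemp -d)

# Wheels staged per variant, as the report suggests:
mkdir -p "$srcdir/tmp" "$srcdir/tmpcudaopt"
touch "$srcdir/tmp/tensorflow-plain.whl"
touch "$srcdir/tmpcudaopt/tensorflow-cudaopt.whl"

# Bug: the helper always looks under "$srcdir"/tmp, so every
# python-tensorflow-* variant would package the plain wheel.
_python_package() {
  find "$srcdir/tmp" -name '*.whl'
}

wheel=$(_python_package)
echo "$wheel"   # always the wheel from tmp/, even when packaging -opt-cuda
rm -rf "$srcdir"
```

This would also explain why all variant packages end up the same size: they all contain the same (plain) wheel.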


Additional info:
* python-tensorflow-opt-cuda-1.14.0rc1-1

Steps to reproduce:
$ python
Python 3.7.3 (default, Mar 26 2019, 21:43:19)
[GCC 8.2.1 20181127] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
2019-06-16 23:18:21.226916: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-06-16 23:18:21.254531: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3400735000 Hz
2019-06-16 23:18:21.255252: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c65c5a8540 executing computations on platform Host. Devices:
2019-06-16 23:18:21.255281: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
False
>>> tf.test.is_built_with_cuda()
False

Closed by Sven-Hendrik Haase (Svenstaro)
Wednesday, 19 June 2019, 04:17 GMT
Reason for closing:  Fixed
Comment by Jingbei Li (Petron) - Monday, 17 June 2019, 10:33 GMT
(Just subscribing to this bug ...)
(Oh, found the watching button ... sorry ...)
Comment by Sven-Hendrik Haase (Svenstaro) - Monday, 17 June 2019, 10:57 GMT
Well, I fucked up. Rebuilding now.
