FS#62916 - [python-tensorflow-opt-cuda] is missing "opt" and "cuda" functionality
Attached to Project:
Community Packages
Opened by John S. Smith (potatoe) - Monday, 17 June 2019, 05:34 GMT
Last edited by Sven-Hendrik Haase (Svenstaro) - Wednesday, 19 June 2019, 04:17 GMT
Details
Description:
python-tensorflow-opt-cuda-1.14.0rc1-1 does not appear to have CPU optimizations or CUDA GPU support built in. At runtime it logs an info message that CPU optimizations were not compiled in, reports that it was not built with CUDA support, and cannot find any GPUs. Downgrading to python-tensorflow-opt-cuda-1.13.1-5 resolves the problem.

I don't entirely follow the PKGBUILD, but it looks like it places each variant's Python wheel in its own directory (e.g. "${srcdir}"/tmpcudaopt for the -opt-cuda variant), yet the hardcoded find command in _python_package() always locates a wheel under the non-opt, non-CUDA "${srcdir}"/tmp. I also notice that the latest release of all the python-tensorflow-* packages are the same size as one another, so the problem may be in the packaging step.

Additional info:
* python-tensorflow-opt-cuda-1.14.0rc1-1

Steps to reproduce:

    $ python
    Python 3.7.3 (default, Mar 26 2019, 21:43:19)
    [GCC 8.2.1 20181127] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    >>> tf.test.is_gpu_available()
    2019-06-16 23:18:21.226916: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    2019-06-16 23:18:21.254531: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3400735000 Hz
    2019-06-16 23:18:21.255252: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c65c5a8540 executing computations on platform Host. Devices:
    2019-06-16 23:18:21.255281: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
    False
    >>> tf.test.is_built_with_cuda()
    False
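If the hardcoded path in _python_package() is indeed the cause, the fix would be to pass each variant's staging directory into the function instead of always searching "${srcdir}"/tmp. Below is a minimal sketch of that idea; the staging-directory names (tmp, tmpcuda, tmpopt, tmpcudaopt), the wheel filenames, and the function shape are assumptions based on the description above, not the actual PKGBUILD:

```shell
#!/bin/bash
# Sketch: each build variant stages its wheel in its own directory, and
# _python_package() receives that directory as an argument instead of
# hardcoding "${srcdir}"/tmp (which would always pick up the plain wheel).
set -euo pipefail

srcdir=$(mktemp -d)

# Simulate four build variants, each producing a wheel in its own staging dir.
for variant in tmp tmpcuda tmpopt tmpcudaopt; do
  mkdir -p "${srcdir}/${variant}"
  touch "${srcdir}/${variant}/tensorflow-1.14.0rc1-cp37-${variant}.whl"
done

# Parameterized: locate the wheel under the staging dir we are told to use.
_python_package() {
  local stagedir="$1"
  find "${stagedir}" -name '*.whl' -print -quit
}

# Each variant's package function passes its own staging directory,
# so the -opt-cuda package picks up the -opt-cuda wheel.
_python_package "${srcdir}/tmpcudaopt"
```

With this shape, the four package_python-tensorflow*() functions would each call _python_package with their own directory, and the four packages could no longer silently end up repackaging the same non-opt, non-CUDA wheel.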
This task depends upon
Closed by Sven-Hendrik Haase (Svenstaro)
Wednesday, 19 June 2019, 04:17 GMT
Reason for closing: Fixed
(Oh, have found the watching button ... sry ...)