
blocksparse_ops.so: undefined symbol: __cudaPushCallConfiguration

Open · HaonanJi opened this issue on Jun 22 '20 · 2 comments

Traceback (most recent call last):
  File "example.py", line 1, in <module>
    from blocksparse.matmul import BlocksparseMatMul
  File "/home/shs/tensorflow/lib/python3.6/site-packages/blocksparse/__init__.py", line 3, in <module>
    from blocksparse.utils import (
  File "/home/shs/tensorflow/lib/python3.6/site-packages/blocksparse/utils.py", line 17, in <module>
    _op_module = tf.load_op_library(os.path.join(data_files_path, 'blocksparse_ops.so'))
  File "/home/shs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /home/shs/tensorflow/lib/python3.6/site-packages/blocksparse/blocksparse_ops.so: undefined symbol: __cudaPushCallConfiguration

My installation environment: Ubuntu 18.04, Python 3.6, CUDA 10.0, TensorFlow 1.13.1. Has anyone installed this successfully? I would like to know your installation environment.
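An undefined __cudaPushCallConfiguration symbol typically indicates a CUDA version mismatch between the prebuilt blocksparse_ops.so and the CUDA runtime/TensorFlow build it is loaded into. A small diagnostic sketch (the .so path is taken from the traceback above; adjust it for your own environment):

# Diagnostic sketch: print the TensorFlow build and list which CUDA libraries
# the prebuilt op library resolves to. The .so path comes from the traceback above.
import subprocess
import tensorflow as tf

so_path = "/home/shs/tensorflow/lib/python3.6/site-packages/blocksparse/blocksparse_ops.so"

print("TensorFlow:", tf.__version__)
# ldd lists the shared libraries (libcudart, libcuda, ...) the op library links against
print(subprocess.check_output(["ldd", so_path]).decode())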

HaonanJi · Jun 22 '20 09:06

Same issue with Python 3.6, CUDA 9.0, TF 1.12.0. Any updates?

SHUMKASHUN · Sep 28 '20 15:09

I was able to install this successfully using the following Dockerfile:

FROM tensorflow/tensorflow:1.15.2-gpu-py3
RUN pip install --upgrade pip
RUN pip install tensorflow-gpu==1.13.1
RUN pip install blocksparse
RUN apt-get update && apt-get install -y git

RUN pip install numpy

ENV LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH}"

You will need to run this on a machine with GPU support. I do something like the following:

  1. Build the image:
$ docker image build -f Dockerfile --rm -t blocksparse:local-test .
  2. Start a Docker container with an interactive terminal:
$ docker run -it --gpus all --privileged -w /working_dir -v ${PWD}:/working_dir --rm blocksparse:local-test
  3. Clone a repo and test things out (inside the Docker container):
# git clone https://github.com/openai/sparse_attention.git
# cd sparse_attention
# python attention.py
  4. The last command should print out a bunch of comparisons of different attention computations, like the following (your values will probably be different):
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.32885888 -0.46474323 -0.18094796 ... -0.7302598   1.7354789
   1.9019647 ]
 [-0.27032933 -0.73024327 -0.7691894  ... -0.39576375  1.5935584
   1.0116633 ]
 ...
 [ 0.02976435 -0.11742727  0.01418951 ... -0.10638023  0.07174537
   0.03092784]
 [ 0.01779505 -0.06010491  0.02491665 ...  0.03946434  0.01072227
   0.00312277]
 [ 0.0350467  -0.04547511  0.12625584 ... -0.05710992 -0.01609943
   0.01978281]]
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.3282577  -0.46437025 -0.18160973 ... -0.7317322   1.7389911
   1.9060333 ]
 [-0.2696323  -0.7283177  -0.76687837 ... -0.3962904   1.5957208
   1.0126692 ]
 ...
 [ 0.02980766 -0.11718501  0.01445344 ... -0.10646665  0.07180683
   0.03117247]
 [ 0.017544   -0.0603593   0.02475612 ...  0.03959378  0.01022455
   0.00346022]
 [ 0.03462171 -0.04566976  0.12611446 ... -0.05693993 -0.01591101
   0.0195502 ]]
-----
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.32885888 -0.46474323 -0.18094796 ... -0.7302598   1.7354789
   1.9019647 ]
 [-0.27032933 -0.73024327 -0.7691894  ... -0.39576375  1.5935584
   1.0116633 ]
 ...
 [ 0.39040905 -0.5011199   0.2583578  ... -0.24182324 -0.14207074
   0.15471636]
 [ 0.14268962 -0.41821516  0.4971423  ... -0.11212976 -0.24807553
   0.30565676]
 [ 0.38659394 -0.55715287  0.13594493 ... -0.02801949 -0.4929833
  -0.21489574]]
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.3282577  -0.46437025 -0.18160973 ... -0.7317322   1.7389911
   1.9060333 ]
 [-0.2696323  -0.7283177  -0.76687837 ... -0.3962904   1.5957208
   1.0126692 ]
 ...
 [ 0.3904302  -0.5008209   0.25895226 ... -0.24194057 -0.14191052
   0.15578455]
 [ 0.14293717 -0.41770864  0.49594995 ... -0.11204463 -0.2478415
   0.30615327]
 [ 0.38669968 -0.5576693   0.13532233 ... -0.02754044 -0.4933645
  -0.21508685]]
-----
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.3043723  -0.14633137  0.42354402 ... -0.34101373  0.70098054
  -1.0816377 ]
 [ 0.5673642   0.88623375  1.5387738  ...  0.19378656  1.5196654
  -0.1626653 ]
 ...
 [ 0.04942241 -0.02222104  0.40979457 ...  0.4023287   0.61972946
  -0.01886918]
 [-0.37272486  0.3828089  -0.21167774 ...  0.05512205 -0.12143688
   0.4487638 ]
 [ 0.19268154 -0.6230093   0.86368597 ... -0.4408719  -0.13862456
  -0.2680474 ]]
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.3043723  -0.14633137  0.42354402 ... -0.34101373  0.70098054
  -1.0816377 ]
 [ 0.5673642   0.88623375  1.5387738  ...  0.19378656  1.5196654
  -0.1626653 ]
 ...
 [ 0.05058741 -0.023841    0.40939885 ...  0.4017997   0.6197964
  -0.01840959]
 [-0.37163514  0.38091698 -0.2116487  ...  0.05345442 -0.12248411
   0.4477802 ]
 [ 0.1945247  -0.6238745   0.8666851  ... -0.43898395 -0.13815913
  -0.2674556 ]]
-----
[[-0.3679951  -0.9736524  -1.1470914  ... -0.7502491   1.7886044
   2.0551844 ]
 [-0.3282577  -0.46437025 -0.18160973 ... -0.7317322   1.7389911
   1.9060333 ]
 [-0.2696323  -0.7283177  -0.76687837 ... -0.3962904   1.5957208
   1.0126692 ]
 ...
 [ 0.08744653 -0.18383025  0.10990164 ... -0.06614267  0.20240559
   0.03253727]
 [ 0.09201825 -0.05140796  0.07223736 ... -0.16409338  0.14083761
   0.08396164]
 [ 0.08996055 -0.13643157  0.1574355  ... -0.0945722  -0.05276572
   0.13679293]]
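For anyone who just wants to confirm that the extension loads at all after installation, a minimal BlocksparseMatMul smoke test, adapted from the blocksparse README (the sizes here are arbitrary), looks roughly like this:

# Minimal block-sparse matmul smoke test, adapted from the blocksparse README.
# Sizes are arbitrary; this only checks that the CUDA kernels load and run.
import numpy as np
import tensorflow as tf
from blocksparse.matmul import BlocksparseMatMul

hidden_size = 4096
block_size  = 32
minibatch   = 64

# Random block-level sparsity pattern (1 = block present, 0 = block pruned)
sparsity = np.random.randint(2, size=(hidden_size // block_size,
                                      hidden_size // block_size))

bsmm = BlocksparseMatMul(sparsity, block_size=block_size)

x = tf.placeholder(tf.float32, shape=[None, hidden_size])
w = tf.get_variable("w", bsmm.w_shape, dtype=tf.float32)
y = bsmm(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((minibatch, hidden_size), dtype=np.float32)})
    print(out.shape)  # expected: (64, 4096)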

jlozano · Dec 30 '20 06:12