Geza Velkey
No, I tried it and found that some CUDA ops are not compatible. As a quick solution, I merged different versions of TensorFlow to get the functionality I need.
Hi, after compiling both tensorflow_fold and tensorflow from source, it is working fine with TF version 1.3. It's quite tricky to get it to work, because the tensorflow version which...
Try exporting the variable, replacing "/usr/bin/python3" with your own Python path: `export PYTHON_BIN_PATH="/usr/bin/python3"`
The newest version (with TF 1.4) fails with the following error: `tensorflow.python.framework.errors_impl.NotFoundError: ***/python3env/lib/python3.5/site-packages/tensorflow_fold/loom/_deserializing_weaver_op.so: undefined symbol: _ZN10tensorflow10DEVICE_CPUE`. The missing symbol demangles to `tensorflow::DEVICE_CPU`, so the op library cannot resolve TensorFlow core symbols, which usually points to an ABI/version mismatch between the built `.so` and the installed TensorFlow. Built from source with Bazel 6.1 with fully recursive cloning (tried Bazel 5.4 too).
Run the container with `docker run -it --gpus all` (Docker 19.03+ with the NVIDIA Container Toolkit) or `docker run -it --runtime=nvidia` (older nvidia-docker2 setups).
Hi @chrischoy, I've managed to put together a minimal repro (it reproduces on both an RTX 2070 8GB and a Tesla V100 16GB):

```python
import gc
import traceback
import MinkowskiEngine as ME
import...
```
I think this is further complicated by the fact that `active.large_pool.current` is a counter reported by PyTorch's built-in allocator statistics (`torch.cuda.memory_stats()`), and it shows that there are some objects in the large pool. However, there are no variables...
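To make the mismatch concrete, here is a minimal sketch (standard PyTorch APIs only, not the author's exact code) of how one can compare the allocator's view with what the Python garbage collector can still reach; the situation described above is exactly when the first number is non-zero while the second list is empty:

```python
import gc
import torch

def reachable_cuda_tensors():
    """CUDA tensors still reachable through the Python garbage collector."""
    found = []
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and obj.is_cuda:
                found.append(obj)
        except Exception:
            continue  # some tracked objects raise on attribute access
    return found

stats = torch.cuda.memory_stats()
print("allocator: active large-pool blocks =", stats["active.large_pool.current"])
print("gc:        reachable CUDA tensors   =", len(reachable_cuda_tensors()))
```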
Reproduced the issue with the code snippet above using the latest PyTorch (v1.9.0-rc3), ME master, and CUDA 11.2:

```
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 11.2,...
```
@suyunzzz this issue hasn't been solved yet. What I did is a wrapper which checks `active.large_pool.current` and, if the pool count grows (which signals a memory leak), I save all...
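For anyone wanting to replicate that workaround, here is a minimal sketch of such a wrapper, assuming the leak manifests as growth of the large-pool block count across a call; `leak_guard` and the dump step are illustrative, not part of MinkowskiEngine or the exact code described above:

```python
import functools
import torch

def leak_guard(fn):
    """Illustrative wrapper: compare the allocator's large-pool block
    count before and after a call and report growth as a possible leak."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        before = torch.cuda.memory_stats().get("active.large_pool.current", 0)
        result = fn(*args, **kwargs)
        after = torch.cuda.memory_stats().get("active.large_pool.current", 0)
        if after > before:
            print(f"{fn.__name__}: active.large_pool.current grew "
                  f"{before} -> {after}, possible leak")
            # e.g. dump the allocator state here for later inspection
            print(torch.cuda.memory_summary(abbreviated=True))
        return result
    return wrapped
```

Applied as a decorator on the suspect call (e.g. the forward pass), this flags the first iteration where the pool grows instead of waiting for an OOM.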