Bohumír Zámečník
Should I also build 1.13?
Hi @bmartinn! Thanks for the reply. Ahh, so a background process for reporting would likely explain that. Setting `sdk.development.report_use_subprocess: false` avoids the problem. Thanks! Anyway I'm working on transitioning our...
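For reference, a minimal sketch of where that flag would live in `~/clearml.conf`, assuming the usual `sdk { development { ... } }` nesting of the default config file:

```
sdk {
    development {
        # report metrics/artifacts in-process instead of via a background subprocess
        report_use_subprocess: false
    }
}
```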
Yes, it was easy and consistent. You can use the script above with the dependencies stated there, possibly within a TensorFlow Docker image or something. Good luck with fixing the...
Now I've realized I likely posted this issue in the wrong project: it should be clearml (the client), not clearml-agent.
Problems:
- `DaskStream` is missing a specific `flatten()`, so the core `Stream.flatten()` is used. It produces core streams instead of dask streams.
- Even if we attach `flatten` to `DaskStream` (same way...
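For context, here is a minimal sketch of the attachment attempt referenced in the second point, assuming streamz's `register_api()` mechanism (the same way the core nodes are registered). It is illustrative only: elements flowing through a `DaskStream` are Dask futures rather than concrete collections, so naively reusing the core logic iterates over a future instead of its contents.

```python
from streamz.dask import DaskStream

@DaskStream.register_api()
class flatten(DaskStream):
    """Hypothetical dask-aware flatten node (sketch, not a working fix)."""
    def update(self, x, who=None, metadata=None):
        # x is a Dask future wrapping a collection, not the collection itself,
        # so this loop does not behave like the core Stream.flatten()
        results = []
        for item in x:
            results.append(self._emit(item, metadata=metadata))
        return results
```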
As said in https://github.com/Kozea/WeasyPrint/issues/79#issuecomment-28835056:

```
$ export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib
```

Works with MacPorts and Python 3.4. Can be put into `~/.profile`.
Yes, I observe similar behavior. For a single GPU (GTX 1070) the time per epoch converges to 4 s, for 2 GPUs to 6 s, whereas ideal linear scaling would give 2 s....
I made an interesting observation and was actually able to make the Keras model parallelize well! The basic model has to be placed on the cpu:0 device. By default it's placed...
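A minimal sketch of that placement, assuming the TF backend and the `keras.utils.multi_gpu_model` utility from later Keras 2.x releases (or an equivalent hand-rolled helper); `build_model()` is a hypothetical function returning an uncompiled Keras model:

```python
import tensorflow as tf
from keras.utils import multi_gpu_model

with tf.device('/cpu:0'):
    base_model = build_model()  # weights live on the CPU parameter server
parallel_model = multi_gpu_model(base_model, gpus=2)
parallel_model.compile(optimizer='sgd', loss='categorical_crossentropy')
# each GPU then runs its tower on a slice of the batch
```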
Ahh, in the case of 1 GPU, we should leave it on gpu:0, not cpu:0. Hm... so with this fixed, the basic model on 1 GPU runs at 2 s/epoch...
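Following that correction, the base model's device would be chosen conditionally, e.g. (sketch; `n_gpus` and `build_model()` are hypothetical names as above):

```python
# keep a single-GPU model on gpu:0; only use cpu:0 as the parameter
# server when replicating across multiple GPUs
device = '/cpu:0' if n_gpus > 1 else '/gpu:0'
with tf.device(device):
    base_model = build_model()
```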
It seems to me that the problem might be caused by the fact that only predictions are computed in parallel. Then they are moved to the parameter server (cpu:0 or...
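For illustration, a simplified `make_parallel`-style sketch of the pattern being described (not the exact code from the thread): each tower computes predictions on its slice of the batch, the outputs are concatenated on the parameter-server device, and the loss/gradients are therefore evaluated there rather than on the GPUs.

```python
import tensorflow as tf
from keras.layers import Input, Lambda, concatenate
from keras.models import Model

def slice_batch(x, n_gpus, part):
    # take the `part`-th 1/n_gpus slice of the batch dimension
    size = tf.shape(x)[0] // n_gpus
    return x[part * size:(part + 1) * size]

def make_parallel(base_model, n_gpus):
    x = Input(shape=base_model.input_shape[1:])
    towers = []
    for g in range(n_gpus):
        with tf.device('/gpu:%d' % g):
            batch_slice = Lambda(slice_batch,
                                 arguments={'n_gpus': n_gpus, 'part': g})(x)
            towers.append(base_model(batch_slice))  # predictions only
    with tf.device('/cpu:0'):
        # outputs are gathered on the parameter server; the loss (and thus
        # the gradient computation) is attached here, not on the GPUs
        merged = concatenate(towers, axis=0)
    return Model(inputs=x, outputs=merged)
```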