ProGamerGov
I removed `self.output` and the resulting output image was exactly the same as with `self.output`. I also can't find any mention of anything similar to `self.output` being required in PyTorch...
I have 2 new sets of GPU benchmarks:

**GRID K520:**

Command | Time
--- | ---
LBFGS nn | 236 seconds
LBFGS cudnn | 226 seconds
LBFGS cudnn autotune | ...
I recreated the color-independent style transfer function, using only Python's PIL/Pillow library:

```
# Combine the Y channel of the generated image and the UV/CbCr channels of the
# content...
```
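The truncated comment above can be sketched as follows — a minimal reconstruction, assuming it describes the usual luminance-transfer trick (take the Y channel from the stylized result and the Cb/Cr chroma channels from the content image). The function name `original_colors` is hypothetical, not from the original snippet:

```python
from PIL import Image

def original_colors(content_img, stylized_img):
    # Convert both images to the YCbCr color space.
    content_ycc = content_img.convert('YCbCr')
    stylized_ycc = stylized_img.convert('YCbCr')
    # Luminance (Y) comes from the stylized image; chroma (Cb, Cr)
    # comes from the content image, preserving its original colors.
    y, _, _ = stylized_ycc.split()
    _, cb, cr = content_ycc.split()
    return Image.merge('YCbCr', (y, cb, cr)).convert('RGB')
```

Both inputs are assumed to be RGB `Image` objects of the same size; in practice the stylized output may need resizing to match the content image first.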
I have published the Neural-Style-PT code as its own project now: https://github.com/ProGamerGov/neural-style-pt

* It's been tested with both Python 2 and 3.
* PyTorch v0.4 or later is required.

I...
@vibber Your strategy could have something to do with it. But we would need to know what GPUs you are using, how much memory they have, what layers you are...
Testing Cuda 8.0 (`cuda-repo-ubuntu1604_8.0.61-1_amd64.deb`, CUDA Toolkit 8.0 GA2 (Feb 2017)) with `th neural_style.lua -gpu 0 -backend cudnn`, and `cudnn-8.0-linux-x64-v5.0-ga.tgz`, seems to use more memory as well. This is interesting compared...
Another setup for comparison: `Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-79-generic x86_64)`, `cuda-repo-ubuntu1404_7.5-18_amd64.deb`, `cudnn-7.0-linux-x64-v4.0-prod.tgz` (CUDA 7.5 and cuDNN v4). Setup memory usage:

```
ubuntu@ip-Address:~/neural-style$ nvidia-smi
Wed Oct 25 23:34:49 2017
+------------------------------------------------------+
|...
```
An interesting side effect of changing the CUDA, cuDNN, and Torch7 versions (and maybe even the Ubuntu version) is that the effect of the seed value seems to change. So if you use...
Another way to slightly lower memory usage seems to be stripping unused layers from a VGG model: https://github.com/jcjohnson/neural-style/issues/428#issuecomment-370185610
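In PyTorch terms, the idea can be sketched like this — a hypothetical helper, not code from the linked issue — truncating a model's `nn.Sequential` feature stack after the deepest layer the style/content losses actually use, so later layers are never allocated or evaluated:

```python
import torch.nn as nn

def strip_after(features, last_layer_idx):
    # Keep only the layers up to and including last_layer_idx.
    # Layers past the deepest loss layer never contribute to the
    # optimization, so dropping them saves memory and compute.
    kept = list(features.children())[:last_layer_idx + 1]
    return nn.Sequential(*kept)
```

For example, if the deepest style layer used is index 21 of VGG-19's `features`, `strip_after(vgg.features, 21)` discards everything after it.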
@flaushi Are you referring to instance normalization from [fast-neural-style](https://github.com/jcjohnson/fast-neural-style)?