Rusty Mina
fixed in #137
Hi. Sorry for the very late reply. Pretrained models are hosted [here](https://github.com/rrmina/fast-neural-style-pytorch/tree/master/transforms) and [here](https://drive.google.com/open?id=1m9g1PP7gPo-jPfRDxzdGozMzftu3az6P)
You can start by modifying the hardcoded style-image paths described in https://github.com/rrmina/fast-neural-style-pytorch#training-style-transformation-network. I have also prepared a [notebook for training](https://colab.research.google.com/github/rrmina/fast-neural-style-pytorch/blob/master/notebook/mosaic_TransformerNetwork.ipynb). You just need to change the style image, and if...
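A minimal sketch of the change described above. The constant name `STYLE_IMAGE_PATH` is an assumption about how the hardcoded path appears in the training script, not the repo's exact identifier; check `train.py` for the actual name.

```python
import os

# Hypothetical constant: the real variable name in train.py may differ.
# Point this at your own style image before starting training.
STYLE_IMAGE_PATH = "images/mosaic.jpg"

def style_image_exists(path=STYLE_IMAGE_PATH):
    # Sanity check before training: fail fast if the style-image path is wrong.
    return os.path.isfile(path)
```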
Hi, sorry about that. It seems I forgot to upload the version of video.py with the ffmpeg option. Unfortunately it's on my other machine, and I'm out of the country right now....
Hi @mixuala! It seems there's a problem with earlier versions of Pillow. As @osmarcedron suggested, this can be solved by upgrading your Pillow to its latest version (i.e....
They are basically the same except for two major things: 1. This repo uses the original VGG network used in the [original fast-neural-style paper](https://arxiv.org/abs/1603.08155). In contrast, the PyTorch team's example...
I ran my experiments on another laptop, to which I currently don't have access. I can only assume that these are the ones used in training: [Bayanihan...
You may use it however you like :)
As far as I know, torchvision's VGG assumes an RGB image input with pixel values in [0, 1], in contrast to the original VGG weights, which assume a BGR image input with pixel values...
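A sketch of the two preprocessing conventions being contrasted above. The torchvision path uses the standard ImageNet mean/std normalization; the second path follows the original (Caffe-trained) VGG convention of BGR channel order with per-channel mean subtraction in the [0, 255] range. The helper names are illustrative, not functions from either codebase.

```python
import numpy as np

def preprocess_torchvision(img_uint8):
    # torchvision VGG convention: RGB input, scaled to [0, 1],
    # then normalized with the ImageNet channel means and stds.
    img = img_uint8.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    return (img - mean) / std

def preprocess_caffe_vgg(img_uint8):
    # Original (Caffe-trained) VGG convention: BGR channel order,
    # pixel values kept in [0, 255], ImageNet channel means subtracted.
    img = img_uint8.astype(np.float32)[..., ::-1]  # RGB -> BGR
    mean = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    return img - mean
```

Feeding an image preprocessed with one convention into weights trained under the other is a common source of degraded style-transfer results.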
Anyway, someone has already reported this issue to me (#10). I have uploaded the VGG16 weights here: https://drive.google.com/file/d/1a0sFcNEvmIy21PE0yp7tzJuU0vhhV0Ln/view?usp=sharing I will also update the readme and the notebooks with...