SwordHolder
Once I set `--input_scale_size` to anything other than 64, an error is reported. It seems the discriminator cannot adapt to the input image size. How did you...
I checked the shapes of the output tensors. When the input size is 64×64: torch.Size([2, 64, 32, 32]) torch.Size([2, 64, 32, 32]) torch.Size([2, 128, 16, 16]) torch.Size([2, 128, 16, 16]) torch.Size([2,...
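As a hedged sketch of why this happens: a discriminator of this kind typically ends in a fully-connected layer whose input size is fixed by the feature map produced from a 64×64 image, so any other input size breaks that layer. The helpers below (`conv_out`, `feature_size`) are illustrative names, assuming a typical stack of stride-2 convolutions with kernel 4 and padding 1:

```python
from math import floor

def conv_out(size, kernel=4, stride=2, padding=1):
    """Spatial output size of one conv layer (standard PyTorch formula)."""
    return floor((size + 2 * padding - kernel) / stride) + 1

def feature_size(input_size, num_layers=4):
    """Feature-map side length after a stack of stride-2 convolutions."""
    size = input_size
    for _ in range(num_layers):
        size = conv_out(size)
    return size

# A 64x64 input shrinks to 4x4, which a Linear layer sized for
# 4 * 4 * channels expects; a 128x128 input yields 8x8 instead,
# so the flattened tensor no longer matches the Linear layer.
print(feature_size(64))   # 4
print(feature_size(128))  # 8
```

Either resizing inputs to 64, or replacing the flatten with adaptive pooling, would avoid the mismatch.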
Decrease the batch size so it is smaller than the number of samples in your dataset.
Set `--sample_per_image 1` and it will be OK.
@vtgist @happsky @shadow111
Can you solve it?
Thanks, but I get the following error:

```
File "G:\Project\A_NST\PytorchWCT-master\util.py", line 28, in __init__
  vgg1 = pytorch_lua_wrapper(args.vgg1)
File "G:\Project\A_NST\PytorchWCT-master\util.py", line 18, in __init__
  self.lua_model = torchfile.load(lua_path)
File "D:\anaconda3\envs\mypytorch\lib\site-packages\torchfile.py", line 424,...
```
The solution is here: https://github.com/bshillingford/python-torchfile/issues/12
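For reference, the workaround discussed in that issue is to pass `force_8bytes_long=True` to `torchfile.load`, since newer Torch builds write longs as 8 bytes, which the default reader misparses. A minimal sketch; `load_t7` and its `loader` parameter are illustrative helpers, not part of the torchfile API:

```python
def load_t7(path, loader=None):
    """Load a Torch7 .t7 file, retrying with 8-byte longs on failure.

    The loader parameter exists only so the retry logic can be
    exercised without a real .t7 file; by default it uses torchfile.
    """
    if loader is None:
        import torchfile  # pip install torchfile
        loader = torchfile.load
    try:
        return loader(path)
    except Exception:
        # Fallback for files written by 64-bit Torch builds
        return loader(path, force_8bytes_long=True)
```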
> Hello, did you solve this problem? I have the same error.

This is because different versions of PyTorch use different methods for calculating convolutional kernel sizes and padding. I...
> File "./src/model/SADRNv2.py", line 98, in forward
>   x = self.block1(x)
> File "/data/Downloads/CondaAg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
>   result = self.forward(*input, **kwargs)
> File "./src/model/modules.py", line 540, in forward
>   out += identity...
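A failure at `out += identity` usually means the residual block's output no longer has the same spatial size as the skip connection, which is exactly what a change in kernel/padding arithmetic would cause. A small sketch of the standard Conv2d size formula; `conv_out` is an illustrative helper, not from the repo:

```python
from math import floor

def conv_out(size, kernel=3, stride=1, padding=1):
    """Spatial output size of a Conv2d layer (standard PyTorch formula)."""
    return floor((size + 2 * padding - kernel) / stride) + 1

# With padding=1, a 3x3 stride-1 conv preserves the size, so the
# residual add `out += identity` lines up:
print(conv_out(56, kernel=3, stride=1, padding=1))  # 56

# With padding=0 the feature map shrinks, and the block output can
# no longer be added to the identity tensor:
print(conv_out(56, kernel=3, stride=1, padding=0))  # 54
```

Checking that every conv in the block uses "same" padding (padding = kernel // 2 for stride 1) is a quick way to rule this out.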