Pau Rodriguez

Results: 41 comments by Pau Rodriguez

I have never tried with 152 layers, but it seems normal that memory blows up with so many layers. Did you find a possible solution?

Hi @kirk86, there was an update to train.py that wasn't reflected in test.py. I have updated it. Thanks for pointing it out :) https://github.com/prlz77/ResNeXt.pytorch/blob/master/test.py#L73

Yes, it is right since these numbers are divided by the `groups` of the convolution.
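
In case it helps, here is a minimal PyTorch sketch (illustrative only, not code from this repository) showing how `groups` divides the input-channel dimension of a convolution's weight, which is why those numbers look smaller than expected:

```python
import torch.nn as nn

# A regular 3x3 convolution stores its weight as (out_channels, in_channels, 3, 3).
plain = nn.Conv2d(256, 256, kernel_size=3, padding=1)
print(plain.weight.shape)    # torch.Size([256, 256, 3, 3])

# With groups=32 the input-channel dimension of the weight is divided by the
# number of groups, so the per-filter channel count drops from 256 to 8.
grouped = nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=32)
print(grouped.weight.shape)  # torch.Size([256, 8, 3, 3])
```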

Please make sure that you are executing with the correct command-line parameters. For ``--cardinality 32 --widen_factor 4 --depth 50 --base_width 4`` I get:

```
(stage_1): Sequential(
  (stage_1_bottleneck_0): ResNeXtBottleneck(
    (conv_reduce): Conv2d(64,...
```

ResNeXt bottlenecks are a bit different: if you ask for a base width of 64 and a cardinality of 8, that gives 64*8 = 512. These 512 will be divided...
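
A minimal sketch of that arithmetic (illustrative only, not the exact code from this repository; the layer below is just a stand-in for the grouped 3x3 convolution in the bottleneck):

```python
import torch.nn as nn

base_width, cardinality = 64, 8

# The bottleneck width is base_width * cardinality.
width = base_width * cardinality          # 64 * 8 = 512

# Those 512 channels are divided among the groups of the 3x3 convolution,
# so each of the 8 paths operates on 512 / 8 = 64 channels.
conv = nn.Conv2d(width, width, kernel_size=3, padding=1, groups=cardinality)
print(conv.weight.shape)                  # torch.Size([512, 64, 3, 3])
```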

Hi! Have you tried with a vanilla ResNet, to check whether the problem is in the model or the dataloader? Pau
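
In case it is useful, a hedged sketch of that check (the model, batch shape, and number of classes below are placeholder assumptions): keep your existing dataloader and swap in a stock torchvision ResNet; if the issue goes away, the problem is likely in the custom model, otherwise look at the dataloader.

```python
import torch
import torch.nn.functional as F
import torchvision

# Known-good vanilla model; keep your own dataloader unchanged.
model = torchvision.models.resnet18(num_classes=10)

# Dummy batch standing in for one batch from your dataloader.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))

loss = F.cross_entropy(model(images), labels)
loss.backward()
print(loss.item())
```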

Just write the code to convert MNIST into a sequence of 28x28 pixels and feed it into the LSTM, as done in the examples :) For instance, in the sum...

By 28x28 I mean 28*28 = 784 values, i.e. one-dimensional, not two-dimensional.
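
A minimal sketch of that idea (the hidden size and number of classes are illustrative assumptions, not taken from the examples mentioned above): flatten each image into a length-784 sequence with one pixel per time step and feed it to an LSTM.

```python
import torch
import torch.nn as nn

images = torch.randn(32, 1, 28, 28)            # a batch of MNIST images

# Flatten each image into a 1-D sequence of 28*28 = 784 pixels,
# one scalar feature per time step: (batch, seq_len, features) = (32, 784, 1).
seq = images.view(images.size(0), 28 * 28, 1)

lstm = nn.LSTM(input_size=1, hidden_size=128, batch_first=True)
classifier = nn.Linear(128, 10)

out, _ = lstm(seq)                             # (32, 784, 128)
logits = classifier(out[:, -1, :])             # predict from the last time step
print(logits.shape)                            # torch.Size([32, 10])
```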

I have not tried it, but you could just change this: https://github.com/prlz77/orthoreg/blob/master/orthoreg_pytorch.py#L18, and reshape the linear layer's weight to have the same shape as a `1x1` convolution (for compatibility with the current...
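
A hedged sketch of that reshape (illustrative only; `orthoreg_loss` below is a hypothetical placeholder for whatever function consumes the convolution weights):

```python
import torch.nn as nn

fc = nn.Linear(512, 10)

# A 1x1 convolution keeps its weight as (out_channels, in_channels, 1, 1), so the
# (out_features, in_features) weight of a linear layer can be viewed the same way.
w_as_conv = fc.weight.view(fc.out_features, fc.in_features, 1, 1)
print(w_as_conv.shape)  # torch.Size([10, 512, 1, 1])

# w_as_conv can then be handed to code that expects convolution weights,
# e.g. a hypothetical orthoreg_loss(w_as_conv).
```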