PyTorch-Encoding
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.82 GiB total capacity; 2.20 GiB already allocated; 27.88 MiB free; 2.32 GiB reserved in total by PyTorch)
How can I pass the batch_size to a script that makes use of the encoding Python package?
(torchenc) mona@goku:~$ python test_torch_encoding.py --batch_size 8
Traceback (most recent call last):
File "test_torch_encoding.py", line 15, in <module>
output = model.evaluate(img)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch_encoding-1.2.2b20210130-py3.8-linux-x86_64.egg/encoding/models/sseg/base.py", line 101, in evaluate
pred = self.forward(x)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch_encoding-1.2.2b20210130-py3.8-linux-x86_64.egg/encoding/models/sseg/deeplab.py", line 47, in forward
c1, c2, c3, c4 = self.base_forward(x)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch_encoding-1.2.2b20210130-py3.8-linux-x86_64.egg/encoding/models/sseg/base.py", line 96, in base_forward
c3 = self.pretrained.layer3(c2)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch_encoding-1.2.2b20210130-py3.8-linux-x86_64.egg/encoding/models/backbone/resnet.py", line 95, in forward
out = self.conv2(out)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch_encoding-1.2.2b20210130-py3.8-linux-x86_64.egg/encoding/nn/splat.py", line 50, in forward
x = self.bn0(x)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 131, in forward
return F.batch_norm(
File "/home/mona/venv/torchenc/lib/python3.8/site-packages/torch/nn/functional.py", line 2056, in batch_norm
return torch.batch_norm(
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.82 GiB total capacity; 2.20 GiB already allocated; 27.88 MiB free; 2.32 GiB reserved in total by PyTorch)
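One generic way to shave memory during pure inference (the script below never trains) is to wrap the forward pass in torch.no_grad(), which stops PyTorch from keeping activations for backpropagation. It is possible that model.evaluate already does this internally; if it does not, the pattern looks like this minimal CPU-safe sketch (the toy model here is a stand-in, not the real DeepLab_ResNeSt269):

```python
import torch
import torch.nn as nn

# Toy stand-in for the segmentation model; the real network is far larger.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                      nn.ReLU(),
                      nn.Conv2d(8, 2, 1))
model.eval()

img = torch.randn(1, 3, 64, 64)  # fake input batch of one image

# torch.no_grad() disables autograd bookkeeping, so intermediate
# activations are freed immediately instead of being retained for backward.
with torch.no_grad():
    output = model(img)

print(output.shape)          # torch.Size([1, 2, 64, 64])
print(output.requires_grad)  # False: no autograd graph was built
```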
The code is:
(torchenc) mona@goku:~$ cat test_torch_encoding.py
import torch
import encoding
# Get the model
model = encoding.models.get_model('DeepLab_ResNeSt269_PContext', pretrained=True).cuda()
model.eval()
# Prepare the image
url = 'https://github.com/zhanghang1989/image-data/blob/master/' + \
'encoding/segmentation/pcontext/2010_001829_org.jpg?raw=true'
filename = 'example.jpg'
img = encoding.utils.load_image(encoding.utils.download(url, filename)).cuda().unsqueeze(0)
# Make prediction
output = model.evaluate(img)
predict = torch.max(output, 1)[1].cpu().numpy() + 1
# Get color pallete for visualization
mask = encoding.utils.get_mask_pallete(predict, 'pascal_voc')
mask.save('output.png')
I am not sure how to pass batch_size to this code; I just followed https://github.com/zhanghang1989/PyTorch-Encoding/issues/224. Would your code work at all with a GeForce 1650 Ti GPU?
Thanks a lot for your help.
- I get the error even with a batch size of 1:
(torchenc) mona@goku:~$ python test_torch_encoding.py --batch_size 1
The GPU you're using may be too small to train the models. You may try reducing the --crop-size.
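For this single-image inference script, the analogue of reducing --crop-size is to downscale the input tensor before calling evaluate: activation memory scales with H * W at every layer, so halving each spatial dimension cuts it roughly 4x. A hedged sketch (the resize step is my addition, not part of the library's API, and the input size here is arbitrary):

```python
import torch
import torch.nn.functional as F

# Stand-in for the loaded image tensor, shaped (batch, channels, H, W).
img = torch.randn(1, 3, 480, 480)

# Downscale before inference; halving H and W reduces per-layer
# activation memory by about a factor of four.
small = F.interpolate(img, scale_factor=0.5, mode="bilinear",
                      align_corners=False)

print(small.shape)  # torch.Size([1, 3, 240, 240])
```

The prediction would then be made on small instead of img, at the cost of coarser segmentation detail.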