Pytorch-Memory-Utils

No detailed use info, only Total Tensor Used Memory

Adam-fei opened this issue 4 years ago · 1 comment

Hi, I'm using your code with: torch 1.10.0+cu113

I used the example code as follows:

import torch
import inspect

from torchvision import models
from gpu_mem_track import MemTracker  # import the GPU memory tracker

device = torch.device('cuda:0')

frame = inspect.currentframe()
gpu_tracker = MemTracker(frame)      # create the GPU memory tracker

gpu_tracker.track()                  # start tracking


dummy_tensor_1 = torch.randn(30, 3, 512, 512).float().to(device)  # 30*3*512*512*4/1000/1000 = 94.37M
dummy_tensor_2 = torch.randn(40, 3, 512, 512).float().to(device)  # 40*3*512*512*4/1000/1000 = 125.82M
dummy_tensor_3 = torch.randn(60, 3, 512, 512).float().to(device)  # 60*3*512*512*4/1000/1000 = 188.74M

gpu_tracker.track()                  # track again
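As a side note, the sizes in the comments can be checked with a quick computation (plain Python, no GPU needed). It also shows why the tracker reports 390.0 Mb: the per-tensor comments use decimal megabytes (divide by 1000²), while the totals line up exactly when counted in 1024-based MiB (90 + 120 + 180 = 390):

```python
def tensor_mb(*shape):
    """Size of a float32 tensor with the given shape, in decimal megabytes."""
    numel = 1
    for d in shape:
        numel *= d
    return numel * 4 / 1000 / 1000  # 4 bytes per float32 element

print(tensor_mb(30, 3, 512, 512))  # 94.37184
print(tensor_mb(40, 3, 512, 512))  # 125.82912
print(tensor_mb(60, 3, 512, 512))  # 188.74368

# The same byte counts in 1024-based MiB sum to exactly 390:
total_bytes = (30 + 40 + 60) * 3 * 512 * 512 * 4
print(total_bytes / 1024 / 1024)   # 390.0
```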

and got the following output in the txt file:

GPU Memory Track | 27-Oct-21-18:37:56 | Total Tensor Used Memory:0.0 Mb Total Allocated Memory:0.0 Mb

At run.py line 12: Total Tensor Used Memory:0.0 Mb Total Allocated Memory:0.0 Mb
At run.py line 19: Total Tensor Used Memory:390.0 Mb Total Allocated Memory:390.0 Mb

As you mentioned in the Readme.md, there should be detailed information about each tensor's GPU usage (the lines beginning with '+').

What should I do?

Adam-fei avatar Oct 27 '21 10:10 Adam-fei

I just figured it out. I had followed the code from your blog post:

https://oldpan.me/archives/pytorch-gpu-memory-usage-track

After comparing the code above with the code in the GitHub Readme.md, I changed

frame = inspect.currentframe()
gpu_tracker = MemTracker(frame)      # create the GPU memory tracker

into

gpu_tracker = MemTracker()      # create the GPU memory tracker

And it then output the detailed per-tensor GPU usage.
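For context on why a tracker can produce per-tensor detail at all: tools like this typically scan the garbage collector's live objects and filter for CUDA tensors. The sketch below shows that general idea only (it is not the library's actual implementation); a plain stand-in class replaces real tensors so it runs without a GPU, and the commented torch checks are the ones such a tracker would plausibly use:

```python
import gc

def find_live_instances(cls):
    """Enumerate live objects of a given type via the garbage collector.

    A GPU-memory tracker would instead filter with something like
    torch.is_tensor(obj) and obj.is_cuda, then read obj.size() and
    obj.dtype to report each tensor's footprint.
    """
    return [obj for obj in gc.get_objects() if isinstance(obj, cls)]

class FakeTensor:
    """Stand-in for a CUDA tensor so the sketch runs anywhere."""
    def __init__(self, shape):
        self.shape = shape

a = FakeTensor((30, 3, 512, 512))
b = FakeTensor((40, 3, 512, 512))
print(len(find_live_instances(FakeTensor)))  # 2
```

Calling `track()` twice and diffing the two snapshots is what lets the report prefix newly appeared tensors with '+' and freed ones with '-'.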

Adam-fei avatar Oct 27 '21 11:10 Adam-fei