When I try pruning YOLOv9, the pruner does not work normally.

ChangYuance opened this issue 11 months ago · 7 comments

I made some modifications to val_dual.py in YOLOv9, as follows: under the # Configure section, right after model.eval(), I inserted the pruning code:

# here do model pruning
################################################################################
# Pruning
import torch
import torch_pruning as tp1

example_inputs = torch.randn(1, 3, 640, 640).to(device)
imp = tp1.importance.MagnitudeImportance(p=2)  # L2-norm pruning

ignored_layers = []
from models.yolo import Detect
for m in model.modules():
    if isinstance(m, torch.nn.Conv2d) and m.out_channels == 1:
        ignored_layers.append(m)  # DO NOT prune the final classifier!
print(ignored_layers)

iterative_steps = 1  # progressive pruning
pruner = tp1.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=iterative_steps,
    pruning_ratio=0.5,  # remove 50% of channels, e.g. ResNet18 = {64, 128, 256, 512} => ResNet18_Half = {32, 64, 128, 256}
    ignored_layers=ignored_layers,
)

base_macs, base_nparams = tp1.utils.count_ops_and_params(model, example_inputs)
pruner.step()
pruned_macs, pruned_nparams = tp1.utils.count_ops_and_params(model, example_inputs)
print(model)
print("Before Pruning: MACs=%f M, #Params=%f M" % (base_macs / 1e6, base_nparams / 1e6))
print("After Pruning: MACs=%f M, #Params=%f M" % (pruned_macs / 1e6, pruned_nparams / 1e6))

But running it raises an error:

Exception has occurred: AssertionError
Dependency graph relies on autograd for tracing. Please check and disable the torch.no_grad() in your code.

  File "/home/chang_yuance/yolov9/val_dual_pruning.py", line 156, in run
        model,
        example_inputs,
        importance=imp,
        iterative_steps=iterative_steps,
        pruning_ratio=0.5,  # remove 50% of channels
        ignored_layers=ignored_layers,
    )
    base_macs, base_nparams = tp1.utils.count_ops_and_params(model, example_inputs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/chang_yuance/yolov9/val_dual_pruning.py", line 399, in main
    run(**vars(opt))
  File "/home/chang_yuance/yolov9/val_dual_pruning.py", line 426, in <module>
    main(opt)
AssertionError: Dependency graph relies on autograd for tracing. Please check and disable the torch.no_grad() in your code.

I found smart_inference_mode in YOLOv9's utils:

def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')):
    # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator
    def decorate(fn):
        return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn)
    return decorate

I tried commenting out the @smart_inference_mode() decorator, which seems to be what triggered the error. With it removed, the error is gone, but the pruner still does not work properly. Constructing the pruner,

pruner = tp1.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=iterative_steps,
    pruning_ratio=0.5,
    ignored_layers=ignored_layers,
)

now succeeds very quickly, but note what the pruner looks like inside:

pruner = <torch_pruning.pruner.algorithms.magnitude_based_pruner.MagnitudePruner object at 0x7ff481059b50>
    DG = <torch_pruning.dependency.DependencyGraph object at 0x7ff45d6cfb50>
    current_step = 0
    global_pruning = False
    head_pruning_ratio = 0.0
    head_pruning_ratio_dict = {}
    ignored_layers = [
        Conv2d(64, 1, kernel_size=(1, 1), stride=(1, 1)),              # 6 entries like this
        Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1), bias=False),  # 2 entries like this
    ]
    ignored_params = []
    importance = <torch_pruning.pruner.importance.MagnitudeImportance object at 0x7ff4791bd590>
    in_channel_groups = {
        Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=4): 4,  # 6 entries like this
        Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), groups=4): 4,                  # 6 entries like this
    }
    init_num_heads = {}
    initial_total_channels = 0
    initial_total_heads = 0
    isomorphic = False
    iterative_steps = 1
    layer_init_in_ch = {Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1), bias=False): 16}
    layer_init_out_ch = {Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1), bias=False): 1}

This is completely different in structure from what I got when reproducing the YOLOv7 pruning code. What do I need to do so that the model is successfully loaded by the pruner? Right now, nothing changes after pruning; the model size is exactly the same before and after:

Before Pruning: MACs=5373.508000 M, #Params=2.616950 M
After Pruning:  MACs=5373.508000 M, #Params=2.616950 M
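One possible way to diagnose this (a sketch, not verified on YOLOv9): initial_total_channels = 0 above suggests the dependency graph found nothing to prune, so build the DependencyGraph directly and count the groups tracing discovers, assuming the same model, example_inputs, and ignored_layers as above:

import torch
import torch.nn as nn
import torch_pruning as tp

# Re-run the tracing step by hand (gradients must be enabled for this to work).
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Count how many pruning groups the trace actually discovered.
num_groups = sum(1 for _ in DG.get_all_groups(ignored_layers=ignored_layers,
                                              root_module_types=[nn.Conv2d, nn.Linear]))
print("prunable groups found:", num_groups)  # 0 would explain why pruner.step() changes nothing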

ChangYuance · Feb 26 '25

I had a similar issue, but after enabling requires_grad for all the model parameters, I got my model pruned. You may try it as well:

for name, param in model.named_parameters():
    param.requires_grad = True  # Enable gradient computation for all parameters
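Putting the pieces together, a minimal end-to-end sketch of this fix (hypothetical ordering; it assumes the same model, device, and ignored_layers as in the original post, and that @smart_inference_mode() has been removed from run()):

import torch
import torch_pruning as tp

model.eval()  # eval mode is fine; what matters is that autograd is enabled
for _, param in model.named_parameters():
    param.requires_grad = True  # re-enable grads so DependencyGraph can trace

example_inputs = torch.randn(1, 3, 640, 640).to(device)
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=tp.importance.MagnitudeImportance(p=2),
    iterative_steps=1,
    pruning_ratio=0.5,
    ignored_layers=ignored_layers,
)
pruner.step()  # with tracing working, this should now actually remove channels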

karthikiitm87 · May 30 '25

Anyway, did you run it on Colab?

alexxony · Jun 20 '25

Not really, but in a Jupyter notebook.

karthikiitm87 · Jun 20 '25

OK, so it works well now?

alexxony · Jun 20 '25

Yes. It works perfectly fine now.

karthikiitm87 · Jun 20 '25

Can I get your notebook, please?

alexxony · Jun 20 '25

Cool, thanks for your reply. I am no longer working in this area, though.

ChangYuance · Jun 20 '25