J0eky
Hi, I have checked the gSig values as you suggested; they are both [4, 4]. @j-friedrich
@ydneysay @Zhiwei-Zhai @kewang-seu Hi, have you solved the problem? In my case, the inference time increased after pruning.
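For anyone comparing latency before and after pruning: the measurement itself is easy to skew if you time without warm-up runs or with autograd enabled. A minimal timing sketch in plain PyTorch, using toy conv stacks of my own (not YOLOv8) as stand-ins, where the "pruned" model simply has half the channels:

```python
import time
import torch
import torch.nn as nn

def bench(model, x, warmup=5, iters=20):
    """Average forward-pass latency in seconds, with warm-up and no autograd."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):      # warm-up iterations, excluded from timing
            model(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - t0) / iters

# Toy stand-ins: a dense conv stack vs. one with half the channels,
# mimicking what structural pruning is supposed to produce.
dense  = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(64, 64, 3, padding=1))
pruned = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(32, 32, 3, padding=1))

x = torch.randn(1, 3, 128, 128)
t_dense, t_pruned = bench(dense, x), bench(pruned, x)
print(f"dense: {t_dense * 1e3:.2f} ms, pruned: {t_pruned * 1e3:.2f} ms")
```

If the pruned model still measures slower under a benchmark like this, the extra time is probably coming from somewhere other than the pruned conv FLOPs, e.g. layers the pruner left intact, or channel counts that no longer match the fast kernel paths.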
@MrJoratos Hi, have you solved the problem?
@VainF Hi, after updating torch-pruning from version 1.3.1 to 1.3.3, I didn't encounter the bug I mentioned earlier, and the pruning program could run without any issue. However, I noticed...
Here is my code:

```python
import torch
import torch.nn as nn
from ultralytics import YOLO
import torch_pruning as tp
from ultralytics.nn.modules import Pose

def prune():
    # load trained yolov8x model
    model...
```
@VainF Hi, after updating torch-pruning to 1.3.3, the `pruner.step()` call in yolov8_pruning.py did not execute.
> @VainF Hi, after updating torch-pruning from version 1.3.1 to 1.3.3, I didn't encounter the bug I mentioned earlier, and the pruning program could run without any issue. However, I...
@ztfmars Hi, when I press Enter twice after running:

```
xtuner chat LLM-Research/Meta-Llama-3-8B-Instruct \
  --visual-encoder ./clip-vit-large-patch14-336 \
  --llava ./LLM-Research/llava-llama-3-8b \
  --prompt-template llama3_chat \
  --image ./test.jpg
```

the following error occurs: double enter to...
Hi, have you solved it? @Ataraxy33