Yibo Jin
### Please ask your question It seems that recover_inference_program in PaddleSlim provides a way to convert an inference model back into a trainable model. Could you give a demo showing how to use it?
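Not an official demo, but a minimal sketch of how such a utility might be wired up, assuming `recover_inference_program` takes the loaded inference `Program` and returns a trainable one; the import path and exact signature below are assumptions and should be checked against the installed PaddleSlim version.

```python
# Sketch only: the import location and signature of recover_inference_program
# are assumptions, not verified against a specific PaddleSlim release.
import paddle
from paddleslim.common import recover_inference_program  # assumed path

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Load a previously exported inference model (path prefix of .pdmodel/.pdiparams).
infer_prog, feed_names, fetch_targets = paddle.static.load_inference_model(
    "./inference/model", exe)

# Assumed call: rebuild a trainable Program from the inference Program
# (re-enabling gradients and persistable parameters).
train_prog = recover_inference_program(infer_prog)

# From here, a loss and optimizer would be appended to train_prog inside a
# program_guard, and exe.run(train_prog, feed=..., fetch_list=...) used as in
# any static-graph training loop.
```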
I tried to convert my dataset's image ids to ints, but there are still errors. How should I fix this? By the way, I used VisDrone2019-Det. Traceback (most recent call last):...
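In case it helps narrow things down, here is a small sketch of the usual id fix, assuming the dataset was converted to COCO-style JSON annotations (the file names are placeholders); the truncated traceback may of course point at a different cause.

```python
# Sketch under assumptions: COCO-style annotation JSON with string image ids
# (e.g. VisDrone file stems); paths are placeholders.
import json

with open("annotations/train.json") as f:
    coco = json.load(f)

# Map every original image id to a small consecutive integer,
# then rewrite both the images and annotations tables consistently.
id_map = {img["id"]: i for i, img in enumerate(coco["images"], start=1)}

for img in coco["images"]:
    img["id"] = id_map[img["id"]]
for ann in coco["annotations"]:
    ann["image_id"] = id_map[ann["image_id"]]

with open("annotations/train_intids.json", "w") as f:
    json.dump(coco, f)
```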
recover_inference_program in PaddleSlim seems to offer a way to convert an inference model into a training model, but I could not find a concrete usage example. Could you give a simple demo of how to use this feature and then train with it?
Using the config with the knowledge_distillation algorithm for compression at /examples/torch/classification/configs/resnet34_pruning_geometric_median_kd.json, training runs, but there is no obvious difference between training for 20 epochs and for 100 epochs. How can I confirm that KD actually works after pruning...
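One way to check, sketched below under the assumption of a standard NNCF training loop (the base model, data loader, and optimizer setup are placeholders): log the compression loss, which is where the distillation term lives, separately from the task loss and watch whether it is non-zero and decreasing.

```python
# Sketch under assumptions: train_loader is an existing DataLoader; depending
# on the config, register_default_init_args() may be needed before
# create_compressed_model().
import torch
import torch.nn.functional as F
from torchvision.models import resnet34
from nncf import NNCFConfig
from nncf.torch import create_compressed_model

nncf_config = NNCFConfig.from_json(
    "examples/torch/classification/configs/resnet34_pruning_geometric_median_kd.json")
compression_ctrl, model = create_compressed_model(resnet34(), nncf_config)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for images, targets in train_loader:  # placeholder DataLoader
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(images), targets)
    comp_loss = compression_ctrl.loss()  # the KD term is part of this loss
    (task_loss + comp_loss).backward()
    optimizer.step()
    # If the compression/KD part stays near zero or never moves,
    # distillation is effectively inactive.
    print(f"task={task_loss.item():.4f}  compression/kd={float(comp_loss):.4f}")
```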
The model architecture seems to change after pruning, with some unexpected modules inserted, and the activation function has changed as well. How can I implement channel pruning without carrying over such changes? It changed to...
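If the goal is only to zero out channels without the compression framework's wrapper modules, one alternative (a sketch, not necessarily the project's recommended path) is PyTorch's built-in structured pruning, which reparametrizes the weights in place and leaves the module graph and activations untouched; the resnet34 model and the 30% ratio below are just examples.

```python
# Sketch of plain PyTorch structured channel pruning: masks whole output
# channels without inserting new modules or touching activation functions.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet34  # example model, an assumption

model = resnet34()
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # L2-norm structured pruning of 30% of output channels (dim=0).
        prune.ln_structured(module, name="weight", amount=0.3, n=2, dim=0)

# Optionally bake the masks into the weights and drop the pruning hooks,
# so the saved state_dict keeps the original parameter names.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")
```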
### Reminder

- [X] I have read the README and searched the existing issues.

### Reproduction

Shell script:

```shell
GAS=${GAS:-2}
LR=${LR:-1e-3}
STEP=${STEP:-250}
CARD=${CARD:-4}
TRY_NUM=${TN:-1}
FINE_TUNING_TYPE=lora
FINE_TUNING_ARGS="--lora_target q_proj,v_proj"
MODEL_VENDER='meta-llama'
MODEL_NAME=${MN:-Yi-34b-chat-hf}
TEMPLATE_TYPE='yi'
echo...
```
I fine-tuned BLOOM with LoRA and would like to quantize the model with GPTQ.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModelForCausalLM

self.model = AutoModelForCausalLM.from_pretrained(
    self.config['checkpoint_path'],
    device_map='auto',
)
# load adapter
self.model = PeftModelForCausalLM.from_pretrained(self.model, '/tmp/bloom_ori/lora_bloom')
```

Some errors happened...
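Hard to say without the full error, but a common route is sketched below: merge the LoRA adapter into the base weights first, then quantize the merged checkpoint with a GPTQ tool. The base-model path is a placeholder; the adapter path matches the snippet above.

```python
# Sketch, not a confirmed fix: fold the LoRA deltas into the base model so
# GPTQ sees an ordinary causal-LM checkpoint.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("/path/to/bloom_checkpoint",  # placeholder
                                            device_map="auto")
merged = PeftModel.from_pretrained(base, "/tmp/bloom_ori/lora_bloom")
merged = merged.merge_and_unload()  # merges LoRA weights into the base model
merged.save_pretrained("/tmp/bloom_merged")
# The merged directory can then be passed to a GPTQ tool (e.g. AutoGPTQ)
# like any regular Hugging Face checkpoint.
```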