njzxj
+1, I ran into this too. Did you manage to solve it?
> Hi [@njzxj](https://github.com/njzxj) , have you fixed this problem? I also encountered this problem when finetuning wna2.1-t2v-1.3b on my own dataset. When I set lora_alpha to 1, the results are...
> Hi [@njzxj](https://github.com/njzxj) , thanks for your prompt reply. By the way, does "reasonableness of the prompt words" mean that the prompt should be aligned with the video? Could you...
Change in `models.lora.GeneralLoRAFromPeft`:

```python
def load(self, model, state_dict_lora, lora_prefix="", alpha=1.0, model_resource=""):
    state_dict_model = model.state_dict()
    device, dtype, computation_device, computation_dtype = self.fetch_device_and_dtype(state_dict_model)
    lora_name_dict = self.get_name_dict(state_dict_lora)
    for name in lora_name_dict:
        weight_up = state_dict_lora[lora_name_dict[name][0]].to(device=computation_device, dtype=computation_dtype)
        ...
```
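For context, here is a minimal sketch of what the `alpha` parameter in a loader like this typically controls: the LoRA delta `weight_up @ weight_down` is scaled by `alpha` before being merged into the base weight. This is an illustrative example only (names like `merge_lora` are hypothetical, not the repository's actual API), but it shows why a too-large `lora_alpha` can distort the merged weights:

```python
import numpy as np

def merge_lora(weight, weight_up, weight_down, alpha=1.0):
    # Standard LoRA merge: W' = W + alpha * (up @ down)
    # weight_up: (out_dim, rank), weight_down: (rank, in_dim)
    return weight + alpha * (weight_up @ weight_down)

base = np.zeros((2, 2))
up = np.array([[1.0], [0.0]])    # rank-1 LoRA factors
down = np.array([[0.0, 2.0]])

merged = merge_lora(base, up, down, alpha=0.5)
# The delta at position (0, 1) is alpha * 1.0 * 2.0 = 1.0;
# doubling alpha would double every delta, which is why an
# overly large alpha can wash out the base model's behavior.
```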
I also encountered this problem.