
Results 7 comments of danieltanhx

You also need to set the initial value for --lambda_ds, because of line 95 and line 143 of core/solver.py:

```python
# remember the initial value of ds weight
initial_lambda_ds = args.lambda_ds
```
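For context, a minimal sketch of how a StarGAN v2-style training loop typically uses that saved value to decay the diversity-sensitive loss weight linearly over `ds_iter` iterations (the argument names follow the repo's flags, but the exact schedule in your copy of solver.py may differ):

```python
from types import SimpleNamespace

# Hypothetical args mirroring the repo's flags; --lambda_ds must be non-zero
# at the start, otherwise the decay below never runs.
args = SimpleNamespace(lambda_ds=1.0, ds_iter=100000, total_iters=100000)

initial_lambda_ds = args.lambda_ds  # remember the initial value of ds weight

for step in range(args.total_iters):
    # ... training step: lambda_ds scales the diversity-sensitive loss term ...
    if args.lambda_ds > 0:
        # decay the weight linearly towards zero over ds_iter iterations
        args.lambda_ds -= initial_lambda_ds / args.ds_iter
```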

Try modifying the Dataset class as in this notebook: https://github.com/Hramchenko/simplified_pix2pixHD/blob/main/simplified_pix2pixHD.ipynb
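A minimal sketch of the kind of paired-image Dataset that notebook builds; the directory layout (`label/` and `img/` subfolders) and the transforms here are assumptions, not the notebook's exact code:

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedImageDataset(Dataset):
    """Loads (label, image) pairs from two parallel folders."""

    def __init__(self, root, size=512):
        # Assumed layout: root/label/xxx.png paired with root/img/xxx.png
        self.label_dir = os.path.join(root, "label")
        self.img_dir = os.path.join(root, "img")
        self.names = sorted(os.listdir(self.label_dir))
        self.transform = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
            transforms.Normalize((0.5,) * 3, (0.5,) * 3),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        label = Image.open(os.path.join(self.label_dir, name)).convert("RGB")
        image = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        return self.transform(label), self.transform(image)
```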

Setting the environment variable before the Trainer starts will do the job (see https://discuss.huggingface.co/t/how-to-turn-wandb-off-in-trainer/6237):

```python
import os
os.environ["WANDB_DISABLED"] = "true"
```
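An alternative, assuming a recent transformers version, is to disable the wandb integration through TrainingArguments instead of the environment variable (a sketch; model and dataset setup omitted):

```python
from transformers import TrainingArguments

# report_to="none" turns off all logging integrations, including wandb
training_args = TrainingArguments(
    output_dir="out",
    report_to="none",
)
```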

I think it's related to a PyTorch version issue; modifying this section of the code will do:

```python
...
require_grad_params.append(_module._modules[name].lora_up.parameters())
require_grad_params.append(_module._modules[name].lora_down.parameters())
wt_tensor_type = _module._modules[name].lora_up.weight.dtype
if loras is not None:
    _module._modules[name].lora_up.weight.data = loras.pop(0).to(wt_tensor_type)
    _module._modules[name].lora_down.weight.data = loras.pop(0).to(wt_tensor_type)
...
```

It's related to https://github.com/cloneofsimo/lora/commit/4869fe3426ea98607084c86cdc9b4785d67a5f6d. Those who don't want to deal with it can run `git checkout 4869fe3426ea98607084c86cdc9b4785d67a5f6d` after cd-ing into the lora folder, then restart.

This is a showstopper. I cannot proceed further on Google Colab or Kaggle, since their VRAM is limited to 16GB.

Found the bug: "PLS remove model = model.merge_and_unload() and reuse the original 4bit base model instead of the fp16 base model". Details are in https://github.com/artidoro/qlora/issues/254:

```python
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
bnb_config...
```
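For reference, a minimal sketch of what the surrounding 4-bit loading setup usually looks like with transformers and bitsandbytes; the model id and compute-dtype string are placeholders, not values from the issue:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_4bit_compute_dtype = "bfloat16"  # placeholder; use the dtype from your own config
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

# Reuse the original 4-bit base model; do not merge the adapter into an fp16 copy
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```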