kiseliu
Sorry, I just found that the model file I used is from PaddleNLP:

```shell
wget https://bj.bcebos.com/paddlenlp/models/transformers/plato2/24L.pdparams
```

This problem is solved, thanks a lot. But there is another question. Could...
I found that I can use the following code to transform the model file "./24L/Plato" to dygraph format:

```python
fluid.io.load_vars(
    self.exe,
    model_path,
    main_program=self.program,
    predicate=__predicate__)
```
...
> are you using depacoda/llama-7b-hf? and the exact same training command as in readme?

Yes, and the exact same training command as in the README.
**I have the same question.** Yes, the separate computation for LoRA is minimal. However, in the inference phase, the LoRA weights are merged into the pruned weights. Therefore,...
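The concern about merging can be seen in a minimal NumPy sketch (all sizes and variable names here are hypothetical, not taken from the repository under discussion): once the low-rank update `B @ A` is added to a pruned weight matrix `W` at inference time, the zeros introduced by pruning are filled back in, so the merged matrix is dense again.

```python
import numpy as np

# Hypothetical dimensions: d-by-d weight, LoRA rank r.
d, r = 8, 2
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))
W[np.abs(W) < 0.8] = 0.0             # prune: zero out small-magnitude weights
A = rng.standard_normal((r, d))      # LoRA down-projection
B = rng.standard_normal((d, r))      # LoRA up-projection

W_merged = W + B @ A                 # inference-time merge: W' = W + BA

sparsity_before = float(np.mean(W == 0))
sparsity_after = float(np.mean(W_merged == 0))
print(sparsity_before, sparsity_after)  # the pruned zeros do not survive the merge
```

Keeping the LoRA branch separate (`x @ W.T + x @ A.T @ B.T`) preserves the sparsity of `W` during inference, at the cost of the extra low-rank computation.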
@yuzc19 Yes, like you, I also tried the default domain weights; these are the full eval results on Squad_v2:  And I re-generated the domain reweights to account for the different...