LalchandPandia
Thanks for the reply! My question is: which variant of Llama 2 should be used, chat or text-completion?
@loadams I have changed the title
It is resolved. In the YAML file, change ${global_seed} to ${variables.global_seed}.
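A minimal sketch of the fix, assuming a `variables` block like the ones in the example YAMLs (the surrounding keys here are illustrative):

```yaml
variables:
  global_seed: 17

# interpolations must reference the variables block explicitly,
# i.e. ${variables.global_seed} rather than the bare ${global_seed}
seed: ${variables.global_seed}
```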
Hi, either way is fine. Would it be possible to update the README to include the flash-attention version (and whether it requires ALiBi) needed to run the fine-tuning GPU example, along with...
A follow-up on the task. I can see that the input ids are of the form: id(Question): - id(Options) -\n\n... \n id(Answer): . And the label contains -100 for all the...
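If I understand the label layout correctly, the -100 entries rely on PyTorch's default `ignore_index`: positions labeled -100 (the question and option tokens) contribute nothing to the loss, so only the answer tokens are trained on. A minimal sketch, with made-up shapes:

```python
import torch
import torch.nn.functional as F

# 4 token positions over a vocab of 10 (illustrative numbers)
logits = torch.randn(4, 10)
# first two positions (question/options) masked out with -100,
# which is cross_entropy's default ignore_index
labels = torch.tensor([-100, -100, 3, 7])

loss = F.cross_entropy(logits, labels)

# equivalent to averaging the loss over only the unmasked positions
mask = labels != -100
manual = F.cross_entropy(logits[mask], labels[mask])
assert torch.allclose(loss, manual)
```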
Thanks for confirming that. Just a suggestion regarding the README in the scripts/eval section: it would be good to have a section describing how, under the hood...
It does not work with a local path. I have fine-tuned the Llama 7B model and updated the tokenizer to contain extra tokens. Basically, I want to evaluate arc_challenge on this model: composer...
Once the model is loaded, temperature is set to 0.6 by default and do_sample=True. I verified this by inspecting the output of model.__dict__. So the warning gets triggered...
Thanks for the quick reply. When do_sample is set to False, the temperature parameter is not used at all. Even though it appears to be set to 0.6, it would...
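For anyone else hitting this warning: the reason temperature is irrelevant when do_sample=False is that greedy decoding takes the argmax of the logits, and dividing every logit by a positive temperature preserves their ordering. A toy sketch (no transformers dependency, illustrative numbers only):

```python
def greedy(logits):
    # greedy decoding (do_sample=False) just picks the highest-logit token
    return max(range(len(logits)), key=lambda i: logits[i])

def scaled(logits, temperature):
    # temperature rescales logits but preserves their relative order
    return [l / temperature for l in logits]

logits = [1.0, 3.0, 2.0]
# same token is selected for any temperature > 0
assert greedy(scaled(logits, 0.6)) == greedy(logits)
```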