Aptha K S
> Thank you for the answers. I have another question, sorry for bothering you. I just don't get how to load my own dataset, for example I...
Let me fix this @joecummings
Hi @D-W-, I am still facing this issue in `azure-ai-ml 1.7.2`. Any update on the permanent fix?
Any update on converting raw Meta models to HF?
Noob question: for converting `meta_model_0.pt` to GGUF, do we need to convert to HF first and then to GGUF (fp16), or can we do it directly from `meta_model_0.pt`?
@calmitchell617 It works! Now that the PR is merged we can use it directly. The correct way to convert Meta to GGUF is Meta -> HF -> GGUF. @kartikayk I tried...
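The Meta -> HF -> GGUF path described above can be sketched roughly as follows. This is a minimal sketch, not confirmed by this thread: the conversion script locations, flags, and all paths/output names here are assumptions based on the `transformers` and `llama.cpp` repositories, so check them against the versions you have checked out.

```shell
# Step 1 (assumed): raw Meta checkpoint -> HF format, using the conversion
# script shipped inside the transformers repository.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
  --input_dir /path/to/meta_checkpoint \
  --model_size 8B \
  --llama_version 3 \
  --output_dir /path/to/hf_model

# Step 2 (assumed): HF checkpoint -> GGUF (fp16), using llama.cpp's
# conversion script from a llama.cpp checkout.
python convert_hf_to_gguf.py /path/to/hf_model \
  --outtype f16 \
  --outfile /path/to/model-f16.gguf
```

The resulting `.gguf` file can then be quantized further or loaded directly with llama.cpp-based tooling.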
```
prompt:
  system: "You are a kind and helpful assistant. Respond to the following request."
  user: "Write a generation recipe for torchtune."
```
This logic is not working for Llama...
@sgupta1007 Since the adapter is already merged, why do we need to provide both the adapter and the model weights?
```
model:
  _component_: torchtune.models.llama3_1.llama3_1_8b

checkpointer:
  _component_: torchtune.utils.FullModelHFCheckpointer
  checkpoint_dir: /path/output/
  checkpoint_files: [
    hf_model_0001_0.pt,
    hf_model_0002_0.pt,
    hf_model_0003_0.pt,
    hf_model_0004_0.pt,
  ]
  output_dir: /path/output/
  model_type: LLAMA3

device: cuda
dtype: bf16
seed: 1234

# Tokenizer arguments
tokenizer: ...
```