how to finetune with gemma model?
I have already downloaded the gemma-7b-it model from Hugging Face, but I can't find a script for fine-tuning it with my own data.
help
How do I do SFT with Gemma? Can you tell me the SFT data format?
@pengchongjin
Hi there, unfortunately, this repo doesn't provide finetuning features.
Here are a few alternatives that might fit your needs:
- On the Gemma model card in Vertex Model Garden, there are a few notebooks that demonstrate how to do fine-tuning and then deploy to Vertex endpoints.
- On the Gemma model card on Kaggle, there are a few notebooks that use KerasNLP to do fine-tuning.
- Hugging Face demonstrates how to use TRL for fine-tuning in this blog post; a rough sketch of that route follows below.
Hope it helps.
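For reference, here is a minimal, untested sketch of the TRL route mentioned above. It assumes the Hugging Face `google/gemma-7b-it` checkpoint, a toy dataset made up for illustration, and argument names from the `trl` SFTTrainer API at the time of writing (they may differ in newer releases); adjust to your own data and hardware.

```python
# Minimal sketch: supervised fine-tuning (SFT) of gemma-7b-it with TRL's SFTTrainer.
# NOTE: the dataset below is a made-up placeholder; replace it with your own data.
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# SFT examples formatted with Gemma's chat turn markers
# (<start_of_turn>user / <start_of_turn>model). The field name "text" is
# arbitrary and only needs to match dataset_text_field below.
examples = [
    {
        "text": "<start_of_turn>user\nWhat is JAX?<end_of_turn>\n"
                "<start_of_turn>model\nJAX is a numerical computing library "
                "with autograd and XLA compilation.<end_of_turn>"
    },
]
train_dataset = Dataset.from_list(examples)

# LoRA adapters keep memory requirements manageable for a 7B model.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="gemma-7b-it-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
    ),
)
trainer.train()
```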
@pengchongjin Is it possible to implement a class for fine-tuning the model inside this repo, similar to what was done with llama-recipes?
Are there any tutorials for fine-tuning the 7b-it-quant model?
Hi @aliasneo1
There are a few tutorials that demonstrate fine-tuning the gemma-2b model. You can follow similar procedures to fine-tune the Gemma variant gemma-7b-it.
Here are some resources:
- Fine-tuning Gemma using JAX and Flax.
- Fine-Tuning Gemma Models in Hugging Face with PyTorch on GPU and TPU.
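Since memory is usually the blocker for the 7B variant, here is a rough, untested sketch of the QLoRA pattern (4-bit loading via bitsandbytes plus LoRA adapters) applied to the Hugging Face `google/gemma-7b-it` checkpoint. This is not specific to the Kaggle 7b-it-quant weights, and the argument names assume current transformers/peft releases.

```python
# Rough sketch: load gemma-7b-it in 4-bit and attach LoRA adapters (QLoRA)
# so fine-tuning can fit on a single GPU.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b-it"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Prepare the quantized model for training and wrap it with LoRA adapters.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    ),
)
model.print_trainable_parameters()

# From here the model can be passed to a standard transformers Trainer or to
# TRL's SFTTrainer, as in the earlier sketch.
```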