BRM10213
Thank you for your response @abhishekkrthakur. I used your merge_adapter on my fine-tuned model, and it created these new files:
```
model-00001-of-00014.safetensors
model-00002-of-00014.safetensors
...
model-00014-of-00014.safetensors
tokenizer_config.json
model.safetensors.index.json
special_tokens_map.json
added_tokens.json
tokenizer.json...
```
I moved my adapter_config.json file to another location; this is the new directory tree of the merged fine-tuned model:
```
tokenizer.model
training_args.bin
requirements.txt
handler.py
training_params.json
checkpoint-26
README.md
adapter_model.safetensors...
```
Thank you @abhishekkrthakur, the test finally works! This is my test code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/opt/huggingface/hub/CodeLlama-34b-Instruct-001"
tokenizer = AutoTokenizer.from_pretrained(model_path)
from_pretrained_kwargs = {...
```
No, this issue is already resolved. I just want to figure out why my trained model adds these `▁` characters. I think my dataset has the wrong encoding, or there...
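To check whether those characters actually come from the training data, a quick scan of the raw text can help. This is only a hedged sketch: `find_suspect_chars` and the sample string are hypothetical. Note that U+2581 ("▁") is the word-boundary marker used internally by SentencePiece-based tokenizers (such as CodeLlama's), so its appearance in data or output usually points at an encoding or decoding mix-up rather than the model itself.

```python
# Hypothetical helper: scan text for characters that often indicate an
# encoding problem. U+2581 is SentencePiece's word-boundary mark;
# U+FFFD is the Unicode replacement character left by bad decoding.
SUSPECT_CHARS = {"\u2581", "\ufffd"}

def find_suspect_chars(text: str) -> set:
    """Return the set of suspect characters found in the text."""
    return {ch for ch in text if ch in SUSPECT_CHARS}

# Illustrative sample line, not from the real dataset:
hits = find_suspect_chars("SELECT \u2581name FROM employees")
print(hits)  # → {'▁'}
```

Running this over each line of the training file would show whether the `▁` characters were already present before training.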
In previous versions of autotrain and transformers, I used a different dataset with autotrain without encountering any issues. I didn't even need to merge the fine-tuned model; I could simply...
I loaded the first dataset (the one I used for my first training) with the current version of autotrain, and I get the same error:
```
RuntimeError: Error(s) in loading state_dict...
```
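A `RuntimeError` from `load_state_dict` is typically a tensor-shape mismatch, e.g. when a checkpoint was saved after the tokenizer (and therefore the embedding matrix) grew by a few added special tokens. A minimal, self-contained illustration with assumed sizes (32000 base vocab, 4 added tokens):

```python
import torch

# A state_dict saved with a larger embedding (e.g. after adding special
# tokens) will not load into a model built for the base vocab size.
base = torch.nn.Embedding(num_embeddings=32000, embedding_dim=8)
extended = torch.nn.Embedding(num_embeddings=32004, embedding_dim=8)

try:
    base.load_state_dict(extended.state_dict())
except RuntimeError as e:
    print("mismatch:", "size mismatch" in str(e))  # → mismatch: True
```

In transformers, the usual remedy when tokens really were added is to call `model.resize_token_embeddings(len(tokenizer))` before loading the fine-tuned weights; if the tokens were added by accident, keeping the tokenizer's vocabulary unchanged avoids the mismatch entirely.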
I compared the base model with the fine-tuned model and found this:
```
diff /opt/huggingface/hub/models--codellama--CodeLlama-34b-Instruct-hf/snapshots/cebb11eacbeecb9189e910d57a8faeadb949978f/special_tokens_map.json CodeLlama-34b-Instruct-Sybase/special_tokens_map.json
1a2,7
> "additional_special_tokens": [
> "▁",
> "▁",
> "▁",
> "▁"
> ],
...
```
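If those extra tokens are unwanted, the `"additional_special_tokens"` entry can be stripped from a copy of special_tokens_map.json. A hedged sketch: the helper and sample dict below are hypothetical, and the real file should be backed up before editing.

```python
def strip_added_special_tokens(special_tokens_map: dict) -> dict:
    """Return a copy of the map without 'additional_special_tokens'."""
    cleaned = dict(special_tokens_map)
    cleaned.pop("additional_special_tokens", None)
    return cleaned

# Sample mirroring the diff above; the other keys are illustrative.
sample = {
    "bos_token": "<s>",
    "eos_token": "</s>",
    "additional_special_tokens": ["\u2581", "\u2581", "\u2581", "\u2581"],
}
cleaned = strip_added_special_tokens(sample)
print(sorted(cleaned))  # → ['bos_token', 'eos_token']
```

Whether deleting the entry is safe depends on whether the adapter was actually trained with those tokens in the vocabulary; if it was, resizing the base model's embeddings instead is the safer route.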