
"missing keys in the checkpoint model" error while fine-tuning the model on custom data

Open · AnjaliSetiya opened this issue 10 months ago · 1 comment

I'm trying to fine-tune gliner_multi-v2.1 on custom data:

```python
from gliner import GLiNER
from gliner.data_processing.collator import DataCollator
from gliner.training import Trainer, TrainingArguments  # GLiNER's Trainer adds the focal-loss options used below

model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
data_collator = DataCollator(model.config, data_processor=model.data_processor, prepare_labels=True)


training_args = TrainingArguments(
    seed=42,
    output_dir="models",
    learning_rate=2.9e-6,
    weight_decay=0.0014,
    lr_scheduler_type="cosine",
    warmup_ratio=0.167,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=2,
    eval_strategy="epoch",        # evaluate once per epoch
    save_strategy="epoch",        # checkpoint once per epoch
    dataloader_num_workers=4,
    use_cpu=False,
    report_to="none",
    load_best_model_at_end=True,  # reload the best checkpoint when training ends
    logging_dir="./logs",
    logging_steps=10,
    disable_tqdm=False,
    save_total_limit=1,           # keep only the most recent checkpoint
    focal_loss_alpha=0.6980,      # GLiNER-specific focal-loss parameters
    focal_loss_gamma=1.62,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=model.data_processor.transformer_tokenizer,
    data_collator=data_collator,
)

trainer.train()
```

I keep getting the warning below:

"There were missing keys in the checkpoint model loaded: ['model.token_rep_layer.bert_layer.model.embeddings.word_embeddings.weight', 'model.token_rep_layer.bert_layer.model.embeddings.LayerNorm.weight', 'model.token_rep_layer.bert_layer.model.embeddings.LayerNorm.bias', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.query_proj.weight', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.query_proj.bias', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.key_proj.weight', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.key_proj.bias', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.value_proj.weight', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.self.value_proj.bias', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.output.dense.weight', 'model.token_rep_layer.bert_layer.model.encoder.layer.0.attention.output.dense.bias', ....

I want to know where I am going wrong. Also, I notice that a config.json file doesn't exist; instead there is a gliner_config.json. Does that affect the fine-tuning? GLiNER version: 0.2.17

AnjaliSetiya · Mar 25 '25

For now, my workaround is to set load_best_model_at_end=False; at least that works. It does mean you end up with the model from the last epoch rather than necessarily the best one, though you can still load the best checkpoint manually after training finishes. It would be nice if the authors could shed some light on the issue.
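A minimal sketch of that manual reload, assuming checkpoints land under models/checkpoint-* and that a checkpoint directory can be loaded with GLiNER.from_pretrained (the paths and the choice of eval_loss as the metric are illustrative, not the library's built-in behavior):

```python
# Sketch of the manual "load best model" workaround (illustrative paths).
# Assumes training ran with load_best_model_at_end=False and that more than
# one checkpoint survived on disk.
import glob
import json
import os

from gliner import GLiNER

best_dir, best_loss = None, float("inf")
for ckpt in glob.glob("models/checkpoint-*"):
    state_file = os.path.join(ckpt, "trainer_state.json")
    if not os.path.exists(state_file):
        continue
    with open(state_file) as f:
        log_history = json.load(f)["log_history"]
    # take the most recent eval_loss logged up to this checkpoint, if any
    eval_losses = [e["eval_loss"] for e in log_history if "eval_loss" in e]
    if eval_losses and eval_losses[-1] < best_loss:
        best_loss, best_dir = eval_losses[-1], ckpt

if best_dir is not None:
    # assumes the checkpoint folder is loadable as a GLiNER model
    best_model = GLiNER.from_pretrained(best_dir)
```

Note that with save_total_limit=1 only the latest checkpoint survives, so you'd need to raise that limit for there to be anything to choose from.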

saist1993 · May 02 '25