
Not able to reproduce the scores using provided checkpoint on NLG tasks

Open ylli0218 opened this issue 2 years ago • 8 comments

Hi, I was able to reproduce the GLUE benchmark results but not the NLG task.

For NLG tasks, I downloaded the checkpoint for GPT2-M and followed steps 2, 3, and 4 in the instructions here:

https://github.com/microsoft/LoRA/tree/main/examples/NLG

However, the scores were either extremely low or there were errors during evaluation.

For the E2E task I got:

SCORES: BLEU: 0.0000 NIST: 0.0196 METEOR: 0.0034 ROUGE_L: 0.0072 CIDEr: 0.0000

For WebNLG and DART, I see the error: "Error: test and reference not same length. ERROR ON COMPUTING METEOR. MAKE SURE YOU HAVE JAVA INSTALLED GLOBALLY ON YOUR MACHINE." I do have Java installed. The other scores were also low: BLEU: 0, BLEU NLTK: 0, METEOR: -1, chrF++: 0.11, TER: 1.96, BERT-SCORE P: 0, BERT-SCORE R: 0, BERT-SCORE F1: 0, BLEURT: -1.

Any suggestions? Thank you.

ylli0218 commented on Oct 12, 2023

Hello, I have the same problem. Have you managed to solve it?

1181000705 commented on Oct 21, 2023

@1181000705 Yes, partially. The low NLG scores occur because the original script somehow does not load the pretrained backbone model. Make sure you load the backbone model weights correctly together with the LoRA weights, and you will get the published scores.
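For reference, this is roughly what loading both sets of weights looks like (a sketch only: the paths are placeholders, and load_weight is the helper in examples/NLG/src/model.py that wraps self.transformer.load_state_dict):

    import torch

    # Sketch, not the repo's exact code: paths are placeholders and `model` is an
    # already-constructed GPT2LMModel from examples/NLG/src/model.py.
    backbone_sd = torch.load('path/to/gpt2-medium-pytorch_model.bin', map_location='cpu')
    lora_sd = torch.load('path/to/downloaded_lora_checkpoint.pt', map_location='cpu')
    lora_sd = lora_sd.get('model_state_dict', lora_sd)  # unwrap if the checkpoint is wrapped

    model.load_weight(backbone_sd)                # 1. pretrained backbone weights
    model.load_state_dict(lora_sd, strict=False)  # 2. LoRA weights on top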

As for the error "Error: test and reference not same length. ERROR ON COMPUTING METEOR. MAKE SURE YOU HAVE JAVA INSTALLED GLOBALLY ON YOUR MACHINE.", I have no idea yet.

ylli0218 commented on Oct 22, 2023

The low NLG scores are because somehow the original script did not load the pretrained backbone models.

I see. It would be great if you could make a PR to fix that!

ERROR ON COMPUTING METEOR. MAKE SURE YOU HAVE JAVA INSTALLED GLOBALLY ON YOUR MACHINE.

Seems like a dependency issue.
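One quick thing to check, assuming the METEOR step simply shells out to a java binary on the PATH (a sketch, not the repo's actual check):

    import shutil
    import subprocess

    # Sketch: verify that the environment actually running the eval script can see Java.
    java_path = shutil.which("java")
    print("java found at:", java_path)
    if java_path:
        subprocess.run(["java", "-version"], check=True)  # prints the JVM version to stderr
    else:
        print("java is not on PATH for this environment")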

edwardjhu commented on Oct 29, 2023

The low NLG scores are because somehow the original script did not load the pretrained backbone models.

The error is here, in the fine-tuning script examples/NLG/src/gpt2_ft.py, line 256:

    if args.rank == 0:
        model_path = os.path.join(args.work_dir, f'model.{train_step}.pt')
        print('saving checkpoint', model_path)
        torch.save({'model_state_dict': model.state_dict()}, model_path)
    distributed_sync(args)
    return train_step

The saved checkpoint contains model.state_dict(), in which the backbone GPT-2 parameter keys start with a "transformer." prefix.
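You can confirm this by printing a few keys of the saved checkpoint (a sketch; the file name stands in for whatever model.{train_step}.pt gpt2_ft.py wrote):

    import torch

    # Sketch: inspect a checkpoint produced by examples/NLG/src/gpt2_ft.py.
    ckpt = torch.load('path/to/model.XXXX.pt', map_location='cpu')  # placeholder file name
    print(list(ckpt['model_state_dict'].keys())[:5])
    # the keys come out as 'transformer....', which the loading code never strips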

However, in examples/NLG/src/model.py, line 448:

self.transformer.load_state_dict(state_dict, strict=False)

This transformer.load_state_dict call expects parameter names WITHOUT the "transformer." prefix, so when the fine-tuned model is loaded from the checkpoint, only the LoRA weights are actually loaded.

I suggest slightly modifying the weight-loading function like this:

        for key in state_dict_tmp:
            new_key = None
            # rename the .g/.b/.w parameter suffixes to the PyTorch .weight/.bias names
            if key.endswith(".g"):
                new_key = key[:-2] + ".weight"
            elif key.endswith(".b"):
                new_key = key[:-2] + ".bias"
            elif key.endswith(".w"):
                new_key = key[:-2] + ".weight"

            # strip the "module.transformer." prefix (e.g. from DataParallel-wrapped checkpoints)
            if key.startswith("module.transformer."):
                new_key = key[len("module.transformer."):]

            if key.startswith("transformer."):             # add these 2 lines to also strip the plain "transformer." prefix
                new_key = key[len("transformer."):]

            if new_key:
                old_keys.append(key)
                new_keys.append(new_key)
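Alternatively, as a workaround that does not touch model.py, the prefix can be stripped on the caller's side before handing the state dict to the loading function (again only a sketch, with placeholder paths):

    import torch

    # Sketch: remove the "transformer." prefix from a gpt2_ft.py checkpoint so that
    # self.transformer.load_state_dict sees the key names it expects.
    ckpt = torch.load('path/to/model.XXXX.pt', map_location='cpu')  # placeholder file name
    fixed_sd = {
        (k[len('transformer.'):] if k.startswith('transformer.') else k): v
        for k, v in ckpt['model_state_dict'].items()
    }
    model.load_weight(fixed_sd)  # `model` is an already-constructed GPT2LMModel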

EdwardIX commented on Dec 20, 2023

ERROR ON COMPUTING METEOR. MAKE SURE YOU HAVE JAVA INSTALLED GLOBALLY ON YOUR MACHINE.

Same error with WebNLG. How can I fix it?

RayCyder commented on May 23, 2024