BERT-for-RRC-ABSA

Fine-tuned models for AE and ASC

Open vahidsj opened this issue 4 years ago • 1 comment

Hi,

As you mentioned in issue #17, I have used the transformers-version code base to fine-tune the models. Here are the results I got after running `bash script/run_ft.sh`, e.g. for AE_laptop_14 and ASC_laptop_14:

ae_laptop_14:

| Model | f1 |
| --- | --- |
| BERT | 0.7863 |
| BERT_Review | 0.8381 |
| BERT-DK | 0.8302 |
| BERT-XD_Review | 0.8374 |
| BERT-PT | 0.8358 |

asc_laptop_14:

| Model | acc | mf1 | pos_f1 | neg_f1 | neu_f1 |
| --- | --- | --- | --- | --- | --- |
| BERT | 73.9028 | 70.7175 | 0.8672 | 0.6798 | 0.4436 |
| BERT_Review | 78.2445 | 75.1642 | 0.8824 | 0.7473 | 0.5526 |
| BERT-DK | 75.3918 | 71.6495 | 0.8688 | 0.7167 | 0.4880 |
| BERT-XD_Review | 78.7304 | 75.4337 | 0.8860 | 0.7453 | 0.5824 |
| BERT-PT | 77.2100 | 74.2477 | 0.8798 | 0.7283 | 0.5484 |

Now I have two questions. I need the fine-tuned models for the two end tasks (AE & ASC) on two new datasets (Amazon reviews: Cellphone and Laptop):

  • It seems that the script doesn't save any model for the end tasks. Am I right?
  • If so, how can I use the models on my own datasets to predict aspects and their polarity?

Best, Vahid

vahidsj avatar Mar 31 '21 13:03 vahidsj

I think I found the issue. In config.py, remove_model is set to True:

class TrainConfig(Config):
    def __init__(self, 
        max_seq_length=128,
        train_batch_size=32,
        learning_rate=3e-5,
        run=10,
        eval_batch_size=32,
        remove_model=True,  # <-- fine-tuned models are deleted after evaluation
        num_train_epochs=8,
        fp16=True,
        do_lower_case=True,
        adam_epsilon=1e-8,
        fp16_opt_level="O1",
        max_grad_norm=1.0,
        weight_decay=0.0,
        warmup_steps=0,
        no_cuda=False,
        n_gpu=1,
        device=0,
        **kwargs
    ):
        ...
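
If that default is indeed what deletes the checkpoints, the simplest fix is to fine-tune with remove_model=False. A minimal sketch of the idea, using a simplified stand-in for the base Config class (the real class in this repo may differ; only the flag matters here):

```python
# Simplified stand-in for the repo's config.py, showing the effect of the flag.
class Config:
    def __init__(self, **kwargs):
        # Store any extra training options as attributes.
        for key, value in kwargs.items():
            setattr(self, key, value)

class TrainConfig(Config):
    def __init__(self, remove_model=True, **kwargs):
        super().__init__(**kwargs)
        # When True, fine-tuned checkpoints are deleted after evaluation.
        self.remove_model = remove_model

# Override the default so checkpoints survive fine-tuning.
cfg = TrainConfig(remove_model=False)
```

With this override, whatever the training script writes to its output directory should stay on disk after evaluation.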

vahidsj avatar Apr 01 '21 20:04 vahidsj
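
For anyone with the same question: once a fine-tuned ASC checkpoint survives on disk, it can be loaded for polarity prediction with the Hugging Face transformers API. A minimal sketch — the checkpoint path, the AutoModelForSequenceClassification head, and the label order are all assumptions here; the repo's actual model classes and label mapping may differ:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed label order; check the repo's data loader for the real mapping.
LABELS = {0: "positive", 1: "negative", 2: "neutral"}

def predict_polarity(ckpt_dir, aspect, sentence):
    """Classify the polarity of an aspect term in a review sentence."""
    tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt_dir)
    model.eval()
    # ASC frames the input as a sentence pair: (aspect term, review sentence).
    inputs = tokenizer(aspect, sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[logits.argmax(dim=-1).item()]

# Hypothetical checkpoint directory left by a run with remove_model=False:
# predict_polarity("run/asc_laptop_14/1", "battery life",
#                  "The battery life is great but the screen is dim.")
```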