
Some questions about finetune with negative samples

Open JIBSN opened this issue 2 years ago • 0 comments

Hi guys, I finetuned the model on a training set with 1/3 negative samples. It really helped in differentiating pages with tables from pages with pure text, compared to models trained without negative samples. But I have some questions about the training.

  1. How is the loss calculated for negative samples? I found these two lines, where the bboxes and labels are empty tensors for negative samples:

    bboxes = torch.empty((0, 4), dtype=torch.float32)
    labels = torch.empty((0,), dtype=torch.int64)

  2. How do negative samples influence the result?
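For context on question 1, here is a minimal sketch (illustrative names, not the actual table-transformer code) of how a DETR-style criterion behaves when the targets are empty: with zero ground-truth boxes the Hungarian matcher produces no matches, so every query is supervised toward the "no object" class, and the box regression losses (L1 + GIoU) are computed over zero matched pairs and contribute nothing.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a DETR-style loss on a negative sample
# (an image with no tables). All names here are illustrative.
num_queries = 5
num_classes = 2              # e.g. object classes; index num_classes = "no object"
no_object_class = num_classes

# Empty targets, as built for a negative sample:
bboxes = torch.empty((0, 4), dtype=torch.float32)
labels = torch.empty((0,), dtype=torch.int64)

# Fake predicted class logits, one row per query:
pred_logits = torch.randn(num_queries, num_classes + 1)

# No ground-truth boxes -> the matcher returns no matched pairs,
# so every query's classification target is "no object":
target_classes = torch.full((num_queries,), no_object_class, dtype=torch.int64)
class_loss = F.cross_entropy(pred_logits, target_classes)

# The box losses are averaged over matched pairs, of which there are
# none, so only the classification term supervises this sample:
num_matched = labels.shape[0]  # 0
```

So negative samples still produce a gradient: they push all queries toward predicting "no object", which is plausibly why the finetuned model gets better at rejecting table-free pages.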

I have just entered the field of artificial intelligence and I may not be familiar with this training strategy (mixing labeled samples with unlabeled ones during training). I hope someone who understands it can help explain it to me.

Thanks, chao

JIBSN · Sep 08 '23 02:09