Bert-Multi-Label-Text-Classification

RuntimeError: The expanded size of the tensor (9068) must match the existing size (453400) at non-singleton dimension 1. Target sizes: [5, 9068]. Tensor sizes: [1, 453400]

Open aby005 opened this issue 5 years ago • 1 comment

    Traceback (most recent call last):
      File "run_bert.py", line 222, in <module>
        main()
      File "run_bert.py", line 215, in main
        run_train(args)
      File "run_bert.py", line 120, in run_train
        trainer.train(train_data=train_dataloader, valid_data=valid_dataloader)
      File "C:\MyDrive\PythonWs\NextGenWs\harshitha\Bert\IonePatient-our model\pybert\train\trainer.py", line 153, in train
        train_log = self.train_epoch(train_data)
      File "C:\MyDrive\PythonWs\NextGenWs\harshitha\Bert\IonePatient-our model\pybert\train\trainer.py", line 132, in train_epoch
        metric(logits=self.outputs, target=self.targets)
      File "C:\MyDrive\PythonWs\NextGenWs\harshitha\Bert\IonePatient-our model\pybert\train\metrics.py", line 47, in __call__
        correct = pred.eq(target.view(1, -1).expand_as(pred))
    RuntimeError: The expanded size of the tensor (9068) must match the existing size (453400) at non-singleton dimension 1. Target sizes: [5, 9068]. Tensor sizes: [1, 453400]

I tried this with my own data and training itself ran fine. But when I added the accuracy metric during training, it threw this error. I passed the metric as Accuracy(topK=5) to the trainer object in run_bert.py. Any idea how to fix this?
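The shape mismatch can be reproduced outside the repo. The metric flattens `target` with `view(1, -1)`, which only works when `target` is a 1-D tensor of class indices; a multi-label target of shape `[batch, num_labels]` flattens to `[1, batch * num_labels]` and cannot be expanded to `pred`'s shape. A minimal sketch with hypothetical, scaled-down shapes (4 samples, 10 labels instead of 9068 samples, 50 labels):

```python
import torch

batch, num_labels, topK = 4, 10, 5
logits = torch.randn(batch, num_labels)
target = torch.randint(0, 2, (batch, num_labels))  # multi-label 0/1 matrix

_, pred = logits.topk(topK, 1, True, True)  # [batch, topK]
pred = pred.t()                             # [topK, batch]
try:
    # view(1, -1) gives [1, batch * num_labels] = [1, 40],
    # which cannot expand to pred's shape [5, 4]
    pred.eq(target.view(1, -1).expand_as(pred))
except RuntimeError as e:
    print(e)  # same error class as in the traceback above
```

With a single-label target of shape `[batch]` the same code runs without error, which suggests the metric was written for single-label classification.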

aby005 avatar Dec 04 '20 12:12 aby005

I ran into the same problem. The error occurs in the code below:

class Accuracy(Metric):
    '''
    Computes accuracy.
    The topK argument selects top-K accuracy.
    Examples:
        >>> metric = Accuracy(topK=5)
        >>> for epoch in range(epochs):
        >>>     metric.reset()
        >>>     for batch in batchs:
        >>>         logits = model()
        >>>         metric(logits, target)
        >>>         print(metric.name(), metric.value())
    '''
    def __init__(self,topK):
        super(Accuracy,self).__init__()
        self.topK = topK
        self.reset()

    def __call__(self, logits, target):
        # Assumes target is a 1-D tensor of class indices ([batch]);
        # a multi-label target of shape [batch, num_labels] flattens to
        # [1, batch * num_labels] here and cannot expand to pred's shape.
        _, pred = logits.topk(self.topK, 1, True, True)
        pred = pred.t()
        correct = pred.eq(target.view(1, -1).expand_as(pred))
        self.correct_k = correct[:self.topK].view(-1).float().sum(0)
        self.total = target.size(0)

    def reset(self):
        self.correct_k = 0
        self.total = 0

    def value(self):
        return float(self.correct_k)  / self.total

    def name(self):
        return 'accuracy'

BarryRun avatar Dec 10 '20 07:12 BarryRun
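For reference, top-K index matching does not apply to multi-label classification, where each sample can have several positive labels. A minimal sketch of a replacement metric (not part of this repo; the class name and threshold default are assumptions) that thresholds the sigmoid of the logits and compares element-wise:

```python
import torch

class MultiLabelAccuracy:
    """Element-wise accuracy for multi-label targets of shape [batch, num_labels]."""
    def __init__(self, thresh=0.5):
        self.thresh = thresh
        self.reset()

    def reset(self):
        self.correct = 0
        self.total = 0

    def __call__(self, logits, target):
        # target holds 0/1 indicators per label
        pred = (torch.sigmoid(logits) > self.thresh).long()
        self.correct += pred.eq(target.long()).sum().item()
        self.total += target.numel()

    def value(self):
        return self.correct / self.total if self.total else 0.0

    def name(self):
        return 'accuracy'
```

This accumulates across batches (the original overwrote `self.correct_k` and `self.total` on every call, so `value()` only reflected the last batch). Other common multi-label metrics, such as F1 or Hamming loss, can be swapped in the same way.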