DAG4MIA
Unexpected behavior of the consistency loss
During training, the consistency loss keeps increasing while the other losses keep decreasing. Is this normal? Is this the expected behavior?
# Ramp-up weight based on the current epoch (iter_num // len(loader_train_s))
consistency_weight = get_current_consistency_weight(iter_num // len(loader_train_s), max_epoch)
# Per-pixel distance between the student predictions on the unlabeled part of
# the batch and the EMA teacher output; shape (batch, 3, 256, 256)
consistency_dist = consistency_criterion(predout_t[train_params['labeled_bs']:], ema_output)
consistency_dist = torch.mean(consistency_dist)
consistency_loss = consistency_dist * consistency_weight
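For context: in Mean Teacher-style training, the consistency weight is typically ramped up over the course of training, so the weighted consistency loss can grow even when the raw student-teacher distance stays flat. Below is a minimal sketch of the common sigmoid ramp-up schedule; the exact definition of get_current_consistency_weight in this repo may differ, and the final weight value (consistency=0.1 here) is an assumed hyperparameter:

import numpy as np

def sigmoid_rampup(current, rampup_length):
    # Sigmoid ramp-up exp(-5 * (1 - t)^2), as used in the original
    # Mean Teacher codebase (assumed here, not verified against this repo)
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_current_consistency_weight(epoch, max_epoch, consistency=0.1):
    # Weight grows monotonically from ~0 toward `consistency` by max_epoch,
    # so consistency_dist * consistency_weight can rise across training
    # even if consistency_dist itself is stable
    return consistency * sigmoid_rampup(epoch, max_epoch)

If the weight follows a schedule like this, an increasing consistency_loss during ramp-up can be expected; it would be more concerning if the unweighted consistency_dist itself kept growing.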
I've encountered the same issue where the consistency loss increases while other losses decrease during training. Could anyone provide some insights or suggestions on this?