samuelbroscheit
Can you review again?
I addressed your reviews. I ended up doing a small revision of EvaluationJob: EvaluationJob.run() is now implemented in EvaluationJob itself and does the standard stuff that has to be done for every...
> Your last changes raise the question of whether all training jobs handle indexes correctly when used in an "eval only" setting. E.g., for KvsAll, what are the labels being...
> For the other jobs it's clearer, I guess, but I feel that loss can turn out to be misleading.
>
> I think we should not make the training_loss...
> For negative sampling, the filtering split (or splits?) can already be specified, I think.

Yes, but at the moment it is automagically the split defined as `train.split` if not...
> Why is it clear for KvsAll? If done like you state, the losses are not comparable between train and valid (valid loss will be pretty much like 1vsAll because...
> > > For negative sampling, the filtering split (or splits?) can already be specified, I think.
> > >
> > > Yes but at the moment it is...
How would you win for labels [0,0,1] with prediction [1,1,1] vs prediction [0,0,1] with BCE?
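To make the comparison concrete, here is a quick check of both predictions under plain BCE (the clipping `eps` is my addition to keep `log(0)` finite; the predictions are treated as probabilities):

```python
import math

def bce(labels, preds, eps=1e-7):
    """Mean binary cross-entropy; preds are probabilities, clipped to avoid log(0)."""
    total = 0.0
    for y, p in zip(labels, preds):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

labels = [0, 0, 1]
print(bce(labels, [1, 1, 1]))  # large loss: the two false positives are heavily penalized
print(bce(labels, [0, 0, 1]))  # near-zero loss
```

So under BCE the two predictions are not tied: the all-ones prediction pays for the two negative labels it gets wrong.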
I think usually you want the same loss that is used during training.
Or just get the collate func from the Trainer and join it with the eval collate func? Shouldn't be that difficult.

```
def get_collate_func(trainer_collate_func):
    def my_collate_func(batch):
        my_result = doing stuff...
```
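The idea above could be sketched like this (hypothetical names; this assumes both collate funcs return dicts of batch fields, which may not match the actual Trainer API):

```python
def get_collate_func(trainer_collate_func, eval_collate_func):
    """Wrap the trainer's collate func: run it first, then merge in the
    extra fields produced by the eval-specific collate func."""
    def my_collate_func(batch):
        result = trainer_collate_func(batch)      # standard training fields
        result.update(eval_collate_func(batch))   # add eval-only fields
        return result
    return my_collate_func

# Toy usage with stand-in collate funcs:
train_collate = lambda batch: {"triples": batch}
eval_collate = lambda batch: {"filter_mask": [True] * len(batch)}
collate = get_collate_func(train_collate, eval_collate)
print(collate([1, 2, 3]))  # {'triples': [1, 2, 3], 'filter_mask': [True, True, True]}
```

That way the eval job reuses whatever the Trainer already does to a batch and only layers its own fields on top.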