
[Question] Why are model.evaluate and model.fit showing two different total_loss values for the same validation data?

Open AzizIlyosov opened this issue 4 years ago • 0 comments

Following the two-tower model tutorial, I built my model. Then I trained it for one epoch and evaluated it.
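
For reference, my Model class is a subclass of tfrs.models.Model built along the lines of the TFRS retrieval tutorial. Below is only a simplified sketch of its structure, not my exact code: the Hashing-based towers, the candidates dataset, and the feature names user_id / item_id are placeholders.

import tensorflow as tf
import tensorflow_recommenders as tfrs

class Model(tfrs.models.Model):
    # Simplified two-tower retrieval model in the spirit of the TFRS tutorial.
    def __init__(self, layer_sizes, use_context=False):
        super().__init__()
        # Query and candidate towers: a hashed-id embedding followed by
        # Dense layers sized by layer_sizes (placeholders for the real towers).
        self.query_model = tf.keras.Sequential(
            [tf.keras.layers.Hashing(num_bins=100_000),
             tf.keras.layers.Embedding(100_000, 32)]
            + [tf.keras.layers.Dense(size) for size in layer_sizes])
        self.candidate_model = tf.keras.Sequential(
            [tf.keras.layers.Hashing(num_bins=100_000),
             tf.keras.layers.Embedding(100_000, 32)]
            + [tf.keras.layers.Dense(size) for size in layer_sizes])
        # candidates is the dataset of raw candidate ids, as in the tutorial.
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(
                candidates=candidates.batch(4096).map(self.candidate_model)))

    def compute_loss(self, features, training=False):
        query_embeddings = self.query_model(features["user_id"])
        candidate_embeddings = self.candidate_model(features["item_id"])
        return self.task(query_embeddings, candidate_embeddings)

The training and evaluation setup: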


cached_train = train.shuffle(100_000).batch(4096)
cached_test = test.batch(4096).cache()

model = Model(layer_sizes=[32], use_context=False)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.001))
history = model.fit(cached_train, validation_data=cached_test, validation_freq=1,
                    callbacks=[tensorboard_callback],
                    epochs=1)

This run produced the following log (note that val_total_loss is lower than the training total_loss):

684/684 [==============================] - 195s 284ms/step - factorized_top_k/top_1_categorical_accuracy: 1.0150e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0485 - factorized_top_k/top_10_categorical_accuracy: 0.0708 - factorized_top_k/top_50_categorical_accuracy: 0.1164 - factorized_top_k/top_100_categorical_accuracy: 0.1404 - loss: 33958.1878 - regularization_loss: 0.0000e+00 - total_loss: 33958.1878 - val_factorized_top_k/top_1_categorical_accuracy: 0.0078 - val_factorized_top_k/top_5_categorical_accuracy: 0.0363 - val_factorized_top_k/top_10_categorical_accuracy: 0.0595 - val_factorized_top_k/top_50_categorical_accuracy: 0.1008 - val_factorized_top_k/top_100_categorical_accuracy: 0.1185 - val_loss: 25582.6953 - val_regularization_loss: 0.0000e+00 - val_total_loss: 25582.6953

Then I evaluated the model on the training and test data separately:

train_accuracy = model.evaluate(
    cached_train, return_dict=True)

684/684 [==============================] - 160s 232ms/step - factorized_top_k/top_1_categorical_accuracy: 0.0264 - factorized_top_k/top_5_categorical_accuracy: 0.1087 - factorized_top_k/top_10_categorical_accuracy: 0.1592 - factorized_top_k/top_50_categorical_accuracy: 0.2452 - factorized_top_k/top_100_categorical_accuracy: 0.2802 - loss: 33922.1529 - regularization_loss: 0.0000e+00 - total_loss: 33922.1529


test_accuracy = model.evaluate(
    cached_test, return_dict=True) 

171/171 [==============================] - 34s 197ms/step - factorized_top_k/top_1_categorical_accuracy: 0.0078 - factorized_top_k/top_5_categorical_accuracy: 0.0363 - factorized_top_k/top_10_categorical_accuracy: 0.0595 - factorized_top_k/top_50_categorical_accuracy: 0.1008 - factorized_top_k/top_100_categorical_accuracy: 0.1185 - loss: 33966.4893 - regularization_loss: 0.0000e+00 - total_loss: 33966.4893

As you can see, the total_loss reported by model.evaluate(cached_test) (33966.4893) differs from the val_total_loss reported by model.fit (25582.6953), even though the factorized_top_k accuracy metrics match exactly.
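
For what it's worth, here is a rough check I could run to see which number is reproducible outside of both fit() and evaluate(). This is only a sketch: it assumes my Model implements compute_loss(features, training=False) as in the tutorial and that the value it returns is the total_loss being logged.

import numpy as np

# Recompute the retrieval loss on the cached test set one batch at a time
# and aggregate the per-batch values manually for comparison with the
# numbers logged by model.fit and model.evaluate.
batch_losses = []
for batch in cached_test:
    loss = model.compute_loss(batch, training=False)
    batch_losses.append(float(loss))

print("mean per-batch total_loss:", np.mean(batch_losses))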

Can somebody explain this?

AzizIlyosov · Dec 21 '21