Unexpected SymbolicTensor instead of TensorFlow Probability _TensorCoercible object (mixture layer)
I have a TensorFlow model (Keras Sequential) that ends with a TensorFlow Probability (TFP) mixture layer. My goal is to fit this network with a custom loss function. The unexpected behaviour is the following:
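For reference, here is a minimal sketch of the kind of model I mean (the layer sizes, the input shape, and the kr alias for tf.keras are placeholders, not my actual architecture):

import tensorflow as tf
import tensorflow_probability as tfp

kr = tf.keras

num_components, event_shape = 3, [1]
params_size = tfp.layers.MixtureNormal.params_size(num_components, event_shape)

my_model = kr.Sequential([
    kr.layers.Input(shape=(10,)),
    kr.layers.Dense(32, activation='relu'),
    kr.layers.Dense(params_size),
    # the model ends with a TFP mixture layer, so calling it yields a distribution
    tfp.layers.MixtureNormal(num_components, event_shape),
])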
When I pass a custom loss like this:
def wass_loss(y_true, y_pred):
    print(type(y_pred))
    print(type(y_true))
    # Do multiple 'tf' operations; one of them samples the _TensorCoercible
    return ...
It prints:
<class 'tensorflow_probability.python.layers.internal.distribution_tensor_coercible._TensorCoercible'>
<class 'tensorflow.python.framework.ops.SymbolicTensor'>
Each argument is interpreted correctly and the results are OK/coherent.
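For context, the operations inside the loss are roughly of this kind (a simplified, hypothetical sketch; the real wass_loss is more involved):

import tensorflow as tf

def wass_loss_sketch(y_true, y_pred):
    # y_pred is the mixture distribution (_TensorCoercible), so distribution
    # methods such as sample() are available here
    samples = y_pred.sample(100)                  # shape: [100, batch, ...]
    # crude 1-Wasserstein-style distance between the samples and y_true
    return tf.reduce_mean(tf.abs(samples - y_true))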
Now, when I compile the same network with a different loss function and use the wass_loss above as a metric:
def another_loss(y_true, y_pred):
    return -y_pred.log_prob(y_true)

my_model.compile(optimizer=kr.optimizers.Adam(1e-3),
                 loss=another_loss,
                 metrics=[wass_loss])
my_model.fit(...)
I get:
<class 'tensorflow.python.framework.ops.SymbolicTensor'>
<class 'tensorflow.python.framework.ops.SymbolicTensor'>
This is unexpected: I would expect y_pred to still be the _TensorCoercible from TFP. Yet, when wass_loss runs as a metric (after the new loss, which might be computed first), it is already a SymbolicTensor. Why is this happening? What am I missing?
Thanks a lot!
There's a call to tf.convert_to_tensor that does it; exactly where that call happens varies between library versions (keras vs. tf_keras).
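You can see the coercion in isolation with something like this (a minimal sketch using a DistributionLambda layer rather than your mixture layer; the effect is the same for any distribution-valued TFP layer):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

dist_layer = tfp.layers.DistributionLambda(
    lambda t: tfd.Normal(loc=t, scale=1.0),
    convert_to_tensor_fn=tfd.Distribution.sample)

y_pred = dist_layer(tf.zeros([4, 1]))
print(type(y_pred))                     # ..._TensorCoercible
coerced = tf.convert_to_tensor(y_pred)  # effectively what Keras does to metric inputs
print(type(coerced))                    # a plain Tensor, the distribution methods are gone

In the loss position, y_pred is handed to your function before that conversion, which is why wass_loss sees the distribution there but only a tensor when it runs as a metric.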