[Question] How to properly visualize the model structure and save the model?
I tried tf.keras.utils.plot_model, but it does not seem to work; it only prints the model name. Is that because we did not use the Functional API here?
Also, for saving the model, model.save() works, but the weights and summary cannot be accessed later, even if I call loaded_model(inputs) first.
I am also a little confused about the Model vs. Layer class. All the tutorials here use the Model class for both the query and candidate towers, but https://www.tensorflow.org/guide/keras/custom_layers_and_models#the_model_class says that if you do not need to call fit (which is the case for the query and candidate towers), you should consider using the Layer class.
I have the same question. I couldn't find TFRS examples that show how to save and load a serialized model. I keep running into problems where the input pipeline graph cannot be serialized. I'm not sure how practical this is if we cannot port a TFRS model from one environment to another.
+1 here, this seems to be quite a frequent issue.
@maciejkula could you please help out here? I think a guide would save a lot of work :)
I found that if I implement the get_config() method, I am able to serialize the model: https://www.tensorflow.org/guide/keras/custom_layers_and_models#you_can_optionally_enable_serialization_on_your_layers. +1 to having a more explicit/dedicated guide, though.
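For reference, a minimal sketch of that approach, modeled on the linked Keras guide; the QueryTower name and its constructor arguments are just illustrative placeholders, not TFRS API:

import tensorflow as tf

class QueryTower(tf.keras.layers.Layer):
    def __init__(self, vocab_size=10_000, embedding_dim=32, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.embedding_dim = embedding_dim
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)

    def call(self, inputs):
        return self.embedding(inputs)

    def get_config(self):
        # Return the constructor arguments so Keras can rebuild the layer
        # when the enclosing model is serialized and reloaded.
        config = super().get_config()
        config.update({
            "vocab_size": self.vocab_size,
            "embedding_dim": self.embedding_dim,
        })
        return config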
@chengyineng38 thanks! I will look into it. I ended up saving checkpoints (save_weights) and calling load_weights later. The TFRS model seems to be a "custom"/sub-classed model, and what I was referring to is how to save a sub-classed model, which seems aligned with what you posted.
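Roughly, that weights-only workflow looks like this; build_model() is a hypothetical helper that reconstructs the same architecture, and the checkpoint path is just an example:

# Save only the variables after training (TensorFlow checkpoint format).
model.save_weights("./checkpoints/my_model")

# Later / elsewhere: rebuild the same architecture, then restore the variables.
restored_model = build_model()  # hypothetical constructor for the same sub-classed model
restored_model.load_weights("./checkpoints/my_model")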
+1 for an explicit guide
There are two ways of saving model state in TensorFlow:
- Using checkpoints for saving model variables during training.
- Using tf.saved_model.save/load to store and load models for serving.
Here are the Keras docs for checkpointing (1). For (2), you can call tf.saved_model.save on your trained Model instance; a short sketch of both is below.
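A minimal sketch of both options, assuming model is an already-trained tfrs.Model subclass and train_ds is its training dataset; the paths are placeholders:

import tensorflow as tf

# (1) Checkpointing during training via the Keras callback.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="./checkpoints/ckpt-{epoch}",
    save_weights_only=True,
)
model.fit(train_ds, epochs=3, callbacks=[checkpoint_cb])

# (2) SavedModel export of the trained model for serving.
tf.saved_model.save(model, "./export/my_model")
reloaded = tf.saved_model.load("./export/my_model")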
Thanks! I have not explored this for a while and kept using save_weights, but I remember struggling with saved_model last time (a missing signature or something). It feels like there are some extra steps needed to save a custom model.
Below is how I save & load the model/index:

import numpy as np
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Wrap the trained query tower in a ScaNN layer for approximate retrieval.
scann = tfrs.layers.factorized_top_k.ScaNN(
    model.query_model,
    num_reordering_candidates=100
)

# Build the index from the candidate dataset:
# each element is (identifier, candidate embedding).
scann.index_from_dataset(
    products.batch(128).map(
        lambda x: (
            x["product_token"] + ": " + x["product_name"],  # index identifier
            model.product_model({                           # candidate embeddings
                "product_name": x["product_name"],
                "product_description": x["product_description"]
            })
        )
    )
)

# Export the index; the ScaNN ops must be whitelisted explicitly.
path = "./your_model_name"
tf.saved_model.save(
    scann,
    path,
    options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"])
)

# Reload and query the saved index.
loaded = tf.saved_model.load(path)
_, titles = loaded({
    "query_text": np.array(["hello world"])
})
for title in titles[0]:
    print(title)
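If the loaded model complains about a missing serving signature (as mentioned above), one option is to export an explicit signature. A sketch, assuming the same scann index as above and a query model that accepts a dict with a string query_text tensor:

# Trace a concrete serving function with a fixed input signature.
@tf.function(input_signature=[tf.TensorSpec(shape=(None,), dtype=tf.string)])
def serve(query_text):
    scores, identifiers = scann({"query_text": query_text})
    return {"scores": scores, "identifiers": identifiers}

tf.saved_model.save(
    scann,
    path,
    signatures={"serving_default": serve},
    options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"])
)

# The signature can then be called on the reloaded model:
loaded = tf.saved_model.load(path)
result = loaded.signatures["serving_default"](query_text=tf.constant(["hello world"]))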