Jona Ballé

Results: 12 comments by Jona Ballé

The commit in question didn't remove functionality from the `TensorBoard` callback, but from `Model`. The problem with these summaries was that they would be written unconditionally after each training step, regardless...

Another thing to note is that these logs were *not* actual batch-level summaries, which was one reason to remove them: the values are accumulated from the beginning of the epoch...
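To make the accumulation point concrete, here is a minimal plain-Python sketch (not the actual Keras implementation): a running-mean metric whose `result()` at any step reflects every batch since the epoch started, so logging `result()` per step does not yield batch-level values.

```python
class MeanMetric:
    """Running mean, reset at the start of each epoch (mimics a Keras Mean metric)."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count


batch_losses = [4.0, 2.0, 0.0]  # made-up per-batch values
metric = MeanMetric()
logged = []
for loss in batch_losses:
    metric.update_state(loss)
    logged.append(metric.result())  # what a per-step summary of the metric would record

print(logged)        # epoch-to-date means: [4.0, 3.0, 2.0]
print(batch_losses)  # the true batch-level values: [4.0, 2.0, 0.0]
```

The logged stream drifts toward the epoch mean instead of tracking individual batches, which is exactly why these summaries were misleading as "batch" values.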

It looks like the code you linked to could act as a replacement for the example I gave above, since it takes care of entering a writer context when training...

We could consider making the default `Model.train_step` implementation call `tf.summary.scalar` on every value that is added to a metric. Then the batch summaries would be correct.
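A rough sketch of that idea, again in plain Python rather than real Keras code (`summary_scalar` stands in for `tf.summary.scalar`): the raw value is forwarded to the summary hook at the moment it enters the metric, so the logged stream is genuinely batch-level while the metric still reports the epoch mean.

```python
summaries = []


def summary_scalar(name, value, step):
    # Stand-in for tf.summary.scalar(name, value, step=step).
    summaries.append((name, value, step))


class Mean:
    def __init__(self, name):
        self.name, self.total, self.count = name, 0.0, 0

    def update_state(self, value, step):
        # The proposed hook: emit the raw batch value as it is added to the metric.
        summary_scalar(self.name, value, step)
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count


loss_metric = Mean("loss")
for step, batch_loss in enumerate([4.0, 2.0, 0.0]):
    loss_metric.update_state(batch_loss, step)

print([v for _, v, _ in summaries])  # [4.0, 2.0, 0.0]: true batch values
print(loss_metric.result())          # 2.0: epoch mean, unchanged
```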

OK, I agree: it isn't a great idea to scatter this functionality across `Model` and the callback. The implementation is opaque and not easy to understand, and that was what...