Add the logging of dict metrics
🚀 Feature
The request is to add support for logging `Mapping` metrics in the logging framework.
The `ignite.metrics.Metric` class supports `Mapping` metrics, as we can see below. However, the `BaseOutputHandler` does not support dictionary metrics and warns about them.
https://github.com/pytorch/ignite/blob/edd5025e7d597a6e5fe45c5173487c37d3f9d1df/ignite/metrics/metric.py#L488-L494
One could simply ask the logger to report the metric names produced by the `Metric` directly, as those are stored in the metric state no matter which name was used for the metric. But I feel this breaks the kind of "namespaces" that loggers seem to use.
I would find it practical if the logger could handle mappings and log their content as sub-values of the metric itself.
This could be achieved by editing `BaseOutputHandler`, which would fix the issue in every existing logger at once. There should not be any side effects: the logger already warned users about mappings, so I imagine very few users have a mapping metric that would suddenly start appearing in their logs after upgrading to a version with this feature.
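As a rough illustration of the idea (this is a minimal sketch, not ignite's actual code — `flatten_metrics` and its `sep` parameter are hypothetical names), the handler could flatten mapping-valued metrics into `"parent/child"` keys before emitting scalars:

```python
# Hypothetical sketch of the flattening a dict-aware BaseOutputHandler
# could perform; NOT the actual ignite implementation.
from collections.abc import Mapping


def flatten_metrics(metrics, sep="/"):
    """Expand nested mappings into flat '<name><sep><key>' entries."""
    flat = {}
    for name, value in metrics.items():
        if isinstance(value, Mapping):
            # Recurse so arbitrarily nested dicts are handled too.
            for sub_name, sub_value in flatten_metrics(value, sep).items():
                flat[f"{name}{sep}{sub_name}"] = sub_value
        else:
            flat[name] = value
    return flat


print(flatten_metrics({"scalar_value": 123, "dict_value": {"a": 111, "b": 222}}))
# → {'scalar_value': 123, 'dict_value/a': 111, 'dict_value/b': 222}
```

The resulting flat dict could then go through the existing scalar-logging path unchanged, which is why this would work for all loggers built on `BaseOutputHandler`.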
@nowtryz thanks for the feature request. Can you please provide a code snippet with an example of what you would like to have?
> One could simply ask the logger to report the metric names produced by the `Metric` directly, as those are stored in the metric state no matter which name was used for the metric.
There is a keyword `"all"` in output handlers, e.g. TensorBoard: https://pytorch.org/ignite/generated/ignite.handlers.tensorboard_logger.html#ignite.handlers.tensorboard_logger.OutputHandler :

> metric_names (Optional[List[str]]) – list of metric names to plot or a string “all” to plot all available metrics.
> I would find it practical if the logger could handle mappings and log their content as sub-values of the metric itself.
Yes, this makes sense.
So, if I understand correctly, you would like a use case like this?
```python
evaluator.state.metrics = {
    "scalar_value": 123,
    "dict_value": {
        "a": 111,
        "b": 222,
    }
}

handler = OutputHandler(
    tag="validation",
    metric_names="all",
)

handler(evaluator, tb_logger, event_name=Events.EPOCH_COMPLETED)
# Behind the scenes it would call
# tb_logger.writer.add_scalar("scalar_value", 123, global_step)
# tb_logger.writer.add_scalar("dict_value/a", 111, global_step)
# tb_logger.writer.add_scalar("dict_value/b", 222, global_step)
```
Hi @vfdev-5,
Yes exactly, the code snippet you provided is a good example. Another example would be the following:
```python
evaluator.state.metrics = ...  # kept unchanged

handler = OutputHandler(
    tag="validation",
    metric_names=["dict_value"],
)

handler(evaluator, tb_logger, event_name=Events.EPOCH_COMPLETED)
# Behind the scenes it would call
# tb_logger.writer.add_scalar("dict_value/a", 111, global_step)
# tb_logger.writer.add_scalar("dict_value/b", 222, global_step)
```
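To make the expected behaviour concrete, here is an illustrative stand-alone sketch (the `RecordingWriter` stub and `log_metric` helper are invented for this example, not part of ignite): it records the `add_scalar` calls a dict-aware handler would make for a single mapping metric.

```python
# Illustrative only: a stub "writer" capturing the add_scalar calls a
# dict-aware OutputHandler could emit for a mapping-valued metric.
from collections.abc import Mapping


class RecordingWriter:
    def __init__(self):
        self.calls = []

    def add_scalar(self, tag, value, global_step):
        self.calls.append((tag, value, global_step))


def log_metric(writer, name, value, global_step):
    """Emit one scalar per leaf, using '/'-joined keys for mappings."""
    if isinstance(value, Mapping):
        for key, sub_value in value.items():
            log_metric(writer, f"{name}/{key}", sub_value, global_step)
    else:
        writer.add_scalar(name, value, global_step)


writer = RecordingWriter()
metrics = {"scalar_value": 123, "dict_value": {"a": 111, "b": 222}}
log_metric(writer, "dict_value", metrics["dict_value"], global_step=5)
print(writer.calls)
# → [('dict_value/a', 111, 5), ('dict_value/b', 222, 5)]
```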
I believe structlog can effectively handle mapped logging in this case. Instead of manually iterating over the metrics, structlog allows logging mappings directly. For example:
```python
logger.info("Metrics logged", event=event_name, metrics=engine.state.metrics)
```
Since structlog natively supports structured logging, this approach ensures dictionary-based metrics are logged in a clean and readable format without additional processing.
If I'm not wrong, feel free to assign me this issue—I’d be happy to work on it! 🚀
@Spiritedswordsman structlog is not a Python built-in module; it is an additional dependency to install. We do not add new dependencies to ignite without a strong reason.
Besides, the original issue is about logging dicts with TensorBoard-like loggers (or experiment tracking systems).
I would also like to see this feature implemented +1 :)