training_extensions
`f1_score = self.model.image_metrics.OptimalF1.compute().item()` returns a warning message.
When we use `f1_score = self.model.image_metrics.OptimalF1.compute().item()` to compute the performance metrics, we get the following warning message when running the tests via `python -m pytest tests` in the `external/anomaly` directory.
To reproduce:
- Install `anomalib` from the GitHub repo.
- `cd external/anomaly`
- `python3 -m pytest tests`
```
=================================================== warnings summary ===================================================
../../../../.pyenv/versions/anomalib/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:69
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:69: DeprecationWarning: `pytorch_lightning.metrics.*` module has been renamed to `torchmetrics.*` and split off to its own package (https://github.com/PyTorchLightning/metrics) since v1.3 and will be removed in v1.5
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-stfpm]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-stfpm]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-stfpm]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-stfpm]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-padim]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-stfpm]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-padim]
tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-stfpm]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric OptimalF1 was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-stfpm]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric PrecisionRecallCurve was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
    warnings.warn(*args, **kwargs)

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_cancel_training[anomaly_classification-stfpm]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/_pytest/threadexception.py:75: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-3
  Traceback (most recent call last):
    File "/home/sakcay/.pyenv/versions/3.8.12/lib/python3.8/threading.py", line 932, in _bootstrap_inner
      self.run()
    File "/home/sakcay/.pyenv/versions/3.8.12/lib/python3.8/threading.py", line 870, in run
      self._target(*self._args, **self._kwargs)
    File "/home/sakcay/projects/training_extensions/external/anomaly/tests/helpers/train.py", line 149, in train
      raise exception
    File "/home/sakcay/projects/training_extensions/external/anomaly/tests/helpers/train.py", line 141, in train
      self.base_task.train(
    File "/home/sakcay/projects/training_extensions/external/anomaly/anomaly_classification/task.py", line 140, in train
      self.save_model(output_model)
    File "/home/sakcay/projects/training_extensions/external/anomaly/anomaly_classification/task.py", line 159, in save_model
      f1_score = self.model.image_metrics.OptimalF1.compute().item()
    File "/home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/metric.py", line 372, in wrapped_func
      self._computed = compute(*args, **kwargs)
    File "/home/sakcay/projects/anomalib/anomalib/core/metrics/optimal_f1.py", line 38, in compute
      precision, recall, thresholds = self.precision_recall_curve.compute()
    File "/home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/metric.py", line 372, in wrapped_func
      self._computed = compute(*args, **kwargs)
    File "/home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/classification/precision_recall_curve.py", line 145, in compute
      preds = dim_zero_cat(self.preds)
    File "/home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/data.py", line 29, in dim_zero_cat
      raise ValueError("No samples to concatenate")
  ValueError: No samples to concatenate

  warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))

tests/test_ote_anomaly_classification.py::TestAnomalyClassification::test_ote_train_export_and_optimize[anomaly_classification-padim]
  /home/sakcay/.pyenv/versions/anomalib/lib/python3.8/site-packages/kornia/filters/filter.py:23: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    if kernel_size[i] % 2 == 0:

-- Docs: https://docs.pytest.org/en/stable/warnings.html
====================================== 8 passed, 18 warnings in 203.49s (0:03:23) ======================================
```
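For context, the `ValueError` comes from torchmetrics' `dim_zero_cat` helper: each `update()` call appends a batch of predictions to the metric state, and `compute()` concatenates them, so when training is cancelled before any validation batch runs the state list is still empty. A minimal pure-Python sketch of that failure mode (the helper name matches the library, but this body is an illustrative stand-in, not the real torchmetrics code):

```python
def dim_zero_cat(state):
    # Illustrative stand-in for torchmetrics.utilities.data.dim_zero_cat:
    # flatten the per-batch results accumulated by update() calls.
    if not state:
        raise ValueError("No samples to concatenate")
    return [sample for batch in state for sample in batch]

preds = []  # training cancelled before any validation batch -> no update()
try:
    dim_zero_cat(preds)
except ValueError as err:
    print(err)  # -> No samples to concatenate

dim_zero_cat([[0.1, 0.7], [0.9]])  # after updates, concatenation succeeds
```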
+1, I end up crashing with:

```
File "/home/gsd/anaconda3/envs/anomalib_env/lib/python3.8/site-packages/torchmetrics/classification/roc.py", line 157, in compute
  preds = torch.cat(self.preds, dim=0)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].
```
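Both crashes trace back to calling `compute()` on a metric whose state was never updated. Until a proper fix lands, a defensive wrapper around the metric read in `save_model` avoids the hard crash; this is only a sketch, and the `safe_metric_value` helper and the `0.0` fallback are my own, not part of the codebase:

```python
def safe_metric_value(metric, default=0.0):
    """Read metric.compute().item(), falling back to a default when the
    metric has accumulated no samples (compute() before any update())."""
    try:
        return float(metric.compute().item())
    except (ValueError, NotImplementedError):
        # torchmetrics raises ValueError ("No samples to concatenate") from
        # its dim_zero_cat helper, while a bare torch.cat on an empty list
        # raises NotImplementedError, as in the traceback above.
        return default
```

Usage would be `f1_score = safe_metric_value(self.model.image_metrics.OptimalF1)` in place of the direct `.compute().item()` call.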