Update torch requirement from ~=1.13.1 to ~=2.0.1
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
We should also upgrade to the latest template: https://github.com/ashleve/lightning-hydra-template/compare/v1.4.0...v2.0.2
Notable changes
- `pytorch_lightning` -> `lightning.pytorch`
- use `lightning.fabric` for TPU
- upgrade to hydra 1.3
- upgrade toolkits in `pre-commit`
- config `datamodule` -> `data`
- add a `cpu` trainer config
- move `src.tasks` to `src`
- stop exporting extra log after a task exception
- add `aim` as a logger (see the sketch after this list)
- split `src.utils.utils` into several .py files
- update `.gitignore` for the `aim` logger
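For the `aim` item, a minimal sketch of wiring the logger up directly in Python. The template actually configures loggers through Hydra yaml, the experiment name below is made up, and compatibility of `aim.pytorch_lightning.AimLogger` with the unified `lightning` import is an assumption here:

```python
# Hypothetical manual wiring of the Aim logger (the template normally does
# this via a Hydra logger config). Compatibility of aim's adapter with the
# unified `lightning` package is assumed, not verified.
from aim.pytorch_lightning import AimLogger
from lightning import pytorch as pl

logger = AimLogger(experiment="example-run")  # made-up experiment name
trainer = pl.Trainer(max_epochs=1, logger=logger)
```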
Changes in pytorch-lightning ~= 1.6.5 -> lightning ~= 2.0.2.
- `import pytorch_lightning as pl` -> `from lightning import pytorch as pl`
- `LightningModule`: `training_epoch_end(self, outputs)` -> `on_train_epoch_end(self)` (sketched after this list)
- `LightningModule`: remove `training_step_end()`
- `LightningModule`: change the arguments of `configure_gradient_clipping()`
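A minimal before/after sketch of the epoch-end hook change, using a made-up toy module rather than anything from this repo:

```python
# Sketch of the lightning 2.0 hook rename on a toy regression module.
from lightning import pytorch as pl  # 1.x: import pytorch_lightning as pl
import torch
from torch import nn


class ToyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)
        # 2.0 no longer passes `outputs` to the epoch-end hook,
        # so the module accumulates what it needs itself.
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.training_step_outputs.append(loss.detach())
        return loss

    # 1.x: def training_epoch_end(self, outputs): ...
    def on_train_epoch_end(self):
        epoch_loss = torch.stack(self.training_step_outputs).mean()
        self.log("train/loss_epoch", epoch_loss)
        self.training_step_outputs.clear()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)
```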
lightning 2.0.2 depends on torchmetrics<2.0 and >=0.7.0.
However, we want to keep torchmetrics == 0.6.0 because mAP is super slow in later versions.
I hope torchmetrics will change the backend of mAP in an upcoming release.
Changes in torchmetrics == 0.6.0 -> torchmetrics == 0.11.4
- `Accuracy` requires `num_classes` in its arguments.
- `torchmetrics.detection.MAP` -> `torchmetrics.detection.mean_ap.MeanAveragePrecision`
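A quick sketch of both renames against torchmetrics == 0.11.4, using dummy tensors; note that in 0.11 `Accuracy` also needs the `task` argument spelled out alongside `num_classes`:

```python
# Sketch against torchmetrics == 0.11.4 with dummy data (not this repo's metrics code).
import torch
from torchmetrics import Accuracy
from torchmetrics.detection.mean_ap import MeanAveragePrecision  # 0.6.0: torchmetrics.detection.MAP

# Accuracy now needs the task spelled out along with num_classes.
acc = Accuracy(task="multiclass", num_classes=3)
acc.update(torch.tensor([0, 1, 2]), torch.tensor([0, 1, 1]))
print(acc.compute())  # 2 of 3 predictions correct

# MeanAveragePrecision takes per-image dicts of boxes/scores/labels.
mean_ap = MeanAveragePrecision()
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 9.0, 48.0, 52.0]]),
    "labels": torch.tensor([1]),
}]
mean_ap.update(preds, targets)
print(mean_ap.compute()["map"])
```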
The reason we used torchmetrics == 0.6.0 is that MeanAveragePrecision is super slow in newer versions. It looks like they're finally going to revert to the original implementation that uses the COCO API: https://github.com/Lightning-AI/torchmetrics/issues/1024.
Should this be closed @mzweilin?