MisconfigurationException: `train_dataloader` must be implemented to be used with the Lightning Trainer
I am trying to train a model using the following command: python train.py --model_name RN50 --folder ArchDaily --batch_size 512 --accelerator cuda
and I get the error above:
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/hooks.py", line 485, in train_dataloader
raise MisconfigurationException("train_dataloader must be implemented to be used with the Lightning Trainer")
pytorch_lightning.utilities.exceptions.MisconfigurationException: train_dataloader must be implemented to be used with the Lightning Trainer
Grateful for any assistance. Here is the full message log:
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/configuration_validator.py:119: PossibleUserWarning: You defined a validation_step but have no val_dataloader. Skipping val loop.
category=PossibleUserWarning,
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
File "train.py", line 31, in train_dataloader must be implemented to be used with the Lightning Trainer")
pytorch_lightning.utilities.exceptions.MisconfigurationException: train_dataloader must be implemented to be used with the Lightning Trainer
I had the same issue. It looks like the num_training_steps function can't access the DataLoader for some reason. I circumvented the problem by explicitly storing the dataset length and batch size in the setup function and plugging them into num_training_steps.
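A minimal sketch of this workaround, with the Lightning machinery stubbed out (the class and parameter names here are illustrative, since the original train.py is not shown):

```python
import math

class TrainerConfig:
    # Hypothetical stand-in for the trainer settings the module would see.
    def __init__(self, max_epochs, accumulate_grad_batches=1):
        self.max_epochs = max_epochs
        self.accumulate_grad_batches = accumulate_grad_batches

class Model:
    def setup(self, dataset, batch_size):
        # Record the sizes explicitly here instead of calling
        # self.train_dataloader() later, which raises
        # MisconfigurationException when the loader lives elsewhere.
        self.dataset_size = len(dataset)
        self.batch_size = batch_size

    def num_training_steps(self, cfg):
        # Total optimizer steps: batches per epoch, divided by gradient
        # accumulation, times the number of epochs.
        batches_per_epoch = math.ceil(self.dataset_size / self.batch_size)
        return (batches_per_epoch // cfg.accumulate_grad_batches) * cfg.max_epochs

model = Model()
model.setup(dataset=range(1000), batch_size=512)
steps = model.num_training_steps(TrainerConfig(max_epochs=3))
```

The point is simply that num_training_steps never touches a dataloader; it works off the two numbers saved in setup.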
I solved this problem by downgrading pytorch-lightning. My pytorch-lightning version is now 1.4.9.
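If you want to try the same downgrade, pinning the version is a one-liner (1.4.9 is just the version this answer reports working; check compatibility with your other dependencies):

```shell
# Pin pytorch-lightning to the reportedly working version
pip install pytorch-lightning==1.4.9
```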
You can change wrapper.py: replace `dataset = self.train_dataloader()` with `dataset = self.trainer.datamodule.train_dataloader()`.
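A minimal sketch of why this change works, with the Lightning objects stubbed out (wrapper.py itself is not shown, so these class names are illustrative): the wrapper module defines no train_dataloader of its own, so calling self.train_dataloader() hits the base-class hook that raises; going through the trainer's attached DataModule reaches the real loader.

```python
class DataModule:
    # Stand-in for a LightningDataModule that actually owns the loader.
    def train_dataloader(self):
        return ["batch0", "batch1"]

class Trainer:
    # Stand-in for the Lightning Trainer, which holds the datamodule.
    def __init__(self, datamodule):
        self.datamodule = datamodule

class Wrapper:
    def __init__(self, trainer):
        self.trainer = trainer

    def get_train_data(self):
        # Before: dataset = self.train_dataloader()  -> raises, because this
        # class never implements train_dataloader.
        # After: route the call through the trainer's DataModule instead.
        return self.trainer.datamodule.train_dataloader()

loader = Wrapper(Trainer(DataModule())).get_train_data()
```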