Refactor perceptual loss so that it can be used offline (regardless of the network_type)
I am trying to instantiate perceptual loss in an environment without internet access. With torchvision models, I typically instantiate the network with pretrained=False and then manually load weights I have previously saved somewhere local. With the PerceptualLoss class this is difficult: some network_types offer this functionality (e.g. resnet50 has a pretrained=False option), but types like medicalnet seem to need internet access every time, both for (1) defining the network (resnet.py is downloaded from the hub) and (2) fetching and loading the weights.
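For reference, this is the usual offline pattern with torchvision that I would like to replicate with PerceptualLoss; the weight path below is just a placeholder:

```python
import torch
import torchvision

# Build the architecture without downloading weights
# (newer torchvision versions use weights=None instead of pretrained=False).
model = torchvision.models.resnet50(pretrained=False)

# Load a state dict saved earlier on a machine with internet access;
# the path is a placeholder for wherever the weights live locally.
state_dict = torch.load("/local/weights/resnet50.pth", map_location="cpu")
model.load_state_dict(state_dict)
```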
Describe the solution you'd like
- Standardisation of how the different networks are instantiated / defined
- A single pretrained parameter, or a pretrained_path parameter, allowing users to load the weights from an alternative local path (a sketch of such an interface is shown after this list).
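A minimal sketch of what such a unified interface could look like; the pretrained_path parameter, the pretrained behaviour described in the comments, and the import path are assumptions for illustration, not existing API:

```python
# Import path depends on the package version (assumption).
from monai.losses import PerceptualLoss

# Illustrative only: a uniform way to build the loss offline, whatever the
# network_type. pretrained=False would skip any download, and pretrained_path
# would point at locally stored weights (both behaviours are hypothetical here).
loss_fn = PerceptualLoss(
    spatial_dims=3,
    network_type="medicalnet_resnet10_23datasets",
    pretrained=False,
    pretrained_path="/local/weights/medicalnet_resnet10.pth",
)
```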
Describe alternatives you've considered
Hardcoding perceptual_loss to allow for this for the specific network_type I am using.
The desired behaviour can be achieved by:
- Pre-downloading the repository and model checkpoints (the easiest way is to query the perceptual loss once while connected to the internet) after setting the torch.hub cache directory with torch.hub.set_dir().
- Then, on the machine without internet access, copying the repository to a desired location and setting the torch.hub cache directory to that path. Perceptual loss should not try to download the files if they are already there (see the sketch after this list).
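A minimal sketch of these two steps with placeholder cache paths; the perceptual-loss call itself is left as a comment since any forward pass with an internet connection will populate the cache:

```python
import torch

# On a machine WITH internet access: point torch.hub at a directory you control,
# then run the perceptual loss once so the hub repo and checkpoints get cached there.
torch.hub.set_dir("/data/hub_cache")  # placeholder path
# e.g. instantiate PerceptualLoss(network_type="medicalnet_resnet10_23datasets", ...)
# and run a single forward pass here to trigger the downloads.

# On the OFFLINE machine: copy /data/hub_cache over, point torch.hub at the copy,
# and the cached files are picked up instead of triggering a download.
torch.hub.set_dir("/mnt/offline/hub_cache")  # placeholder path
```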
There is still a persisting issue: the way models are defined and loaded depends entirely on the network_type parameter (e.g. medicalnet pulls a git repo, lpips pulls weights from its own repository, etc.). It would be nice to homogenise this.