Giles Billenness

8 comments by Giles Billenness

Could you please review, @ancientmooner, or add someone else to review?

Was able to load a checkpoint, somewhat, by changing the loading code:

1. Adding the lines:

```
def load_pretrained(config, model, logger):
    logger.info(f"==============> Loading weight {config.MODEL.PRETRAINED} for fine-tuning......")
    checkpoint = torch.load(config.MODEL.PRETRAINED,...
```

I also found, after these changes, that when loading pre-trained models from this repo (such as swin_tiny_patch4_window7_224.pth, pre-trained on ImageNet), a similar log is shown:

```
_IncompatibleKeys(missing_keys=['layers.0.blocks.0.attn.relative_position_index', 'layers.0.blocks.1.attn_mask', 'layers.0.blocks.1.attn.relative_position_index', 'layers.1.blocks.0.attn.relative_position_index', 'layers.1.blocks.1.attn_mask',...
```

Ah, these are all re-initialized anyway, so it doesn't matter. I might open a PR to allow the use of MoBY pre-trained models.
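Since the "missing" keys in that log are all buffers (`relative_position_index`, `attn_mask`) that Swin recomputes in its constructor rather than learned weights, they can simply be dropped from the checkpoint before loading. A minimal sketch of that filtering, using plain dicts to stand in for state_dicts (key names taken from the log above; the helper name is hypothetical):

```python
def strip_reinit_buffers(state_dict):
    """Drop keys that the model re-initializes at construction time.

    These are buffers, not learned parameters, so discarding them from a
    checkpoint loses nothing; loading with strict=False would report them
    as missing_keys otherwise.
    """
    skip_suffixes = ("relative_position_index", "attn_mask")
    return {k: v for k, v in state_dict.items()
            if not k.endswith(skip_suffixes)}

# A tiny fake checkpoint: two recomputed buffers and one real weight.
ckpt = {
    "layers.0.blocks.0.attn.relative_position_index": "buffer",
    "layers.0.blocks.1.attn_mask": "buffer",
    "layers.0.blocks.0.attn.qkv.weight": "weight",
}

filtered = strip_reinit_buffers(ckpt)
print(sorted(filtered))  # only the real weight key survives
```

With a real PyTorch checkpoint the same filter would be applied to the loaded dict before calling `model.load_state_dict(filtered, strict=False)`, so the only keys reported incompatible are ones that genuinely matter.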

Having this issue running https://github.com/SwinTransformer/Transformer-SSL on Swin-T, using a 3090, with precompiled apex from `pip install apex -f https://dl.fbaipublicfiles.com/vissl/packaging/apexwheels/py37_cu113_pyt11/download.html` and `conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch`. The fix...

This looks like a good first issue for starting to contribute to this project. I joined the open-source programme at IBM, and @rafvasq is mentoring me for contributions to this project....

Hey, thanks for the update. I'll take another look at this, given the hardware support.

Yeah, I saw this as well. BILINEAR is also less effective, although hopefully it won't have much of an effect.