
About the encoder

Open mathmax12 opened this issue 5 years ago • 1 comments

`_make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)` — I read the code and see that resnext101_wsl is used as the encoder. Are the encoder weights the pre-trained resnext101_wsl weights, kept fixed during training, or do you use the same architecture and retrain the encoder and decoder together?

Thanks

mathmax12 avatar Nov 25 '20 18:11 mathmax12

We initialize the encoder with the pre-trained resnext101_wsl weights and then train the encoder and decoder jointly (with a lower learning rate for the encoder). The paper has more details on this.
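For readers wondering how "a lower learning rate for the encoder" is done in practice: a common PyTorch pattern is to pass separate parameter groups to the optimizer. This is only an illustrative sketch with placeholder modules, not the actual MiDaS training code; the module names, learning rates, and optimizer choice here are assumptions.

```python
import torch

# Placeholder modules standing in for the real architecture:
# in MiDaS the encoder comes from _make_encoder(backbone="resnext101_wsl", ...).
encoder = torch.nn.Linear(16, 8)  # stand-in for the pre-trained resnext101_wsl encoder
decoder = torch.nn.Linear(8, 1)   # stand-in for the depth decoder, trained from scratch

# Joint training with per-parameter-group learning rates:
# the pre-trained encoder gets a smaller LR so its weights change slowly,
# while the randomly initialized decoder gets a larger LR.
# (The specific values 1e-5 / 1e-4 are illustrative, not from the paper.)
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-5},  # fine-tune pre-trained weights
    {"params": decoder.parameters(), "lr": 1e-4},  # train decoder from scratch
])
```

Both modules are updated in every optimizer step, so the encoder is not frozen; it just adapts more conservatively than the decoder.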

ranftlr avatar Nov 26 '20 09:11 ranftlr