feiyangsuo

Results: 6 comments by feiyangsuo

> Hi @feiyangsuo I was also coming across the positional embedding usage, they also use relative positional bias:
>
> https://github.com/microsoft/Swin-Transformer/blob/eed077f68e0386e8cdff2e1981492699d9c190c0/models/swin_transformer.py#L89
>
> Which is a learnable matrix of the...
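
For reference, a minimal sketch of the idea behind that linked code: a learnable bias table indexed by the relative offset between query and key positions inside a window. The class name and shapes below are my own illustration, not the Swin-Transformer implementation itself.

```python
import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    """Learnable relative position bias for a ws x ws attention window
    (illustrative sketch of the idea in the linked Swin code)."""
    def __init__(self, window_size: int, num_heads: int):
        super().__init__()
        ws = window_size
        # one learnable scalar per (relative offset, head): (2*ws-1)^2 offsets
        self.bias_table = nn.Parameter(torch.zeros((2 * ws - 1) ** 2, num_heads))
        # map every (query, key) pair inside the window to a row of the table
        coords = torch.stack(torch.meshgrid(
            torch.arange(ws), torch.arange(ws), indexing="ij"))  # 2, ws, ws
        coords = coords.flatten(1)                               # 2, N  (N = ws*ws)
        rel = coords[:, :, None] - coords[:, None, :]            # 2, N, N
        rel = rel.permute(1, 2, 0).contiguous()                  # N, N, 2
        rel[:, :, 0] += ws - 1                                   # shift offsets to >= 0
        rel[:, :, 1] += ws - 1
        rel[:, :, 0] *= 2 * ws - 1                               # flatten 2-D offset to 1-D row id
        self.register_buffer("index", rel.sum(-1))               # N, N

    def forward(self) -> torch.Tensor:
        n = self.index.shape[0]
        bias = self.bias_table[self.index.view(-1)].view(n, n, -1)
        return bias.permute(2, 0, 1).contiguous()  # num_heads, N, N
```

The returned tensor is added to the attention logits before the softmax; because the bias depends only on relative offsets, the table needs (2*window_size - 1)^2 rows instead of one entry per (query, key) pair.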

@JiaRenChang Hi. Thanks for your answer and script. Yet I still got a disparity map like before. ![disparity](https://user-images.githubusercontent.com/47778268/64594315-5e51ce80-d3e2-11e9-9dda-81475f1eb771.png) I also tried cropping off the black border in the images, but it's...

Hmmm... Where can I get the training examples?

```
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    dirpath={your work dir},
    every_n_train_steps={how many optimizer steps between saved checkpoints},
    save_weights_only=False,
)
```
and replace `callbacks=[logger]` with `callbacks=[logger,...
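
To make the snippet above concrete, a minimal sketch of how the callback could be wired into a `Trainer`; the directory, step count, and epoch count are placeholder values, and `logger` stands for the existing callback mentioned in the original comment.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Save full checkpoints (model + optimizer state) every 1000 optimizer steps.
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints/",     # placeholder work dir
    every_n_train_steps=1000,   # placeholder interval
    save_weights_only=False,
)

# In the original comment, `logger` is an existing callback; append ours to it:
# trainer = Trainer(callbacks=[logger, checkpoint_callback])
trainer = Trainer(max_epochs=10, callbacks=[checkpoint_callback])
# trainer.fit(model, datamodule=data)   # model/data come from your own training script
```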

> Same OOM problem here when finetuning 7B models on a single A100-80G. The error log is exactly the same as @Williamsunsir's. Theoretically, finetuning a 7B model takes 14G...
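
A rough back-of-the-envelope calculation (my assumptions: fp16 weights and gradients plus fp32 Adam moments and a fp32 master copy, ignoring activations) shows why the 80 GB card can still OOM even though the fp16 weights alone are only ~14 GB:

```python
def full_finetune_memory_gb(params_billion: float):
    """Rough memory estimate for full finetuning with mixed-precision Adam."""
    n = params_billion * 1e9
    gb = 1e9
    weights = 2 * n / gb      # fp16 weights -> the quoted "14G" for 7B
    grads = 2 * n / gb        # fp16 gradients
    adam = (4 + 4) * n / gb   # fp32 exp_avg + exp_avg_sq
    master = 4 * n / gb       # fp32 master copy of the weights
    return weights, grads, adam, master, weights + grads + adam + master

print(full_finetune_memory_gb(7))
# -> (14.0, 14.0, 56.0, 28.0, 112.0): ~112 GB before activations, hence OOM on 80 GB
```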

> @feiyangsuo - afaik the code uses a standard SD 1.5 model for it's stuff...

Sure, so I suppose the magic lies in the training method (or training data perhaps)?