Zhengwentai SUN
> https://dev.to/rajshirolkar/fastapi-over-https-for-development-on-windows-2p7d This is also what I am interested in.
You need to register all modules manually first:

```python
from mmpretrain.utils import register_all_modules

register_all_modules()
```
Thank you for your response. I am very willing to make a PR for adapting the perceptual loss from mmediting to mmgeneration. However, I have a few issues that need...
Thank you for your suggestions. I have a problem with the returned values. Typically, a loss module in mmgeneration returns one value. However, the `PerceptualLoss` often returns `loss_percep` and `loss_style`....
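One hypothetical way to reconcile the two-value return with mmgeneration's single-value convention is a thin adapter that combines the terms with configurable weights. A minimal sketch (the class name, weight arguments, and call signature below are illustrative, not the actual mmediting/mmgeneration API):

```python
class PerceptualLossAdapter:
    """Wraps a loss that returns (loss_percep, loss_style) into one value.

    `loss_fn` is a stand-in for something like mmediting's PerceptualLoss,
    which may return None for a disabled term.
    """

    def __init__(self, loss_fn, percep_weight=1.0, style_weight=1.0):
        self.loss_fn = loss_fn
        self.percep_weight = percep_weight
        self.style_weight = style_weight

    def __call__(self, pred, target):
        loss_percep, loss_style = self.loss_fn(pred, target)
        # Treat a disabled term (None) as zero so the sum stays defined.
        total = 0.0
        if loss_percep is not None:
            total += self.percep_weight * loss_percep
        if loss_style is not None:
            total += self.style_weight * loss_style
        return total
```

Returning a dict (e.g. `{'loss_percep': ..., 'loss_style': ...}`) instead of a scalar would be an alternative if the framework logs each key separately.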
Hi, thank you for your appreciation of our work. I apologize for the delay in releasing the training code. For immediate reference, you can access the training implementation through [PITI's...
Thank you for your appreciation of our work. However, I do not own this dataset. You should probably contact the corresponding author ([email protected]) to request it.
Hi, thank you for your appreciation of this repository! Recently, I've received several requests from the community. Unfortunately, I'm quite busy at the moment, but I hope to make these...
Hi, I’m currently available to update this repo. I was wondering if it would be possible to provide model links for CLIP-T and CLIP-I?
Hi, I misunderstood: I thought CLIP-I and CLIP-T referred to different transformer settings. The current release already supports calculating CLIP scores for text-image, text-text, and image-image pairs.
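For reference, all three variants reduce to a cosine similarity between CLIP embeddings: CLIP-T compares a text embedding with an image embedding, while CLIP-I compares two image embeddings. A minimal sketch of that final step (embedding extraction via an actual CLIP model is omitted, and `clip_score` is an illustrative name, not this repo's API):

```python
import math

def clip_score(emb_a, emb_b):
    """Cosine similarity between two pre-extracted CLIP embeddings.

    Pass a text and an image embedding for CLIP-T, or two image
    embeddings for CLIP-I; the formula is identical.
    """
    dot = sum(x * y for x, y in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(x * x for x in emb_a))
    norm_b = math.sqrt(sum(y * y for y in emb_b))
    return dot / (norm_a * norm_b)
```

In practice the per-pair scores are averaged over the evaluation set, and some implementations clamp negative similarities to zero before averaging.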
Hi, thank you for your efforts. I am currently upgrading this repo to support CLIP models and their variants from huggingface. Some of the argparse arguments may also be updated....