FMA-Net
[CVPR 2024 Oral] Official repository of FMA-Net
Hi, I see the evaluation/testing code, but how can I run the pretrained model on a single video?
Hi, thank you for your great work! I have a couple of questions: 1. Are the pretrained weights for the experimental results trained on the REDS train set (excluding Clips 000, 011,...
In your paper you wrote that the inference time of BasicVSR++ is 0.072 seconds, and I wonder how you obtained these values. That would correspond to 13.9 FPS, and I never...
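The 13.9 FPS figure in this question follows directly from the reported per-frame latency, since throughput is just the reciprocal of latency. A quick check:

```python
def fps_from_latency(latency_s: float) -> float:
    """Frames per second implied by a per-frame inference time in seconds."""
    return 1.0 / latency_s

# Reported 0.072 s per frame -> roughly 13.9 FPS.
print(round(fps_from_latency(0.072), 1))  # 13.9
```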
Hello, and thank you for sharing the FMA-Net code. I have been waiting for the model for a while, as I personally love applying ML tools to ancient historical videos....
First, thanks a lot for sharing the model weights and the code. We did some tests and it works pretty well on certain types of video. But in our use...
Hi, thank you for your great work! How much VRAM do I need for testing?
When I run main.py with "python main.py --test --config_path experiment.cfg", this question arises: how can I get the text? Thank you.
Thanks for your work! I loaded your pretrained R&D ckpt and fine-tuned the model on my dataset. But after 40 epochs, the model seems to collapse. The Recon Loss and...
Could you tell us how to calculate the Average Motion Magnitude of optical flow used for the PSNR/tOF results in Tab. 3 of the main paper? Besides, how to...
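The question above asks how the paper defines this metric; one common definition (an assumption here, not confirmed by the authors) is the per-pixel L2 norm of the dense flow field averaged over all pixels. A minimal sketch, assuming flow arrays of shape (H, W, 2) with horizontal and vertical displacements in the last axis:

```python
import numpy as np

def average_motion_magnitude(flow: np.ndarray) -> float:
    """Mean per-pixel magnitude of a dense optical-flow field of shape (H, W, 2).

    Note: this is one plausible definition; the paper's exact computation
    (e.g. which flow estimator and which frame pairs) is what the issue asks about.
    """
    return float(np.sqrt((flow ** 2).sum(axis=-1)).mean())

# Toy field: every pixel moves by (3, 4), so the magnitude is 5 everywhere.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 3.0
flow[..., 1] = 4.0
print(average_motion_magnitude(flow))  # 5.0
```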
I found that when num_seq=10 and num_flow=9, the measured parameter count is 10.41M; when num_seq=3 and num_flow=2, it is 9.37M. But the parameter count mentioned in the...
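Parameter counts like the ones measured in this issue are typically obtained by summing element counts over all learnable tensors (in PyTorch, `sum(p.numel() for p in model.parameters())`). A minimal sketch with NumPy arrays standing in for weight tensors; the shapes below are purely illustrative, not FMA-Net's actual layers:

```python
import numpy as np

def count_parameters(params) -> int:
    """Total number of scalar parameters across a collection of weight arrays.

    PyTorch equivalent: sum(p.numel() for p in model.parameters()).
    """
    return sum(p.size for p in params)

# Toy "model" with made-up layer shapes, for illustration only.
params = [
    np.zeros((64, 3, 3, 3)),  # conv weight: 64*3*3*3 = 1728
    np.zeros(64),             # conv bias:   64
    np.zeros((10, 64)),       # linear weight: 640
]
total = count_parameters(params)
print(total, f"({total / 1e6:.2f}M)")  # counts are usually reported in millions
```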