Hi, we do not provide direct testing code for videos. However, there are two ways to address this. First, you can use tools like ffmpeg to convert the video into...
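For the first option, a minimal sketch of frame extraction with ffmpeg is shown below. The output directory and the `%08d.png` naming scheme are assumptions for illustration; adjust them to whatever the test code in this repository actually expects.

```python
# Sketch: split a video into PNG frames with ffmpeg so they can be fed to an
# image-sequence test pipeline. Paths and frame naming are assumptions.
import subprocess
from pathlib import Path

def video_to_frames(video_path: str, out_dir: str) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, str(Path(out_dir) / "%08d.png")],
        check=True,
    )

video_to_frames("input.mp4", "./frames/000")
```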
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.
Hi, thanks for your interest in FMA-Net! 1. The provided pretrained weights are trained on the REDS training set (excluding sequences 000, 011, 015, and 020) and the validation set, following previous works. The...
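For reference, a minimal sketch of assembling such a split is shown below. The directory layout (one folder per sequence under a train root) is an assumption for illustration, not necessarily how the official REDS release or this repository organizes the data.

```python
# Sketch: list REDS training sequences while excluding the four REDS4
# sequences (000, 011, 015, 020) that are held out for evaluation.
from pathlib import Path

REDS4 = {"000", "011", "015", "020"}

def training_sequences(reds_train_root: str):
    root = Path(reds_train_root)
    return sorted(
        p for p in root.iterdir()
        if p.is_dir() and p.name not in REDS4
    )

print(training_sequences("./REDS/train/train_sharp"))  # assumed path
```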
For a fair comparison, we measured the average inference time over 100 independent executions for all compared models. The average runtime of BasicVSR++ was 0.072s, which is consistent with the...
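A sketch of this kind of timing protocol is given below, assuming a CUDA model and a placeholder input; it is not the exact benchmarking script used for the paper, only an illustration of averaging over repeated, synchronized runs.

```python
# Sketch: average GPU runtime over repeated forward passes.
# `model` and `sample` are placeholders for a network and its input clip.
import time
import torch

@torch.no_grad()
def average_runtime(model, sample, n_runs=100, n_warmup=10):
    model.eval()
    for _ in range(n_warmup):      # warm-up runs excluded from the average
        model(sample)
    torch.cuda.synchronize()       # make sure queued GPU work is finished
    start = time.time()
    for _ in range(n_runs):
        model(sample)
    torch.cuda.synchronize()
    return (time.time() - start) / n_runs
```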
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.
Hello, first of all, thank you for creating such a nice demo! To identify the issue, we tested the REDS4 020 sequence based on the Colab demo you provided. Our...
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.
Hi, thank you for your interest in our work! To test [REDS4](https://seungjunnah.github.io/Datasets/reds.html) (180x320 video sequence) in our [environment](https://github.com/KAIST-VICLab/FMA-Net?tab=readme-ov-file#requirements), you'll need about 5GB of GPU memory. Note that this may vary...
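To verify the memory requirement on your own hardware, a minimal sketch is shown below; `model` and `sample` are placeholders, and the measurement covers only the forward pass, so the actual figure may differ from ~5GB depending on your setup.

```python
# Sketch: report the peak GPU memory used by one forward pass,
# e.g. on a 180x320 REDS4 clip. Model and input are placeholders.
import torch

def peak_memory_gb(model, sample):
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        model(sample)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1024**3
```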
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.
Hi, thank you for your interest in FMA-Net. It seems like the problem you're interested in is improving blurry videos in environments where GPU memory is insufficient. First, since 720/1080p...
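The comment above is cut off, so the sketch below is only a common workaround for limited GPU memory, not necessarily what the authors go on to recommend: running the network on spatial tiles and stitching the outputs. The assumed interface is a model mapping a (B, T, C, H, W) clip to a (B, T, C, 4H, 4W) clip.

```python
# Rough sketch (an assumption, not the authors' stated method): tile-based
# inference to reduce peak GPU memory on 720/1080p inputs.
import torch

@torch.no_grad()
def tiled_inference(model, clip, tile=128, scale=4):
    b, t, c, h, w = clip.shape
    out = clip.new_zeros(b, t, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = clip[..., y:y + tile, x:x + tile]
            ph, pw = patch.shape[-2], patch.shape[-1]
            out[..., y * scale:(y + ph) * scale,
                     x * scale:(x + pw) * scale] = model(patch)
    return out
```

In practice, some overlap between neighboring tiles with blending at the seams is usually needed to avoid visible border artifacts.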