run on single video
Hi, I see the evaluation/testing code, but how can I run the pretrained model on a single video?
I would also like to know that.
How can we use this model (or one we trained ourselves) on a real video?
Hi, we do not provide direct testing code for videos, but there are two ways to handle this. First, you can use a tool such as ffmpeg to convert the video into an image sequence and then use the test_custom option we provide. Second, you can test the video directly with the restoration_video_inference function in the MMEditing library.
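For reference, a minimal sketch of the second route, assuming the MMEditing 0.x API (mmedit.apis.init_model and restoration_video_inference); the config/checkpoint paths, frame directory, and filename template below are placeholders to adapt to your own model:

```python
# Sketch only: extract frames with ffmpeg, then run MMEditing's
# restoration_video_inference. Paths below are placeholders.
#
#   ffmpeg -i input.mp4 frames/%08d.png
#
import mmcv
from mmedit.apis import init_model, restoration_video_inference
from mmedit.core import tensor2img

config = 'configs/my_model_config.py'     # placeholder config path
checkpoint = 'checkpoints/my_model.pth'   # placeholder checkpoint path

model = init_model(config, checkpoint, device='cuda:0')

# window_size=0 feeds the whole sequence to a recurrent model;
# set it to the model's temporal window for sliding-window models.
output = restoration_video_inference(
    model,
    img_dir='frames',            # directory of extracted frames
    window_size=0,
    start_idx=1,                 # index of the first frame file
    filename_tmpl='{:08d}.png')  # must match the ffmpeg output pattern

# `output` has shape (1, t, c, h, w); write each frame back to disk.
for i in range(output.size(1)):
    frame = tensor2img(output[:, i, :, :, :])
    mmcv.imwrite(frame, f'results/{i + 1:08d}.png')
```

Depending on the MMEditing version, restoration_video_inference may also accept a video file path directly instead of a frame directory.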
I have converted the video into an image sequence and I am running on an A40 (46G), but it still runs out of memory. Which parameters should I change?
I think you need to reduce the resolution. VRAM usage balloons with both resolution and frame count.
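Besides lowering the resolution (for example by extracting downscaled frames with ffmpeg's scale filter), a hedged option under the same MMEditing assumptions as the sketch above is to cap the number of frames processed per forward pass with max_seq_len, which lowers peak VRAM for recurrent models at the cost of speed; the chunk size here is illustrative:

```python
# Same assumptions as the sketch above; values are illustrative.
# Downscaled frames could be produced first with, e.g.:
#   ffmpeg -i input.mp4 -vf scale=iw/2:ih/2 frames_half/%08d.png
output = restoration_video_inference(
    model,
    img_dir='frames_half',
    window_size=0,               # recurrent framework
    start_idx=1,
    filename_tmpl='{:08d}.png',
    max_seq_len=20)              # process at most 20 frames per chunk
```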
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.