yuchen1984
Same problem — what's the minimal GPU memory requirement for running the inference? I'm on an 11GB GPU. I also tried hacking imSize down to e.g. 384x480 or 480x600 to get it to work...
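As a rough back-of-envelope check on why shrinking imSize helps (assumption: activation memory scales roughly linearly with pixel count for conv-style backbones, and `pixel_ratio` is a helper written here for illustration, not from the repo):

```python
def pixel_ratio(w1, h1, w2, h2):
    """Ratio of pixel counts: (new resolution) / (original resolution)."""
    return (w2 * h2) / (w1 * h1)

# Dropping from 512x768 to 384x480 roughly halves the pixel count,
# so activation memory should fall by a similar factor.
print(pixel_ratio(512, 768, 384, 480))  # ~0.47
```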
Trying spatial SR x2 + temporal SR x2 on a 12-second 512x768 video on 1x A100. Each step takes ~5 min, and running the fast mode of 15 steps takes...
pro-7b inference is runnable on a 4090 (24GB) if you change "parallel_size" from 16 to a smaller number, e.g. 4, in generation_inference.py. NB: you need to install torch 2.1.0 instead of torch 2.0.1...
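The trade-off is that a smaller `parallel_size` just means generating the same number of samples in more, smaller batches. A minimal sketch of that idea (the `chunked` helper is illustrative, not code from generation_inference.py):

```python
def chunked(total, batch):
    """Yield per-call batch sizes summing to `total`, each at most `batch`.

    Peak VRAM scales with the batch size, so trading one call with
    parallel_size=16 for several calls with parallel_size=4 keeps the
    same total output while fitting in less memory.
    """
    while total > 0:
        b = min(batch, total)
        yield b
        total -= b

# 16 samples with parallel_size=4 -> four generation calls of 4 each.
print(list(chunked(16, 4)))  # [4, 4, 4, 4]
```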