mPLUG-2
Does the inference need to run on 8 A100?
I tried to run inference for video captioning on a single A100 but got OOM errors. Does the inference need to run on 8 A100s, or can it run on one A100? Thanks in advance.
Hello @Roleone123 and @MAGAer13. Is it possible to run inference with the mPLUG-2 model for video captioning on multiple GPUs?
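For anyone hitting the same OOM: a common workaround (not from the mPLUG-2 repo itself) is to run inference in half precision under `torch.no_grad()`, and, if several GPUs are available, replicate the model with `torch.nn.DataParallel`. The sketch below uses a stand-in `nn.Linear` module, since the real mPLUG-2 model classes and loading code are not shown in this thread; it falls back to CPU when no GPU is present.

```python
import torch

# Stand-in module; replace with the actual mPLUG-2 model object.
model = torch.nn.Linear(16, 4)

if torch.cuda.is_available():
    # fp16 roughly halves activation/weight memory, which often avoids OOM.
    model = model.half().cuda()
    if torch.cuda.device_count() > 1:
        # Replicate the model and split the batch across all visible GPUs.
        model = torch.nn.DataParallel(model)
    x = torch.randn(2, 16).half().cuda()
else:
    # CPU fallback so the sketch runs anywhere.
    x = torch.randn(2, 16)

# Inference only: no_grad skips autograd buffers, saving further memory.
with torch.no_grad():
    out = model(x)

print(tuple(out.shape))
```

Reducing the batch size (or the number of sampled video frames) is the other usual lever when a single A100 still runs out of memory.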