The minimal computational resources?
Hi, there!
Thanks for your nice work! I would like to know the minimal resources needed to train the overall pipeline of your model. I have 8 NVIDIA 3090 GPUs with 24GB, is it enough?
Hello! We've only trained on 8 x A100 80G. ControlCap does not have many trainable parameters, so by reducing the batch size and increasing gradient_accumulate_steps, it should be possible to train ControlCap on a 3090 24G.
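For reference, here is a minimal sketch of what gradient accumulation does, assuming a standard PyTorch training loop. The model, optimizer, and dataloader below are hypothetical stand-ins, not ControlCap's actual components, and the exact config key in this repo may differ:

```python
import torch
from torch import nn

# Hypothetical stand-ins; in practice these come from the ControlCap pipeline.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataloader = [(torch.randn(2, 16), torch.randn(2, 1)) for _ in range(8)]

accumulate_steps = 4  # effective batch = per-GPU batch size * accumulate_steps

optimizer.zero_grad()
for step, (x, y) in enumerate(dataloader):
    loss = nn.functional.mse_loss(model(x), y)
    # Scale the loss so the accumulated gradients match those of the larger batch.
    (loss / accumulate_steps).backward()
    # Only update the weights once every accumulate_steps mini-batches.
    if (step + 1) % accumulate_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

With this pattern, peak memory is set by the small per-step batch, while the gradient statistics approximate those of the original larger batch.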
Thank you for your quick reply. Where can I change gradient_accumulate_steps, in the configs or elsewhere? I could not find a parameter with that name.
Hello! When I run the evaluation script eval_vg1.2_densecap.sh on the checkpoint produced by train_vg1.2.sh, I run out of GPU memory even with batch_size_eval: 1 in vg1.2_densecap.yaml. Is there any other workaround? My setup is 4 x 4090 (24G). Looking forward to your reply.