GPU configuration recommendation
May I know the recommended GPU configuration for running all the VFMs of Visual ChatGPT successfully? I tried 4× Titan XP but failed.
Look at which tools you need to use, then check out this README: https://github.com/microsoft/visual-chatgpt#gpu-memory-usage
Does it run one model at a time (e.g., 6 GB), or do all models occupy GPU memory simultaneously?
A single model should be fine. You can check the README for how much GPU memory each model uses and pick the ones that fit your VRAM. You can also refer to my Colab version: on a single T4 GPU I ran two models, T2I and ImageCaptioning, successfully: https://github.com/K-tang-mkv/visual-chatgpt-googlecolab
Thanks~ What I wanted to ask is whether this project can load models dynamically. If all of them occupy GPU memory at once, the compute required is too large.
Hi @RidiculousRonZzz, we have added FP16 precision and updated the code in this new version, so memory usage is reduced. You can also conveniently choose which VFMs to load by using --load.
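To see why FP16 roughly halves the memory footprint, here is a minimal back-of-the-envelope sketch. The ~0.86B parameter count for Stable Diffusion v1.x (the Text2Image VFM) is an approximation I'm assuming for illustration; activations and the CUDA context add overhead on top of the weights, so real usage is higher.

```python
def vfm_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Rough weight-memory estimate: parameters x bytes per parameter, in GiB."""
    return num_params * bytes_per_param / 1024**3

# Assumed parameter count for Stable Diffusion v1.x, for illustration only
sd_params = 0.86e9
print(round(vfm_memory_gb(sd_params, 4), 1))  # FP32: 4 bytes/param -> 3.2
print(round(vfm_memory_gb(sd_params, 2), 1))  # FP16: 2 bytes/param -> 1.6
```

This is why loading only the VFMs you need (via --load) and running them in FP16 lets the stack fit on a single consumer GPU.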
Thanks~
@RidiculousRonZzz Why don't I see the image that Visual ChatGPT generated?
https://drive.google.com/file/d/1Na3VTKgoKSMpa2FOe6Rojk8QCPHnmMyX/view?usp=sharing