Resources required to launch
Hello, what is the minimum specification to launch (but not train) it on a local machine at normal speed? Thank you.
Just tested it out.
Resources used when running on an A100 80GB:
- 58 GB of disk storage
- Peak of ~94 GB of system memory while loading the data onto the GPU (further testing may show it can run on lower-spec hardware)
- Just over 40 GB of VRAM used on the GPU

There are people who have gotten it running on 48 GB GPUs. I would check out the Discord for more info.
So, the minimum specs would be:
- ~60 GB of free disk space
- 95+ GB of system memory
- A GPU with 40+ GB of VRAM
There are people on the Discord who have managed to run it on multiple GPUs, and others have gotten it working on smaller GPUs; I would look into that. A rough loading sketch is below.
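For anyone trying to shave down the ~94 GB host-RAM peak, here is a minimal loading sketch. It assumes the model loads through Hugging Face Transformers (not confirmed in this thread), and the checkpoint name is a placeholder; `low_cpu_mem_usage=True` streams weights in instead of buffering a second full copy in RAM, which is the usual cause of a loading-time memory spike.

```python
# Hedged sketch: assumes a Transformers-compatible checkpoint; the name below
# is a placeholder, not this repo's actual model.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",       # placeholder: substitute the real checkpoint
    torch_dtype=torch.float16,   # fp16 roughly halves VRAM vs. fp32
    low_cpu_mem_usage=True,      # stream weights instead of buffering a full extra copy in RAM
).to("cuda")

# High-water mark of VRAM allocated during the load (compare against the ~40 GB above).
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GiB")
```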
Is it possible to use two RTX 3090s or 4090s with NVLink?
I would look at #20 or the Discord for info on running the model on multiple GPUs.
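As a starting point for the two-card question, here is a hedged sketch of a multi-GPU load using Accelerate's automatic device map (again assuming a Transformers-compatible checkpoint; the name is a placeholder). NVLink is not strictly required, since layers are sharded across the cards rather than mirrored, though it does speed up the cross-GPU activation traffic.

```python
# Hedged sketch: shards layers across all visible GPUs (e.g. 2x RTX 3090).
# Requires `pip install accelerate`; the checkpoint name is a placeholder.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",     # placeholder: substitute the real checkpoint
    torch_dtype=torch.float16,
    device_map="auto",         # Accelerate assigns layers to cuda:0, cuda:1, ...
)
```

With two 24 GB cards this gives ~48 GB of pooled VRAM, in the same range as the 48 GB single-GPU reports above.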
What about training requirements?
@raihan0824 Training requirements are discussed in #26. The current model was trained on 8x A100 80GB GPUs.