valdesguefa
I have the same problem.
@timothylimyl How would you advise me on choosing an instruction-tuned LLM model?
@csunny I've changed the torch version according to the following link: https://pytorch.org/get-started/previous-versions/, but it doesn't change anything.
@csunny How can I load the model in 8-bit, like in FastChat?
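To illustrate what I mean, here is a rough sketch of the kind of loading I have in mind, assuming `transformers` with `bitsandbytes` installed; the model path is just a placeholder for the local vicuna-13b weights, not a confirmed location:

```python
# Rough sketch only: assumes `bitsandbytes` is installed and the path below
# is replaced with the actual local vicuna-13b checkpoint (placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/vicuna-13b"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # same idea as FastChat's --load-8bit flag
    device_map="auto",
)
```

Is there an equivalent option or config flag in this project?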
@csunny I get this error when I try to run vicuna-13b (Tribbiani) with 32 GB RAM and a 24 GB GPU (RTX 3090).
@csunny Are there any endpoints that can be accessed via an API?
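For context, I am imagining something along these lines (purely illustrative; the URL and payload are hypothetical, not an existing endpoint of this project):

```python
# Hypothetical example of the kind of HTTP API access I am asking about.
# The endpoint path and request body are made up for illustration only.
import requests

resp = requests.post(
    "http://localhost:5000/api/v1/chat",      # hypothetical URL, not a confirmed endpoint
    json={"prompt": "Hello, what can you do?"},
)
print(resp.json())
```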
Webui branch
@asRizvi888 @benndip I am having this same error. Have you found a fix for this issue?