Sonia Garrouch
Try this: `llm = HuggingFaceHub(repo_id="tiiuae/falcon-7b-instruct", task="text-generation", model_kwargs={"max_new_tokens": 200})`
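A minimal sketch of the same idea, assuming LangChain's `HuggingFaceHub` wrapper and a `HUGGINGFACEHUB_API_TOKEN` in the environment; the `model_kwargs` dict is what actually raises the output length, and the construction is wrapped so the sketch degrades gracefully when langchain or the token is unavailable:

```python
# Generation settings are forwarded to the Hugging Face endpoint via model_kwargs
# (values mirror the suggestion above).
model_kwargs = {"max_new_tokens": 200}

try:
    # Assumes the langchain package; older versions expose HuggingFaceHub here.
    from langchain.llms import HuggingFaceHub

    llm = HuggingFaceHub(
        repo_id="tiiuae/falcon-7b-instruct",  # instruct-tuned Falcon checkpoint
        task="text-generation",
        model_kwargs=model_kwargs,
    )
except Exception:
    # langchain missing or HUGGINGFACEHUB_API_TOKEN not set; the model_kwargs
    # pattern above is still the relevant part.
    llm = None
```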
> I am also facing the same issue, even after compiling from source. I am using Python 3.11.3, CUDA 12.3, torch 2.1.2, and lion-pytorch 0.1.2 on CentOS 7. Please help resolve this issue....
Try using the Predictor class in the file [predict.py](https://github.com/haotian-liu/LLaVA/blob/main/predict.py) and replace the class's setup function with this:

```python
def setup(self) -> None:
    self.tokenizer, self.model, self.image_processor, self.context_len = ...
```
> > I can load the model with the following command:
> >
> > ```
> > python -m llava.serve.cli --model-path /home/user/LLaVA/checkpoints/llava-v1.5-13b-task-lora --model-base /home/user/llava-v1.5-7b --image-file "/home/user/LLaVA/digital_tampering/test_data/image.jpeg"
> > ```
> >
> > When I use this command for inference, the...
> > Try to use the Predictor class in the file [predict.py](https://github.com/haotian-liu/LLaVA/blob/main/predict.py) and replace the setup function of the class with this:
> >
> > ```python
> > def setup(self) -> None:
> >     self.tokenizer, self.model, ...
> > ```