Yifei Leng

Results 11 comments of Yifei Leng

@mymusise Thank you for your code. I can now load the LoRA adapter generated from fine-tuning the Llama-2-7b-chat-hf bin model. I've noticed that it performs quite consistently in generative tasks, but...

@SuperBruceJia Hello, I can now load the LoRA adapter generated from fine-tuning the Llama-2-7b-chat-hf bin model. I've noticed that it performs quite consistently in generative tasks, but when it comes to...

> > @SuperBruceJia Hello, I can now load the LoRA adapter generated from fine-tuning the Llama-2-7b-chat-hf bin model. I've noticed that it performs quite consistently in generative tasks, but when it...

> > Running python web_demo.py, I hit the same problem... File "/data/GLM/VisualGLM-6B/model/chat.py", line 19, in from sat.generation.autoregressive_sampling import filling_sequence, BaseStrategy ModuleNotFoundError: No module named 'sat' > > Installing the dependencies from requirements.txt fixes it. The 'sat' in the error refers to the sat library, not the sat model. But I have already installed everything in requirements.txt except the deepspeed package.

> ModuleNotFoundError: No module named 'sat' means SwissArmyTransformer is not installed. Install it with `pip install -i https://mirrors.aliyun.com/pypi/simple/ --no-deps "SwissArmyTransformer>=0.3.6"` and it will work! OK, thanks a lot.
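After installing, a quick way to confirm the package is actually importable (a minimal sketch; it assumes, as the error message suggests, that the SwissArmyTransformer distribution provides the top-level module `sat`):

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if the named top-level module can be found on sys.path."""
    return importlib.util.find_spec(name) is not None

if not module_available("sat"):
    print("Module 'sat' not found; install SwissArmyTransformer with pip.")
```

Running this inside the same environment used for `web_demo.py` rules out a mismatch between the interpreter that ran pip and the one running the demo.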

> Hi @Senna1960321, could you check the `model_type` attribute of your model's `config.json`? It should be `"chatglm"`. I checked the config.json; its value is "model_type": "chatglm". @WoosukKwon
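Checking the attribute can be scripted rather than eyeballed (a minimal sketch; the checkpoint path below is a hypothetical placeholder):

```python
import json

def get_model_type(config_path: str):
    """Read a Hugging Face-style config.json and return its model_type field."""
    with open(config_path) as f:
        config = json.load(f)
    return config.get("model_type")

# Hypothetical usage:
# get_model_type("/data/chatglm3-6b/config.json") should return "chatglm"
```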

> Hi @Senna1960321, I met the same error with version 0.2.6; have you solved the issue? No, I can't use ChatGLM3-6B yet, but I can use Llama-2-7b. @BaileyWei

I also tried another way: volume=/home/user/data docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data -it ghcr.io/predibase/lorax:latest --model-id /data/Llama-2-7b-chat-hf --max-input-length 1023 --max-total-tokens 1024 --max-batch-total-tokens 1024 --max-batch-prefill-tokens 1024 from lorax...

> Maybe it looks like this: #51 @abhibst I have already tried this solution, but it still errors. python check.py To find out how many clips Natalia sold altogether...