3 comments of randyadd163
Changing the generation parameter to max_length=2048 in the inference script also improves the quality of the generated output.
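As a minimal sketch of the change this comment describes, assuming the inference script collects its generation settings in one place (the parameter names below are illustrative, not necessarily pyllama's actual API; the original LLaMA code, for instance, calls this `max_gen_len`):

```python
from dataclasses import dataclass


@dataclass
class GenerationParams:
    # max_length raised to 2048, the value the commenter reports
    # as improving generation quality (default values here are
    # illustrative assumptions, not pyllama's actual defaults)
    max_length: int = 2048
    temperature: float = 0.8
    top_p: float = 0.95


params = GenerationParams()
print(params.max_length)  # the raised generation length
```

In a real script the equivalent edit is simply passing the larger value to whatever argument the `generate` call uses for maximum output length.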
@better629 This works for me. My environment is:

```
Ubuntu 18.04
Nvidia A100 (40G) * 2
CUDA Version: 11.6
Torch: 1.13.1
accelerate: 0.17.1
```
I encountered a similar problem and was able to resolve it with the following command on A100 (40G) * 2:

`torchrun --nproc_per_node 1 inference.py --ckpt_dir ./pyllama_data/7B --tokenizer_path ./pyllama_data/tokenizer.model`