LLaMA-Adapter
Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
When I run on a single GPU I get the following error: `/usr/local/lib/python3.8/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/usr/local/lib/python3.8/dist-packages/torchvision/image.so: undefined symbol: _ZN3c104cuda20CUDACachingAllocator9allocatorE'` If you don't plan on using image...
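That `undefined symbol` warning usually points to a torch/torchvision build mismatch rather than anything in this repo. A minimal diagnostic sketch (my own suggestion, not from the project) to confirm the installed versions before reinstalling a matching pair:

```python
# Hedged sketch: check whether torch and torchvision come from a matching release pair.
# The undefined-symbol warning typically appears when torchvision was built against
# a different torch version than the one installed.
import torch
import torchvision

print("torch:", torch.__version__, "CUDA:", torch.version.cuda)
print("torchvision:", torchvision.__version__)

# If the pair doesn't match (e.g. torch 1.13.x should go with torchvision 0.14.x),
# reinstalling both from the same release normally clears the warning.
```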
See title, thanks.
I have some problems fine-tuning llama_adapter on the ScienceQA dataset. I'm not quite sure how to write a prompt for ScienceQA using the current template. Is it...
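For reference, a minimal sketch of how a ScienceQA record could be mapped onto the Alpaca-style instruction template used for fine-tuning; the field names (`question`, `choices`, `hint`) follow the ScienceQA JSON and the mapping itself is an assumption, not the official preprocessing:

```python
# Hedged sketch: formatting one ScienceQA example into an Alpaca-style prompt.
PROMPT = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

def build_scienceqa_prompt(example: dict) -> str:
    # Fold the answer choices into the instruction as lettered options.
    options = " ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(example["choices"]))
    instruction = f"{example['question']}\nOptions: {options}"
    context = example.get("hint", "")  # hint/lecture text, if present
    return PROMPT.format(instruction=instruction, input=context)
```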
"AssertionError: Loading a checkpoint for MP=1 but world size is 2" when I set --nproc_per_node to 2. How to run on 2 16G gpu? because it OOM when inference. Thanks
Hi, I was wondering if it is possible to prompt the model with more than one image input since in the implementation the incorporation of the visual tokens is a...
Thanks for your wonderful work! I had a problem when fine-tuning the model. https://github.com/ZrrSkywalker/LLaMA-Adapter/blob/5f1b37e0e2f3ab2e423ea71234c89829fa271ad7/alpaca_finetuning_v1/llama/model.py#L80-L85 https://github.com/ZrrSkywalker/LLaMA-Adapter/blob/5f1b37e0e2f3ab2e423ea71234c89829fa271ad7/llama/model.py#L78-L83 The `self.n_local_heads` values in training and inference are not the same; will this affect the deployment...
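A simplified sketch of the difference, assuming the inference code divides heads across fairscale model-parallel ranks while the fine-tuning code keeps all heads in one process; the concrete numbers are illustrative:

```python
# Hedged sketch: why n_local_heads can differ between training and inference code paths.
n_heads = 32            # e.g. LLaMA-7B
mp_world_size = 1       # the 7B checkpoint is a single shard, so MP=1 at inference

n_local_heads_inference = n_heads // mp_world_size  # heads split across MP ranks
n_local_heads_training = n_heads                    # no model parallelism during fine-tuning

# With MP=1 the two values coincide, so adapter weights trained this way should
# load unchanged for single-GPU inference; with MP>1 the checkpoint would need resharding.
assert n_local_heads_inference == n_local_heads_training
```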
Hi, I want to run example.py in windows 11, but I get weird errors (sockets): (llama_adapter) C:\Users\jjovan\llama\ai\LLaMA-Adapter>python -m torch.distributed.run --nproc_per_node 1 example.py --ckpt_dir .\7B --tokenizer_path .\7B\tokenizer.model --adapter_path .\7B\ NOTE: Redirects...
After installing all dependencies, when I run the torchrun command I get this error: **raise RuntimeError("Distributed package doesn't have NCCL " "built in")** I can't figure out what am I...
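Both of the Windows reports above likely hit the same limitation: the Windows PyTorch wheels don't ship the NCCL backend. A hedged workaround sketch (my assumption, not something the repo documents) is to initialize the process group with gloo for a single-process run:

```python
# Hedged sketch: single-process distributed init on Windows using the gloo backend,
# since NCCL is not built into the Windows PyTorch wheels.
import os
import torch.distributed as dist

# torchrun normally sets these; set them explicitly for a plain `python example.py` run.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

if not dist.is_initialized():
    dist.init_process_group(backend="gloo", init_method="env://", world_size=1, rank=0)
```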