
IndexError: list index out of range

Open · rsrikaan opened this issue on Dec 20, 2023 · 2 comments

```
(lg) C:\Users\rsrikaan\localGpt>python run_localGPT.py
2023-12-20 15:17:05,282 - INFO - run_localGPT.py:241 - Running on: cpu
2023-12-20 15:17:05,282 - INFO - run_localGPT.py:242 - Display Source Documents set to: False
2023-12-20 15:17:05,282 - INFO - run_localGPT.py:243 - Use history set to: False
2023-12-20 15:17:05,909 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-12-20 15:17:07,439 - INFO - run_localGPT.py:59 - Loading Model: TheBloke/WizardCoder-Guanaco-15B-V1.0-GPTQ, on: cpu
2023-12-20 15:17:07,439 - INFO - run_localGPT.py:60 - This action can take a few minutes!
2023-12-20 15:17:07,439 - INFO - load_models.py:86 - Using AutoGPTQForCausalLM for quantized models
2023-12-20 15:17:08,368 - INFO - load_models.py:93 - Tokenizer loaded
2023-12-20 15:17:12,139 - INFO - _base.py:727 - lm_head not been quantized, will be ignored when make_quant.
2023-12-20 15:17:12,151 - WARNING - qlinear_old.py:16 - CUDA extension not installed.
2023-12-20 15:18:11,405 - WARNING - _base.py:797 - GPTBigCodeGPTQForCausalLM hasn't fused attention module yet, will skip inject fused attention.
2023-12-20 15:18:11,405 - WARNING - _base.py:808 - GPTBigCodeGPTQForCausalLM hasn't fused mlp module yet, will skip inject fused mlp.
The model 'GPTBigCodeGPTQForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FuyuForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
2023-12-20 15:18:12,313 - INFO - run_localGPT.py:94 - Local LLM Loaded
```

```
Enter a query: hi
Traceback (most recent call last):
  File "C:\Users\rsrikaan\localGpt\run_localGPT.py", line 282, in <module>
    main()
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\rsrikaan\localGpt\run_localGPT.py", line 256, in main
    res = qa(query)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 282, in __call__
    raise e
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 276, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 139, in _call
    answer = self.combine_documents_chain.run(
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 480, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 282, in __call__
    raise e
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 276, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\combine_documents\base.py", line 105, in _call
    output, extra_return_dict = self.combine_docs(
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\combine_documents\stuff.py", line 171, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\llm.py", line 255, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 282, in __call__
    raise e
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\base.py", line 276, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\llm.py", line 91, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\chains\llm.py", line 101, in generate
    return self.llm.generate_prompt(
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\base.py", line 467, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\base.py", line 598, in generate
    output = self._generate_helper(
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\base.py", line 504, in _generate_helper
    raise e
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\base.py", line 491, in _generate_helper
    self._generate(
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\base.py", line 977, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\langchain\llms\huggingface_pipeline.py", line 167, in _call
    response = self.pipeline(prompt)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\transformers\pipelines\text_generation.py", line 208, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\transformers\pipelines\base.py", line 1140, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\transformers\pipelines\base.py", line 1147, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\transformers\pipelines\base.py", line 1046, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\transformers\pipelines\text_generation.py", line 271, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\auto_gptq\modeling_base.py", line 422, in generate
    with torch.inference_mode(), torch.amp.autocast(device_type=self.device.type):
  File "C:\Users\rsrikaan\AppData\Local\anaconda3\envs\lg\lib\site-packages\auto_gptq\modeling_base.py", line 411, in device
    device = [d for d in self.hf_device_map.values() if d not in {"cpu", "disk"}][0]
IndexError: list index out of range
```
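
The last frame is the telling one: auto_gptq's `device` property filters `'cpu'` and `'disk'` placements out of `hf_device_map` and then indexes the first remaining entry, so when the entire model has been dispatched to CPU (note the `Running on: cpu` and `CUDA extension not installed` lines in the log above) the filtered list is empty. A minimal sketch of that failure mode, with a hypothetical device map standing in for whatever Accelerate actually built:

```python
# Sketch of the failing logic at auto_gptq/modeling_base.py:411.
# The device-map contents are an assumption: on a CPU-only load,
# every module placement is "cpu", so nothing survives the filter.
hf_device_map = {"": "cpu"}  # hypothetical CPU-only placement

gpu_devices = [d for d in hf_device_map.values() if d not in {"cpu", "disk"}]
print(gpu_devices)  # [] -- no CUDA placement left to index

try:
    device = gpu_devices[0]
except IndexError as exc:
    print(f"IndexError: {exc}")  # list index out of range
```

So the IndexError looks like a downstream symptom: the GPTQ model never made it onto the GPU, which is consistent with the `CUDA extension not installed` warning earlier in the log.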

System details: Windows 10, Nvidia A2000 8 GB (mobile), 32 GB RAM.

rsrikaan · Dec 20, 2023

Hello. I got this error as well. Make sure that all required libraries are installed. In my case I had to run:

```
sudo apt-get update
sudo apt-get install libgl1-mesa-glx
```

to make sure everything was up to date. In my case the missing library was libGL.
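
If it helps, you can confirm that libGL is actually visible before re-running localGPT; `libGL.so.1` is the soname that the `libgl1-mesa-glx` package provides, and a quick `ctypes` probe like this (a sketch, Linux-only) will tell you:

```python
import ctypes

# Probe for the OpenGL runtime installed by libgl1-mesa-glx.
# An OSError here means the shared library is still missing.
try:
    ctypes.CDLL("libGL.so.1")
    print("libGL found")
except OSError as exc:
    print(f"libGL still missing: {exc}")
```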

mjcarbonell · Dec 20, 2023

I got this error while trying to build with Docker...

probitaille · Mar 01, 2024