Local LLM support
Hi, can you please provide a guide or support for using local LLM models, such as Llama 3.1 8B or 70B served via Ollama?
https://github.com/InternLM/lagent/pull/228/files
@Harold-lkk @sarkaramitabh300
Is there an example model.py that uses local Ollama models? I changed and added an ollama.py to lagent's llms module, but I still need an example model.py for MindSearch.
@Harold-lkk
I'm serving Llama 3.1 8B via Ollama and get the errors below:
```
JSONDecodeError: Expecting value: line 1 column 1 (char 0). Skipping this line.
(repeated 8 times)
ERROR:root:Exception in sync_generator_wrapper: local variable 'response' referenced before assignment
Traceback (most recent call last):
  File "/home/bc/Projects/ODS/MindSearch/mindsearch/app.py", line 69, in sync_generator_wrapper
    for response in agent.stream_chat(inputs):
  File "/home/bc/Projects/ODS/MindSearch/mindsearch/agent/mindsearch_agent.py", line 235, in stream_chat
    print(colored(response, 'blue'))
UnboundLocalError: local variable 'response' referenced before assignment
```
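For what it's worth, the `UnboundLocalError` happens when the stream yields nothing (e.g. every line failed JSON decoding), so the loop body never binds `response` before it is used. A minimal sketch of a defensive wrapper, assuming the shape implied by the traceback (this is not the project's official fix, and `sync_generator_wrapper` below is a simplified stand-in):

```python
def sync_generator_wrapper(stream):
    """Yield items from `stream`, failing clearly if it is empty.

    Binding `response` up front means code after the loop can never hit
    UnboundLocalError when the model produced no parseable output.
    """
    response = None  # bound before the loop, so it always exists afterwards
    for response in stream:
        yield response
    if response is None:
        # Every line was skipped (e.g. JSONDecodeError on each chunk),
        # so surface a readable error instead of crashing later.
        raise RuntimeError('model produced no parseable response')
```

The repeated `JSONDecodeError` lines suggest the real problem is upstream: the Ollama endpoint is returning a response format the client code cannot parse, so nothing ever reaches the loop.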
Has anyone been successful with Llama 3.1 8B via Ollama?
Are there any updates on supporting Ollama?
The frontend uses Streamlit. Edit the `internlm_client` entry in models.py: set `model_name` to `internlm2` and the url to `http://127.0.0.1:11434`. Then run the command `python -m mindsearch.app --lang en --model_format internlm_client --search_engine DuckDuckGoSearch`.
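A rough sketch of what that models.py edit might look like. The exact field names and surrounding config structure are assumptions here; check the real `internlm_client` definition in your copy of models.py and keep its other fields intact:

```python
# Hypothetical excerpt of mindsearch/agent/models.py after the edit.
# Only `model_name` and `url` are changed; field names are assumed.
internlm_client = dict(
    model_name='internlm2',          # name Ollama knows the model by
    url='http://127.0.0.1:11434',    # default local Ollama endpoint
)
```

This works because Ollama listens on port 11434 by default and MindSearch's `internlm_client` format happens to match the internlm2 chat template, which is also why other Ollama models tend to fail with this setup.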
Thank you
Was anyone able to run this successfully with different Ollama models, or with any other local setup? Are follow-up questions working correctly? I tried, but I only had success with the internlm2 model; the rest ran into errors. Also, internlm2 7B seems to hallucinate and get stuck in loops.