Conan
Have you tried a command like this on AutoDL?
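The exact command being referred to isn't shown in the thread; as a purely assumed example, one common pattern for exposing `ollama serve` on an AutoDL instance is to bind it to the instance's proxied port:

```bash
# Assumed example only: AutoDL commonly proxies port 6006 for custom services;
# the host/port here are guesses, not the command from the original post.
OLLAMA_HOST=0.0.0.0:6006 ollama serve
```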
Can anyone tell me what's going on with this error?
I'm also getting this error; I'm running the qwen1.5-14b model.
This is the error output on my side:

```
root@autodl-container-c438119a3c-80821c25:~/autodl-tmp# ollama serve
2024/05/20 11:28:20 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"...
```
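If it helps to narrow things down, the server can be started with debug logging enabled (the `OLLAMA_DEBUG` variable appears in the config dump above); a minimal sketch:

```bash
# Enable verbose server logs, then reproduce the error in another shell.
OLLAMA_DEBUG=1 ollama serve
```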
@pdevine If you have a moment, could you also take a look at this error?
I've created a new issue with the relevant information: #4529
Here is the GGUF file and the information for the imported model.
> Import from PyTorch or Safetensors: see the [guide](https://github.com/ollama/ollama/blob/main/docs/import.md) on importing models for more information.

This is the conversion I performed in llama.cpp, using convert-hf-to-gguf and llm/llama.cpp/quantize with q4_0, as described...
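For reference, a minimal sketch of that convert-then-quantize-then-import flow; the checkpoint path, output file names, and model name below are assumptions for illustration, not taken from the thread:

```bash
# Convert the Hugging Face checkpoint to GGUF (paths are hypothetical).
python llama.cpp/convert-hf-to-gguf.py ./Qwen1.5-14B-Chat --outfile qwen1.5-14b-f16.gguf

# Quantize to q4_0 with llama.cpp's quantize tool.
./llama.cpp/quantize qwen1.5-14b-f16.gguf qwen1.5-14b-q4_0.gguf q4_0

# Import into Ollama: a Modelfile pointing at the GGUF, then `ollama create`.
echo "FROM ./qwen1.5-14b-q4_0.gguf" > Modelfile
ollama create qwen1.5-14b -f Modelfile
```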
