I got the same issue on Windows after the latest update.

server.log:

```
2024/07/03 12:48:57 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\data\\OLLAMA_MODELS OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1...
```

To convert requests from the OpenAI format to the one supported by Ollama, you only need to set the base URL parameter, for example `api_base: http://localhost:8000/v1`:

```
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
...
```
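For concreteness, here is a minimal sketch of such a shim, assuming Ollama is running on its default port 11434 and that callers send standard OpenAI-style chat-completion requests to port 8000. The handler name, the response wrapping, and the port choices are illustrative assumptions, not taken from the original comment.

```
# Hypothetical sketch: accept OpenAI-style chat-completion requests on
# http://localhost:8000/v1 and forward them to a local Ollama server's
# native /api/chat endpoint (stdlib only, non-streaming).
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default native endpoint

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        openai_req = json.loads(self.rfile.read(length))

        # Ollama's /api/chat also takes {"model": ..., "messages": [...]},
        # so only a light translation is needed; streaming is disabled here.
        ollama_req = {
            "model": openai_req["model"],
            "messages": openai_req["messages"],
            "stream": False,
        }
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(ollama_req).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            ollama_resp = json.loads(resp.read())

        # Re-wrap Ollama's reply in the OpenAI response shape the caller expects.
        openai_resp = {
            "object": "chat.completion",
            "model": ollama_resp.get("model", openai_req["model"]),
            "choices": [{
                "index": 0,
                "message": ollama_resp.get("message", {}),
                "finish_reason": "stop",
            }],
        }
        body = json.dumps(openai_resp).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8000), ProxyHandler).serve_forever()
```

With this running, pointing `api_base` at `http://localhost:8000/v1` routes OpenAI-format requests to the local Ollama server. Note that recent Ollama releases also expose an OpenAI-compatible `/v1` endpoint directly, so in many cases `api_base` can point straight at `http://localhost:11434/v1` without any shim.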

Is that OK? I would also like to switch to a locally run, Ollama-supported model.